Search Results

Search found 4993 results on 200 pages for 'experts direct'.

Page 116/200 | < Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >

  • Nvidia GeForce GT 520-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi, I hope you can help a bit; I appreciate your time. I have this desktop: an i7 2600, 8 GB DDR3 RAM, an Intel DH61WW board, and an Nvidia GeForce GT 520-CN with 2 GB DDR3. I have just installed 64-bit Ubuntu 12.04 (kernel 3.2.0-23-generic), and I want to set up two Samsung 22" LED monitors and get my video card working. 1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17). I ran apt-get update and upgrade, apt-get install build-essential linux-headers-$(uname -r), apt-get remove --purge nvidia*, and apt-get remove --purge xserver-xorg-video-nouveau; added blacklist vga16fb, blacklist nouveau, blacklist rivafb, blacklist nvidiafb and blacklist rivatv to /etc/modprobe.d/blacklist.conf; then ran sh NVIDIA.run, sudo service lightdm start, rebooted, and ran nvidia-xconfig. 2) After rebooting I get 800x600, and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server." 3) I changed xorg.conf a little to set a resolution that works properly. 4) Now I get no image on the monitor and no options in Nvidia X Server Settings. Output of lspci | grep VGA:
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)
    Output of egrep -i 'glx|nvidia' /var/log/Xorg.0.log:
        [ 12.005] (II) LoadModule: "glx"
        [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [ 12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [ 12.585] (II) NVIDIA GLX Module  302.17  Tue Jun 12 16:22:45 PDT 2012
        [ 12.585] (II) Loading extension GLX
        [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)
    Output of glxinfo | grep direct:
        Xlib:  extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig
    Sorry, my English is not very good. Thanks, guys.
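    A pattern worth noting in this setup, offered as a sketch rather than a confirmed fix: with both the Intel integrated GPU (00:02.0) and the discrete GT 520 (01:00.0) active, X can bind to the wrong device, which would match the "Compatible NVIDIA X driver not found" error above. One common remedy is to pin the nvidia driver to the card's PCI bus ID in the Device section of /etc/X11/xorg.conf (the Identifier string is arbitrary):

        Section "Device"
            Identifier "GeForce GT 520"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection

    Selecting the discrete card as the primary display adapter in the DH61WW's BIOS, with both monitors plugged into the Nvidia card's outputs, is the other half of the usual checklist.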

    Read the article

  • Add an entry for Ubuntu on Windows 8 boot loader

    - by John
    I have installed Ubuntu 12.10 by creating free space in Windows 8 and then using that space to create 3 partitions: one for swap, one for GRUB (mount point /boot) and one for the actual OS. I did this so the Windows 8 boot loader wouldn't be overwritten in case I ever wanted to remove Ubuntu. I can still boot into Ubuntu if I select its boot loader from the BIOS. I want to add Ubuntu to the Windows 8 boot loader, and I've been told to use EasyBCD. The issue with that is it doesn't actually direct Windows to the GRUB file, but rather to something like autogrub0.mri. I have found another programme called Visual BCD which will allow me to actually set the boot loader paths and drives. From here, I don't quite know what to do. I believe I have it set to the correct drive, but I don't know if I'm directing it to the right file. I think it's /boot/grub/x86_64-efi/grub.efi. I don't know if that's the right file, whether I need to remove /boot, or whether the forward slashes need to be backslashes, since that's what Windows uses. Sorry for such a lengthy post, please help!
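    For reference, a sketch of doing this with Windows' own bcdedit instead of a third-party tool, assuming a UEFI install where Ubuntu's loader sits at \EFI\ubuntu\grubx64.efi on the EFI System Partition (the usual location for a 64-bit EFI install; verify yours before relying on it):

        bcdedit /copy {bootmgr} /d "Ubuntu"
        rem bcdedit prints the GUID of the new entry; substitute it below
        bcdedit /set {new-guid} path \EFI\ubuntu\grubx64.efi

    Note the backslashes: BCD paths follow Windows conventions and are relative to the EFI System Partition, not to the Ubuntu /boot partition, which also answers the slash question above.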

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is about much more than optimizing project performance and eliminating failure on new projects, capital programs, and the like. A very common use case we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration, so that work orders from Oracle's Enterprise Asset Management (eAM) system populate Primavera P6 project schedules. Once your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in the Primavera tools to assign work orders, maximize resource utilization and reuse templates for typical O&M operations, then share the results back to the operations teams using eAM for maintenance. Large-scale maintenance on major assets in the asset lifecycle also includes phase-outs, shutdowns and turnarounds, classic maintenance projects as opposed to building something new, and Oracle Primavera with Oracle E-Business Suite provides full coverage to optimize these ALM processes in your business. Read more about these new capabilities in the Oracle eAM data sheet.

    Read the article

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E. (Image generated with the help of NCTM Graph Creator.) I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):
      - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
      - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
      - If two computers are connected by more than one cable, each user may own no more than one of those cables.
    I'll need to do several operations on this graph, such as:
      - determining whether any particular pair of computers is connected for a given user
      - identifying the optimal path for a given user to connect target computers
      - identifying the highest-latency computer connection for a given user (i.e. longest path without branching)
    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
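    To make the modified adjacency list concrete, here is a minimal C# sketch (all names are illustrative, not from the question): each node maps to a list of edge objects, parallel cables are simply distinct objects in that list, and ownership lives on the edge, so "restricted-use" falls out naturally.

        using System.Collections.Generic;
        using System.Linq;

        // An edge keeps its own identity, so two parallel cables between
        // the same pair of nodes are two distinct Edge objects.
        class Edge
        {
            public int NodeA, NodeB;
            public double Length;
            public string Owner;   // null while the cable is unclaimed

            // The endpoint opposite the one we arrived from, for traversal.
            public int Other(int node) { return node == NodeA ? NodeB : NodeA; }
        }

        class Network
        {
            // Adjacency list: node id -> edges touching that node.
            // A multi-edge appears once in each endpoint's list.
            private readonly Dictionary<int, List<Edge>> adjacency =
                new Dictionary<int, List<Edge>>();

            public void AddEdge(Edge e)
            {
                foreach (int n in new[] { e.NodeA, e.NodeB })
                {
                    List<Edge> list;
                    if (!adjacency.TryGetValue(n, out list))
                        adjacency[n] = list = new List<Edge>();
                    list.Add(e);
                }
            }

            // Edges a given user may still traverse: their own cables
            // plus any cable nobody has claimed yet.
            public IEnumerable<Edge> UsableEdges(int node, string user)
            {
                List<Edge> list;
                return adjacency.TryGetValue(node, out list)
                    ? list.Where(e => e.Owner == null || e.Owner == user)
                    : Enumerable.Empty<Edge>();
            }
        }

    Connectivity and shortest-path queries for one user then become ordinary BFS or Dijkstra runs restricted to UsableEdges, which also suggests an answer to the last question: per-user graph copies are unnecessary, since one shared structure filtered by owner gives each user their own effective view.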

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see how offshoring looks attractive as a business proposition (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is: does anyone here have any experience of offshoring actually working? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • TwinView broken on upgrade to Ubuntu 10.10

    - by mapkyca
    I had been on 9.10 for over a year, on the grounds that if it ain't broke, don't fix it. However, I had a spare weekend and figured it was probably about time... I performed an upgrade to 10.04, and everything seemed to proceed smoothly, so I took the plunge and went for 10.10. Disaster. My TwinView Nvidia display, which had been working perfectly, is now broken. On boot everything seems fine, but when X starts and the second monitor springs into life, the primary winks out and switches off, almost as if it has been put into an unsupported display mode. The system seems to think there's a second monitor (the Nvidia logo is split across the two screens) but it can't seem to start. Things I've tried:
      - Swapping the monitors (one is older than the other, and it's definitely the port, not the actual monitor)
      - Rolling back to an old xorg.conf from prior to the upgrade
      - Installing a non-beta driver direct from Nvidia (this seems to start both monitors but then apparently stops boot and causes the second display to 'wink'; TwinView seems non-functional, and both displays are mirrors)
      - Disabling EDID
      - Disabling TwinView, logging in and attempting to use the Nvidia config to re-detect the monitors (the second monitor is falsely detected and won't go higher than 1024x768; selecting 'apply' causes one screen to go blank and the other to display garbage)
      - Googling for about 5 hours looking for similar problems; none of the offered solutions seemed to work
    I'm at a loss, and it is looking very much like I'm going to have to go through a time-consuming reinstall to downgrade back to the working 10.04. Any thoughts?
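    If it comes to hand-editing xorg.conf again, the nvidia driver's TwinView settings live in the Screen section; a minimal sketch for comparison against the rolled-back config (the MetaModes resolutions here are placeholders for whatever the two monitors actually support):

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Option     "TwinView" "True"
            Option     "TwinViewOrientation" "RightOf"
            Option     "MetaModes" "1680x1050,1280x1024; 1024x768,1024x768"
        EndSection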

    Read the article

  • Best way to protect website application code

    - by Gaz_Edge
    Background: I have a web application that I host on my own server. I have clients who use the application as is, but some have asked if they can host the application on their own server, which would let them have their own URLs rather than mine. The application only forms part of their website, so I'm assuming it will not be possible for my server to respond to a direct call to their domain. To give some examples, I currently have URLs like www.mydomain.com/profile and www.mydomain.com/index.php?option=someoption&view=someview&id=1. What my clients want is www.theirdomain.com/profile, www.theirdomain.com/index.php?option=someoption&view=someview&id=1, etc. My question is: what is the best way for me to allow them to use their own URLs with my application, without giving them all the backend source code and databases to install on their server? One way I thought of would be to create a router.php file that sits on their server. The router asks my server to output the HTML; it then modifies all the links etc. in the HTML source and outputs the new HTML through the client's server. When a link is clicked on the client's site, the router receives the request and modifies the URL to get the data from my server, and so on. Is this an effective way to achieve what I want, or is it way off the mark?

    Read the article

  • How do I 'see' an external USB drive connected directly to my Broadband Router?

    - by The Cougar Kid
    This is a very frustrating problem! I have a small home network with several dual-boot Ubuntu/Windows computers. I have recently upgraded my broadband connection, and the new router permits the direct attachment of an external USB drive which can back up all of the household's computers. There are no problems when booted under Windows, and there were no problems with older versions of Ubuntu, but since upgrading to 11.10 I can no longer "see" the drive. I used to find it via Network / Windows Network / Home / name of router, but under 11.10 the same method yields an error message: "Unable to mount location. Failed to retrieve share list from server." Can anyone help please? Output of an nmap scan of the router:
        Starting Nmap 5.21 ( http://nmap.org ) at 2011-12-21 10:06 GMT
        Stats: 0:02:02 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
        Service scan Timing: About 50.00% done; ETC: 10:10 (0:01:56 remaining)
        Nmap scan report for 192.168.1.254
        Host is up (0.0097s latency).
        Not shown: 998 filtered ports
        PORT     STATE SERVICE    VERSION
        554/tcp  open  rtsp?
        7070/tcp open  realserver?
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 152.38 seconds
    Output of sudo tail -n 30 /var/log/syslog:
        Dec 21 10:05:42 UPSTAIRS2U wpa_supplicant[882]: WPA: Group rekeying completed with 00:01:3b:8b:63:1a [GTK=TKIP]
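    Two quick checks from the Ubuntu side that often narrow this kind of problem down (the share name below is a placeholder; substitute whatever the router calls its USB disk): list what the router actually exports, then try mounting the share explicitly rather than browsing to it:

        smbclient -L 192.168.1.254 -N
        sudo mkdir -p /mnt/routerdisk
        sudo mount -t cifs //192.168.1.254/usbdisk /mnt/routerdisk -o guest

    If smbclient lists the share but the mount fails, the error it prints is usually far more specific than the Nautilus "failed to retrieve share list" message.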

    Read the article

  • With Google Analytics, is it possible to check a specific page in Multi-Channel conversion attribution?

    - by Emmett R.
    I'm somewhat new to Google Analytics, and I'm trying to track all conversions that are assisted by a particular landing page, because I don't expect an instant purchase. I have e-commerce tracking set up. Due to the constraints of the associated ad campaign, I can't include the source/medium code in the URL when people go to the landing page, and all of my traffic to the landing page is likely to be direct, so I'm not sure how to tell Multi-Channel Funnels that it's a significant page. I know how to add events to a page, but I'm still figuring out what they can and cannot do. Would creating a redirect from the landing URL to an identical URL with the source/medium code appended work? Any advice on how to accomplish this would be greatly appreciated. Tracking the final sale conversion is not the issue; e-commerce reporting is functioning just fine on the site. I just want to report the landing page as an assist whenever it shows up in the funnel, and I need to be able to do that across multiple visits.
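    For what it's worth, the redirect idea should work in principle: Google Analytics attributes source and medium from utm parameters on the landing URL, so a self-redirect that appends them turns those direct visits into a named channel. A sketch of the redirect target (the campaign names are made up for illustration):

        http://www.example.com/landing-page?utm_source=print-ad&utm_medium=offline&utm_campaign=spring-promo

    Whether Multi-Channel Funnels then reports the page as an assisting interaction in the way you want is worth confirming in a test view before the campaign goes live.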

    Read the article

  • Is it worth replacing the mouse with a standalone trackpad for heavy code editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard; after some research I fell in love with the "mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown, if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy, feeling much more powerful and professional :) Well, while I re-read all the ancient debates about vim versus whatever else, the following excerpt from a blog post got me thinking again about mouse versus keyboard, and "moving around from the very home row" (in vim) versus gesturing away with the tiny and unstable mouse cursor: "Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on-screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse." Although there is some bitterness in this statement, it makes a lot of sense, and even more so if you consider your direct input to be a trackpad conveniently placed in front of your spacebar (which oddly is where I like to put my mouse, rotated 90° CCW, due to a tendonitis in my right shoulder, already healed, but you know...). So, the question is: has anyone replaced the mouse with a standalone trackpad for code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • The Evolution of Computer Keyboards

    - by Jason Fitzpatrick
    While the basic shape of keyboards has remained largely unchanged over the last thirty years, the guts have undergone several transformations. Read on to explore the history of the computer keyboard. ComputerWorld delves into the history of the modern keyboard, including the heavy influence IBM's extensive keyboard research had on early keyboards: As far as direct influences on the modern computer keyboard, IBM's Selectric typewriter was one of the biggest. IBM released the first model of its iconic electromechanical typewriter in 1961, a time when being able to type fast and accurately was a highly sought-after skill. Dag Spicer, senior curator at the Computer History Museum, notes that as the Selectric models rose to prominence, admins grew to love the feel of the keyboard because of IBM's dogged focus on making the ergonomics comfortable. "IBM's probably done more than anyone to find [keyboard] ergonomics that work for everyone," Spicer says. So when the PC hit the scene a decade or two later, the Selectric was largely viewed as the baseline to design keyboards for those newfangled computers you could put in your office or home. Hit up the link below to continue reading about how the Selectric influenced keyboards throughout the 1980s and what replaced the crisp clacking of early IBM-styled models.

    Read the article

  • Do you feel bad when you have to learn new things?

    - by tactoth
    New things are not always cool. I see many people say they are very bored by doing similar things day after day. For me it's the opposite: I'm always learning something new. During the last year and a half, nearly every two months I have needed to do lots of research on a totally new topic: RTMP, MP4, SIP, VNC, Smooth Streaming... I have to read lots of specifications and download tons of open source projects to understand the concepts, then turn them into runnable code of my own. And it has been so bad! My brain never gets to feel sure of and familiar with anything, and when it's close to being sure and familiar, it has to switch to the next thing. I kind of envy people who build higher-level applications, because they can stay very focused, and their knowledge set includes most things their job requires. Everything is quite measurable, direct and straightforward. Have you ever had a similar feeling? I'm thinking of asking my boss to assign me some other piece of work, so that I work like moving forward on a broad road instead of figuring out a way in the dark. I think it'll be more relaxing. Any suggestions?

    Read the article

  • Is C# development effectively inseparable from the IDE you use?

    - by Ghopper21
    I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I keep getting caught up on one point: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question. In short: in C#, using foo doesn't tell you what names from foo are being made available, which is analogous to from foo import * in Python -- a form that is discouraged within Python coding culture for being implicit, rather than the more explicit approach of from foo import bar. I was rather struck by the Stack Overflow answers to this point from C# programmers, which were that in practice this lack of explicitness doesn't really matter, because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. For example: "Now, in theory I realise this means when you're looking with a text editor, you can't tell where the types come from in C#... but in practice, I don't find that to be a problem. How often are you actually looking at code and can't use Visual Studio?" This is revelatory to me. Many Python programmers prefer a text editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point. And I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?
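    To make the point concrete, a two-line C# illustration (System.IO is an arbitrary choice of namespace): a plain using directive pulls in every name in the namespace without saying which ones are used, while the alias form is the nearest C# gets to Python's from foo import bar:

        using System.IO;              // like "from System.IO import *": which names? unstated
        using Path = System.IO.Path;  // alias directive: one name, spelled out explicitly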

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). I can see (using Google Chrome) that the sites do a 302 redirect at some point, presumably via some sort of .htaccess rule, taking the subdomain name (customname), applying it as a referral parameter, and then keeping it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each one of these "subdomains" not treated as a separate URL which in turn is redirected to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? A true example of this (without the referral aspect) is http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how sfgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)?
      1. I know these are 302 redirects; I can see this in the site's network flow.
      2. These links do in fact appear on the page itself, because in some areas the bottom of the page may say: "Send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name, so the id value can be stored in session.
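    On the indexing half of this, one standard mechanism covers most of it (stated as general SEO practice, not knowledge of any particular site's setup): a canonical link on the destination page. Whatever subdomain or referral variants Google happens to crawl, they consolidate onto the single canonical URL instead of competing as duplicates:

        <link rel="canonical" href="http://www.site.com/" />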

    Read the article

  • Loading images in XNA 4.0; "Cannot Open File" Problems

    - by user32623
    Okay, I'm writing a game in C#/XNA 4.0 and am utterly stumped at my current juncture: sprite animation. I understand how it works and have all the code in place, but my ContentLoader won't open my file... Basically, my directory looks like this:
        WindowsGame1/
            Game1.cs
            Classes/
                NPC.cs
            Content (reference)/
                Images/
                    Monster.png
    Inside my NPC class, I have all the essential drawing functions, i.e. LoadContent, Draw, Update. And I can get the game to find the correct file and attempt to open it, but when it tries, it throws an exception and tells me it can't open the file. This is how my code in my NPC class looks:
        Texture2D NPCImage;
        Vector2 NPCPosition;
        Animation NPCAnimation = new Animation();

        public void Initialize()
        {
            NPCAnimation.Initialize(NPCPosition, new Vector2(4, 4));
        }

        public void LoadContent(ContentManager Content)
        {
            NPCImage = Content.Load<Texture2D>("_InsertImageFilePathHere_");
            NPCAnimation.AnimationImage = NPCImage;
        }
    The rest of the code is irrelevant at this point because I can't even get the image to load. I think it might have to do with a directory problem, but I also know little to nothing about spriting or working with images or animations in my code. Any help is appreciated. Not sure if I provided enough information here, so let me know if more is needed! Also, what would be the correct way to direct that Content.Load to Monster.png, given the current directory situation? Right now I just have it using the full path from the C:// drive. Thanks in advance!
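    For what it's worth, the XNA content pipeline loads assets by asset name, not by file path: relative to the content project's root, forward slashes, no file extension, and never an absolute C:\ path. Assuming the template-default root of "Content" and the folder layout above, the standard pattern looks like this (a sketch of the usual fix, not tested against this exact project):

        // In Game1's constructor (the XNA project template sets this by default):
        Content.RootDirectory = "Content";

        // In NPC.LoadContent: the asset name for Content/Images/Monster.png
        NPCImage = Content.Load<Texture2D>("Images/Monster");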

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:
        User-Agent: *
        Disallow: /email
    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address-harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that, and how to make sure that none of the disallowed pages will show up in their search results. P.S. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them.
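    For readers hitting the same surprise, the usual explanation is this: robots.txt controls crawling, not indexing, so Google can still index a disallowed URL it discovers through external links, building the result title from anchor text and link context without ever fetching the page. The standard remedy is to let the page be crawled and serve an explicit noindex instead, either as a meta tag or, for non-HTML responses, an HTTP header:

        <meta name="robots" content="noindex">
        X-Robots-Tag: noindex

    Note that the noindex only takes effect if the URL is not also disallowed in robots.txt, since Google has to fetch the page to see it.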

    Read the article

  • How Do You Get Answers?

    - by Grant Fritchey
    We all learn differently. Some people really prefer to sit with a book. Others need classroom time with an instructor. Still others just want to do things themselves until they figure them out. And sometimes, it's all of the above. Since we all learn differently, is it any surprise that we ask questions differently? Some of us are going to want to phrase a very focused question that should yield a limited and concise answer. Others are going to want to have a long, involved discussion with possible variations. There are those among us who will want to know whether or not their peers agree with a given answer. Still more are going to want to see lots of points of view dished out so they can understand things in a different way. Many of us want to know, when a question has been asked, that there is a hard, well-defined answer. The rest are willing to read through a bevy of answers and discussion, deciding for ourselves who has come up with the best solution. Where am I going with this? Excellent question. Since we all ask questions in different ways, isn't it great that we have places to go that let us ask questions and get answers in a way that's best suited to our individual preferences? Do you want the long-running discussion format? Then you should be hanging out on the forums over at SQL Server Central. Do you want specific answers with direct peer evaluation of the strength of those answers? Then you should be hanging out on the forums over at Ask SQL Server Central. You can get answers to your questions, and do it in a way that's most comfortable for you.

    Read the article

  • Continuous Retraining Tutorials

    - by foampile
    I am looking for an online resource where you can, in effect, design your future professional profile, and it would provide a set of tutorials for you to complete to get a basic level of familiarity with the related technologies. One of my professional problems is my learning style: I can learn either by direct hands-on experience or by following a rigid training program that goes in a linear progression. I have a hard time learning in a multidimensional environment where the biggest challenge is to determine what needs to be learned and how to pick from a ton of books, and the smallest problem is to go through the actual material. So I am looking for a reputable source that will knock out those two confusing questions for me, so I can kick back and continuously upgrade my skills without having to worry about the what and how myself. I have found some decent online tutorials for various technologies, but I have never found a single place that has all or most developer-education tutorials following the same or a similar interface. I am kind of a lazy learner and would rather follow confirmed learning steps than figure out my own education path, just to realize down the road that I did it all wrong. Is there a tutorial mega-boutique like that online?

    Read the article

  • What to do if you find a vulnerability in a competitor's site?

    - by user17610
    While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, run any script on the competitor's website. My natural feeling is to report the issue to them in a spirit of goodwill. Exploiting the issue to gain advantage crossed my mind, but I don't want to go down that path. So my question is: would you report a serious vulnerability to your direct competition, in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue? Update (clarification): Thanks for all your feedback so far; I appreciate it. Would your answers change if I were to add that the competition in question is a behemoth in the market (hundreds of employees on several continents), and my company only started a few weeks ago (three employees)? It goes without saying that they most definitely will not remember us, and if anything will only realize that their site needs work (which is why we entered this market in the first place). I confess this is one of those moral vs. business toss-ups, but I appreciate all the advice.

    Read the article

  • Is SEO affected negatively by having densely encoded identifiers of content in URLs?

    - by casperOne
    This isn't about where to put the id of a piece of unique content in URLs, but more about densely packing the URL (or, does it just not matter). Take for example, a hypothetical post in a blog: http://tempuri.org/123456789/seo-friendly-title The ID that uniquely identifies this is 123456789. This corresponds to a look-up and is the direct key in the underlying data store. However, I could encode that in say, hexadecimal, like so: http://tempuri.org/75bcd15/seo-friendly-title And that would be shorter. One could take it even further and have more compact encodings; since URLs are case sensitive, one could imagine an encoding that uses numbers, lowercase and uppercase letters, for a base of 62 (26 upper case + 26 lower case + 10 digits): 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz For a resulting URL of: http://tempuri.org/8M0kX/seo-friendly-title The question is, does densely packing the ID of the content (the requirement is that an ID is mandatory for look-ups) have a negative impact on SEO (and dare I ask, might it have any positive impact), or is it just not worth the time? Note that this is not for a URL shortening service, so saving space in the URL for browser limitation purposes is not an issue.
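    Whatever the SEO verdict, the encoding itself is cheap. A minimal C# sketch of the base-62 conversion used in the example (the alphabet matches the one above, digits first, then upper case, then lower case, so 123456789 encodes to 8M0kX):

        using System.Text;

        static class Base62
        {
            const string Alphabet =
                "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

            public static string Encode(long id)
            {
                if (id == 0) return "0";
                var sb = new StringBuilder();
                while (id > 0)
                {
                    sb.Insert(0, Alphabet[(int)(id % 62)]); // build from the least significant digit
                    id /= 62;
                }
                return sb.ToString();
            }
        }

        // Base62.Encode(123456789) == "8M0kX"

    Decoding is the symmetric loop (multiply by 62, add the alphabet index), so the lookup key survives the round trip regardless of what the crawler makes of the URL.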

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup:
      1. Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser-language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location?
      2. The 3 websites are practically identical, except that they have 3 unique color schemes, and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
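    Two general mechanisms address these, described as standard practice rather than a site-specific diagnosis. For the duplicate-content worry, hreflang annotations tell Google the three directories are regional variants of one site, including distinguishing es-MX from es-DO; each page lists all of its alternates, for example:

        <link rel="alternate" hreflang="en-US" href="http://www.xyz.com/en-US/" />
        <link rel="alternate" hreflang="es-MX" href="http://www.xyz.com/es-MX/" />
        <link rel="alternate" hreflang="es-DO" href="http://www.xyz.com/es-DO/" />

    For the root redirect, browser language alone genuinely cannot separate Mexico from the Dominican Republic; an IP-based geolocation lookup (a GeoIP database on the server, for instance) is the usual way to choose between the two Spanish variants, ideally with a visible link for switching countries in case the guess is wrong.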

    Read the article

  • Client-server application design issue

    - by user2547823
    I have a collection of clients on the server's side, and there are some objects that need to work with that collection: adding and removing clients, sending messages to them, updating connection settings, and so on. They should perform these actions simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope this code sample makes it clearer (C++):
        class Collection {
            std::vector< Client* > clients;
            Mutex mLock;
            ...
        };

        class ClientNotifier {
            void sendMessage() {
                mLock.lock();
                // loop over clients and send a message to each of them
            }
        };

        class ConnectionSettingsUpdater {
            void changeSettings( const std::string& name ) {
                mLock.lock();
                // if a client with this name is inside the collection, change its settings
            }
        };
    As you can see, all these classes require direct access to the Collection's private fields. Can you give me some advice on how to implement such behaviour correctly, i.e. keeping the Collection's interface simple, without it having to know about its users?

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that'll heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., visitor 1 spends 2 minutes on site, visitor 2 spends 1 minute -- mean 1.5 +/- 0.71...):
      - time spent on site
      - page views
    If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that even though the distributions of these quantities are probably heavily skewed towards zero, we'll need some reasonable figures or estimates of these figures in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I certainly have had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!), that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info'll be applied towards a good cause :)

    Read the article

  • XNA 2D vehicle wall collisions

    - by mike
    I am attempting to implement collisions for my truck game, where the truck can drive around the world and hit walls surrounding the level and various randomly placed walls within the level. I am able to get direct collisions working correctly. However, it is getting very complicated and tricky very quickly. I am trying to accommodate various other collisions, such as when a truck is against the wall and then turns to face an adjacent direction, or when the player reverses into a wall. Both of these result in a slight collision as the image of the truck flips around to the direction the player wants to move. All of this has resulted in a whole lot of if statements to check how I should be fixing the collision, which in turn makes the player jump to random locations and "teleport" around corners, etc. The rest of my game is fine; I am not completely new to game development or C# for that matter. It's just the logic of collisions. Any ideas on how I can approach this? Image of the collisions I am trying to resolve: http://tinypic.com/r/2qtflvq/6
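    One pattern that usually dissolves this kind of teleporting, sketched in C# under assumed names (a position field on the truck, walls as XNA Rectangles), is to stop correcting collisions after the fact and instead test a proposed position before committing it, with the collision box kept independent of which way the sprite is drawn:

        // Axis-aligned box around a proposed position; deliberately independent
        // of the truck's facing, so turning in place can never cause a collision.
        Rectangle BoundsAt(Vector2 pos)
        {
            return new Rectangle((int)pos.X, (int)pos.Y, truckWidth, truckHeight);
        }

        public void Update(List<Rectangle> walls, Vector2 movement)
        {
            Vector2 proposed = position + movement;

            // Commit the move only if the new box is clear; otherwise stay put.
            // Facing changes are applied unconditionally, since they no longer
            // affect the collision box.
            if (!walls.Exists(w => w.Intersects(BoundsAt(proposed))))
                position = proposed;
        }

    Testing the X and Y components of the movement separately, in two passes, lets the truck slide along a wall instead of stopping dead, and removes most of the special-case if statements.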

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems sort of strange, since our target audience for the site is UK based, and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, so it seems kinda strange. Later on in the day we are getting traffic from the UK, as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:
      - sitemap.xml
      - proper caching policies across the board
      - Google Merchant
      - Dublin Core microdata
      - HTML5
      - pretty URLs
      - meta and content are reviewed as an ongoing concern
      - decent sitelinks for direct queries through Google on the site name
      - a decent amount of inbound links
      - FB, Twitter, Google +1
      - Google Maps listing [verified]
    The site has been selling for ~4 months and is getting ~250 users per day, so I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article
