Search Results

Search found 20388 results on 816 pages for 'nvidia current'.


  • After upgrading to 12.04 from 10.10 my mythbuntu standard MCEUSB remote no longer works

    - by keepitsimpleengineer
    I had no problems using my Windows Media Center remote with Mythbuntu 10.10, but after upgrading it no longer controls Mythbuntu. I have verified and re-installed it in Mythbuntu Control Centre, and I have used irw to verify that the IR button presses are properly received by the HTPC. How do I go about fixing this?

    Kernel: 3.2.0-26-generic (#41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012)
    Xorg version: 1.11.3 (16 July 2012 08:06:31PM)
    GCC: 4.6 (x86_64-linux-gnu)
    Current updates as of 2012-07-21

    $ cat /etc/lirc/hardware.conf

      #Chosen Remote Control
      REMOTE="Windows Media Center Transceivers/Remotes (all)"
      REMOTE_MODULES="lirc_dev mceusb"
      REMOTE_DRIVER=""
      REMOTE_DEVICE="/dev/lirc0"
      REMOTE_SOCKET=""
      REMOTE_LIRCD_CONF="mceusb/lircd.conf.mceusb"
      REMOTE_LIRCD_ARGS=""

      #Chosen IR Transmitter
      TRANSMITTER="None"
      TRANSMITTER_MODULES=""
      TRANSMITTER_DRIVER=""
      TRANSMITTER_DEVICE=""
      TRANSMITTER_SOCKET=""
      TRANSMITTER_LIRCD_CONF=""
      TRANSMITTER_LIRCD_ARGS=""

      #Enable lircd
      START_LIRCD="true"

      #Don't start lircmd even if there seems to be a good config file
      #START_LIRCMD="false"

      #Try to load appropriate kernel modules
      LOAD_MODULES="true"

      # Default configuration files for your hardware if any
      LIRCMD_CONF=""

      #Forcing noninteractive reconfiguration
      #If lirc is to be reconfigured by an external application
      #that doesn't have a debconf frontend available, the noninteractive
      #frontend can be invoked and set to parse REMOTE and TRANSMITTER
      #It will then populate all other variables without any user input
      #If you would like to configure lirc via standard methods, be sure
      #to leave this set to "false"
      FORCE_NONINTERACTIVE_RECONFIGURATION="false"
      START_LIRCMD=""

    # lsusb | grep -i infrared
    Bus 003 Device 002: ID 0471:0815 Philips (or NXP) eHome Infrared Receiver
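    A hedged first check (paths below are the usual Ubuntu 12.04 defaults, not verified against this box): restart lircd and point irw at the socket MythTV is expected to read from; if the buttons show up there but MythTV still ignores them, the frontend's lircrc mapping is the next suspect.

        # Restart lircd and watch the socket MythTV reads from
        sudo service lirc restart
        irw /var/run/lirc/lircd
        # If events appear here, check the MythTV lircrc mapping next
        # (commonly ~/.lirc/mythtv or ~/.mythtv/lircrc on Mythbuntu)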

    Read the article

  • Can I monitor active user count on my IIS sites?

    - by Dejan.S
    We are having performance problems on the server that hosts our websites; the processor gets up to 90%. I would like to monitor the number of users active on the sites published on IIS. My question: is this possible? Is there any software for this?

    EDIT: I mean the current (as in, this second) visitor count on all the active websites on our IIS.

    REASON FOR THIS: if I can get the visitor count on the days the CPU is not overloaded and compare it to the days when it is, then I at least know that this CAN be a reason for what is happening and I can take it from there. Otherwise I can focus on the code on the sites, or maybe the Google crawler is causing this; there are many things that can cause it. For me this is just a simple way of troubleshooting.

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I'd like to share my response as I think it warrants a wider discussion. Sutha asked:

        Jamie, what is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/
        Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy?
        Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they have been deployed to Live.
        Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to Live without releasing Pkg 4 & 5. How do we do it?
        Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/
        Thanks, Sutha

    My response:

        Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly.

    I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view?

    @Jamiet

    Read the article

  • Lending Club Selects Oracle ERP Cloud Service

    - by Di Seghposs
    Another Oracle ERP Cloud Service customer turning to Oracle to help increase efficiencies and lower costs!!

    Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle Fusion Financials to help improve decision-making and workflow, implement robust reporting, and take advantage of the scalability and cost savings provided by the cloud. After an extensive search, Lending Club selected Oracle due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service. Since their online lending platform is internally developed, they chose Oracle Fusion Financials in the cloud to easily integrate with current systems, keep IT resources focused on the organization's own platform, and reap the benefits of lowered costs in the cloud. The automation, communication and collaboration features in Oracle ERP Cloud Service will help Lending Club achieve their efficiency goals through better workflow, as well as provide greater control over financial data. Lending Club is also implementing robust analytics and reporting to improve decision-making through embedded business intelligence.

    "Oracle Fusion Financials is clearly the industry leader, setting an entirely new level of insight and efficiencies for Lending Club," said Carrie Dolan, CFO, Lending Club. "We are not only incredibly impressed with the best-of-breed capabilities and business value from our adoption of Oracle Fusion Financials, but also the commitment from Oracle to its partners, customers, and the ongoing promise of innovation to come."

    Resources:
    - Oracle ERP Cloud Service Video
    - Oracle ERP Cloud Service Executive Strategy Brief
    - Oracle Fusion Financials
    - Quick Tour of Oracle Fusion Financials

    If you haven't heard about Oracle ERP Cloud Service, check it out today!

    Read the article

  • Inline Image in ASP.NET

    - by Ricardo Peres
    Inline images are a technique that, instead of referring to an external URL, includes all of the image's content in the HTML itself, in the form of a Base64-encoded string. It avoids a second browser request, at the cost of making the HTML page slightly heavier and not using the cache. Not all browsers support it, but current versions of IE, Firefox and Chrome do. In order to use inline images, you must write the img element's src attribute like this:

        <img src="data:image/gif;base64,R0lGODlhEAAOALMAAOazToeHh0tLS/7LZv/0jvb29t/f3//Ub//ge8WSLf/rhf/3kdbW1mxsbP//mf///yH5BAAAAAAALAAAAAAQAA4AAARe8L1Ekyky67QZ1hLnjM5UUde0ECwLJoExKcppV0aCcGCmTIHEIUEqjgaORCMxIC6e0CcguWw6aFjsVMkkIr7g77ZKPJjPZqIyd7sJAgVGoEGv2xsBxqNgYPj/gAwXEQA7"
            width="16" height="14" alt="embedded folder icon"/>

    The syntax is: data:[<mediatype>][;base64],<data>

    I developed a simple control that allows you to use inline images in your ASP.NET pages. Here it is:

        public class InnerImage : Image
        {
            protected override void OnInit(EventArgs e)
            {
                String imagePath = this.Context.Server.MapPath(this.ImageUrl);
                String extension = Path.GetExtension(imagePath).Substring(1);
                Byte[] imageData = File.ReadAllBytes(imagePath);

                this.ImageUrl = String.Format("data:image/{0};base64,{1}", extension, Convert.ToBase64String(imageData));

                base.OnInit(e);
            }
        }

    Simple, don't you think?
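    For static assets you can produce the same payload outside of ASP.NET. A minimal shell sketch, assuming GNU base64 (the -w0 option) and a hypothetical icon.gif:

        # Emit a data URI for icon.gif (hypothetical file); -w0 disables line wrapping
        printf 'data:image/gif;base64,%s\n' "$(base64 -w0 icon.gif)"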

    Read the article

  • Accessing host LVM partition from Windows XP through virt-manager 0.8.5 / Qemu / KVM

    - by Nico de Smidt
    Hi, the requested use case is a Windows XP SP3 guest running in 64-bit Ubuntu (Linux pcs 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux). I want this guest to access an LVM LV on the Ubuntu disk. I've set up the following LVM config:

        --- Logical volume ---
        LV Name                /dev/storage/sdc1
        VG Name                storage
        LV UUID                Zg5IMC-OlqB-prL5-fgg4-3A9A-OgKP-oZ0QkJ
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                1.01 GiB
        Current LE             259
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           251:3

    1) I've set up a storage pool for /dev/storage
    2) I've run mkfs.vfat /dev/storage/sdc1
    3) I've made a virtual IDE disk in the virt-manager setup for the guest:
       Target device: IDE Disk 2
       Source path: /dev/storage/sdc1

    Now when running XP (guest), Windows sees a new disk in Disk Manager and wants to install a partition on it, since it believes the drive is empty. After formatting from within Windows I can put data on the new disk volume.

    Back in Ubuntu, however, I cannot access this any more, since Windows created a partition within an LVM logical volume. Running fdisk -l shows the following:

        root@pcs:/media# fdisk -l /dev/storage/sdc1

        Disk /dev/storage/sdc1: 1086 MB, 1086324736 bytes
        32 heads, 63 sectors/track, 1052 cylinders
        Units = cylinders of 2016 * 512 = 1032192 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x8d72e4f4

                     Device Boot   Start     End     Blocks   Id  System
        /dev/storage/sdc1p1            1     1050   1058368+   c  W95 FAT32 (LBA)

    which seems fine to me, but when trying to mount /dev/storage/sdc1p1 I get the following error:

        mount /dev/storage/sdc1p1 /media/xp
        mount: special device /dev/storage/sdc1p1 does not exist

    which makes sense, since sdc1p1 does not exist in lvdisplay.

    Main question: I want to mount the vfat partition in both Ubuntu and XP. What am I missing here?

    Regards, and thanks for your consideration. Nico
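    One possible way to expose that nested partition on the host, offered as a sketch only: the kpartx tool (from the kpartx/multipath-tools package) can map partitions found inside a block device to /dev/mapper entries. The exact mapper name it creates is an assumption and may differ on this system.

        # Map the partitions found inside the LV to /dev/mapper entries
        sudo kpartx -av /dev/storage/sdc1
        # Mount the mapped first partition (name reported by kpartx may vary)
        sudo mount /dev/mapper/storage-sdc1p1 /media/xp
        # When done, unmount and remove the mappings before handing the LV back to the guest
        sudo umount /media/xp
        sudo kpartx -dv /dev/storage/sdc1

    The usual caveat applies: do not have the host and the running guest mount the same FAT volume at the same time, or the filesystem will be corrupted.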

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion about general ETL best practices, the subject of checkpoint files as a means for package restartability came up and I stated that I was dead against using them. For anyone that may care, here is why:

    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - they don't make any allowance for loop iterations
    - they cannot store variables of type Object
    - they are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - they are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - they don't work properly

    I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce but it does happen, as my good friend John Welch will hopefully concur (if he is reading).

    Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so.

    @Jamiet

    Read the article

  • Beta Soon Closing: Java SE 7 Programmer I (OCA) Exam

    - by Harold Green
    Just a reminder that you still have the next several weeks to take the beta exam for the new "Oracle Certified Associate, Java SE 7 Programmer" certification. From now through December 16th, you can take the "Java SE 7 Programmer I" exam (1Z1-803) for only $50 USD. Not only that, but because this is a single-exam certification, passing it puts you among the very first certified on the new Java SE 7 platform!

    You'll be happy to note that we worked hard to raise the bar for OCA as we built the Java SE 7 certification. The content that we considered to be more 'conceptual knowledge-based' has been eliminated at the OCA level and has been replaced with far more practical content - what we often call "practitioner-level" concepts and questions. In fact, some of the topics that we previously covered at the Oracle Certified Professional (OCP) level are now covered at the OCA level. Doing this not only increases the value of the Java SE 7 OCA certification, but also has provided the opportunity for us to broaden the topics, concepts, and questions covered at the OCP certification level. All of this adds up to more value and credibility for those who get certified on Java SE 7.

    The OCA exam doesn't have prerequisites, but it is very important that you carefully review the test objectives on the exam page and assess your current skills and knowledge against that list to be sure that you're ready. From the exam page you can register to take the exam at a Pearson VUE testing center near you. Below are some helpful details on the certification track and exam. Again, register now - just a few weeks left at the special low beta price!

    QUICK LINKS:
    - Certification Track: Oracle Certified Associate (OCA), Java SE 7 Programmer
    - Certification Exam: Java SE 7 Programmer I (1Z1-803)
    - Video: Coming Soon - Java SE 7 Certification
    - Info: About Beta Exams
    - Exam Registration: Instructions | Register Here

    Read the article

  • Getting live traffic/visitor analytics when using a reverse proxy

    - by jotto
    I'm in the process of implementing Varnish as a reverse proxy for a Ruby on Rails app. I'm using Google Analytics (a JS/client-side script that records visitor data), but it's several hours delayed, so it's useless for knowing what's going on now. I need at-a-glance live data that includes referring traffic and the current req/sec. Right now I am using a simple Rack middleware application to do the live stats (gist.github.com/235745), but if the majority of traffic hits Varnish, Rack will never be hit, so this won't work. The closest solution I've found so far is http://www.reinvigorate.net/ but it's in beta (there are also no implementation details on their front page). Does Varnish have traffic logs that I can custom-format to match my Apache logs so I can combine them, or will I have to roll my own JS implementation like GA that shows the data in real time?
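    Worth a hedged pointer: stock Varnish installs ship varnishncsa, which writes NCSA/Apache-style log lines for the requests Varnish serves, so the two logs can be merged or tailed side by side. The file paths below are assumptions.

        # Write Varnish-served requests in Apache/NCSA log format
        varnishncsa -a -w /var/log/varnish/access.log &
        # Tail both logs together for a rough live view
        tail -f /var/log/varnish/access.log /var/log/apache2/access.log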

    Read the article

  • Custom prompt doesn't work on Mac's terminal

    - by mareks
    I like to use a custom prompt (current path in blue) on my Unix machine:

        export PS1='\[\e[0;34m\]\w \$\[\e[m\] '

    But when I try to use it in the Mac's Terminal it doesn't work: it fails to detect the end of the prompt and overwrites it when I type commands. The same thing happens when I'm typing a long command: it wraps over the same line instead of starting a new one. I don't understand why, since I use bash on both machines. Any suggestions on how to remedy this?
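    A variant worth trying, as a sketch only and on the assumption that the escape sequences are the culprit: build the colours with tput so the right codes for the terminal are emitted, and keep every non-printing sequence inside \[ \]. Also note that macOS Terminal starts login shells, so the line belongs in ~/.bash_profile rather than ~/.bashrc.

        # Blue working directory, reset at the end; tput picks the codes for the current terminal
        BLUE="$(tput setaf 4)"
        RESET="$(tput sgr0)"
        export PS1="\[${BLUE}\]\w \$\[${RESET}\] "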

    Read the article

  • nano syntax highlighting not working for all languages

    - by Dejan
    I have a funny situation where I am unable to add custom highlighting definitions to my nano text editor. The funny thing is that the predefined ones work like a charm and can be edited. But I have created a new one for JS with:

        $ sudo touch js.nanorc
        $ sudo nano js.nanorc

    My current js.nanorc looks like this:

        syntax "JavaScript" "\.js$"
        color blue "\<[-+]?([1-9][0-9]*|0[0-7]*|0x[0-9a-fA-F]+)([uU][lL]?|[lL][uU]?)?\>"
        color blue "\<[-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+)([EePp][+-]?[0-9]+)?[fFlL]?"
        color blue "\<[-+]?([0-9]+[EePp][+-]?[0-9]+)[fFlL]?"
        color brightblue "[A-Za-z_][A-Za-z0-9_]*[[:space:]]*[(]"
        color black "[(]"
        color cyan "\<(break|case|catch|continue|default|delete|do|else|finally)\>"
        color cyan "\<(for|function|get|if|in|instanceof|new|return|set|switch)\>"
        color cyan "\<(switch|this|throw|try|typeof|var|void|while|with)\>"
        color cyan "\<(null|undefined|NaN)\>"
        color brightcyan "\<(true|false)\>"
        color green "\<(Array|Boolean|Date|Enumerator|Error|Function|Math)\>"
        color green "\<(Number|Object|RegExp|String)\>"
        color red "[-+/*=<>!~%?:&|]"
        color magenta "/[^*]([^/]|(\\/))*[^\\]/[gim]*"
        color yellow ""(\\.|[^"])*"|'(\\.|[^'])*'"
        color magenta "\\[0-7][0-7]?[0-7]?|\\x[0-9a-fA-F]+|\\[bfnrt'"\?\\]"
        color brightblack "(^|[[:space:]])//.*"
        color brightblack start="/\*" end="\*/"
        color brightwhite,cyan "TODO:?"
        color ,green "[[:space:]]+$"
        color ,red " +"

    If anyone can see the problem then please tell me.
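    One thing that often explains "the predefined ones work, mine doesn't" - offered as an assumption, since the post doesn't say where js.nanorc was created: the predefined definitions are pulled in through include lines in a nanorc that nano actually reads, so a new file is ignored until it is included from ~/.nanorc or /etc/nanorc.

        # Add an include for the new definition (adjust the path to wherever js.nanorc lives)
        echo 'include "/usr/share/nano/js.nanorc"' >> ~/.nanorc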

    Read the article

  • What will happen with my RAID5 after motherboard change?

    - by abatishchev
    Currently I have an ASUS P5Q-EM and 3 HDDs in RAID5 using its on-board RAID controller, an Intel ICH10R. I want to buy a new motherboard, for example a Gigabyte GA-EQ45M-S2, which also has an on-board RAID controller, but an Intel ICH10DO. What will happen to my data on the RAID5? Will I have to re-create the array from scratch and lose all my data? Is such an array soft RAID or soft-hard? And what if my current motherboard breaks - what will happen to my data then?

    Read the article

  • mediatomb fails with "respawning too fast, stopped"

    - by felix
    When I try to start mediatomb it fails. I see this in dmesg:

        [...]
        [916349.374331] init: mediatomb main process ended, respawning
        [916349.394462] init: mediatomb main process (880) terminated with status 1
        [916349.394512] init: mediatomb main process ended, respawning
        [916349.414598] init: mediatomb main process (882) terminated with status 1
        [916349.414647] init: mediatomb respawning too fast, stopped

    My current /etc/init/mediatomb.conf looks like this:

        description "MediaTomb UPnP media server"
        author "Daniel van Vugt <vanvugt in launchpad>"

        start on (local-filesystems and net-device-up IFACE!=lo)
        stop on runlevel [!2345]
        respawn

        env CONFIGXML=/etc/mediatomb/config.xml
        env LOGFILE=/var/log/mediatomb.log
        env DEFAULT=/etc/default/mediatomb

        script
            [ -r $DEFAULT ] && . $DEFAULT
            [ ! $USER ] && USER=root
            [ ! $GROUP ] && GROUP=$USER
            if [ -n "$INTERFACE" ]; then
                INTERFACE_ARG="-e $INTERFACE"
                $ROUTE_ADD $INTERFACE
            fi
            exec mediatomb \
                -c $CONFIGXML \
                -u $USER \
                -g $GROUP \
                -l $LOGFILE \
                $INTERFACE_ARG \
                $OPTIONS
        end script

        post-stop script
            [ -r $DEFAULT ] && . $DEFAULT
            if [ -n "$INTERFACE" ]; then
                $ROUTE_DEL $INTERFACE
            fi
        end script
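    Since upstart only reports "terminated with status 1", a hedged next step is to run the same command the job runs, by hand, and read the log it writes. The values below assume the defaults file does not override USER/GROUP.

        # Reproduce the job's command in the foreground to see the actual error
        sudo mediatomb -c /etc/mediatomb/config.xml -u root -g root -l /var/log/mediatomb.log
        # Then inspect the log the job points at
        tail -n 50 /var/log/mediatomb.log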

    Read the article

  • Business Analyst role in development process

    - by Ryan
    I work as a business analyst and I currently oversee much of the development efforts of an internal project. I'm responsible for the requirements, specs, and overall testing. I work closely with the developers (onshore and offshore). The offshore team produces all of the reports. Version 1.0 had a 9 month development cycle and I had about 4-5 months to test all the reports. There was the usual back and forth to get the implementation right. Version 2.0 had a much shorter development cycle (3 months). I received the first version of the reports about 3 weeks ago and noticed a lot of things wrong with it. Many of the requirements were wrong and the performance of the queries was horrendous at 5x - 6x longer than it should have been. The onshore lead developer was out and did not supervise the offshore development team in generating the reports. Without consulting management, I took a look at the SQL in the reports and was able to improve performance greatly (by a factor of 6x) which is acceptable for this version. I sent the updated queries as guidelines to the offshore team and told them they should look at doing X instead of Y to improve performance and also to fix some specific logic issues. I then spoke to my managers about this because it doesn't feel right that I was developing SQL queries, but given our time crunch I saw no other way. We were able to fix the issue quite fast which I'm happy with. Current situation: the onshore managers aren't too pleased that the offshore team did not code for performance. I know there are some things I could have done better throughout this process and I do not in any way consider myself a programmer. My question is, if an offshore team that works apart from the onshore project resources fails to deliver an acceptable release, is it appropriate to clean up their work to meet a deadline? What kind of problems could this create in the future?

    Read the article

  • Semi Fixed-timestep ported to javascript

    - by abernier
    In Gaffer's "Fix Your Timestep!" article, the author explains how to decouple your physics loop from the rendering one. Here is the final code, written in C:

        double t = 0.0;
        const double dt = 0.01;

        double currentTime = hires_time_in_seconds();
        double accumulator = 0.0;

        State previous;
        State current;

        while ( !quit )
        {
            double newTime = time();
            double frameTime = newTime - currentTime;
            if ( frameTime > 0.25 )
                frameTime = 0.25; // note: max frame time to avoid spiral of death
            currentTime = newTime;

            accumulator += frameTime;

            while ( accumulator >= dt )
            {
                previousState = currentState;
                integrate( currentState, t, dt );
                t += dt;
                accumulator -= dt;
            }

            const double alpha = accumulator / dt;

            State state = currentState*alpha + previousState * ( 1.0 - alpha );

            render( state );
        }

    I'm trying to implement this in JavaScript but I'm quite confused about the second while loop... Here is what I have for now (simplified):

        ...
        (function animLoop(){
            ...
            while (accumulator >= dt) { // While? In a requestAnimation loop? Maybe if?
                ...
            }
            ...
            // render
            requestAnimationFrame(animLoop); // stands in for the 1st while loop [OK]
        }())

    As you can see, I'm not sure about the while loop inside the requestAnimation one... I thought of replacing it with an if, but I'm not sure it would be equivalent... Maybe someone can help me.

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs. However, what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense, since I/O is primarily a function of controllers and disks, not CPUs. But then no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto 1 logical core (2 threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?

    Read the article

  • Unix users and permissions and how they interact with web files.

    - by Columbo
    Hello, when you issue the command ls -l in Linux you get this sort of thing:

        drwxr--r--  1 fred  editors   4096  drafts
        -rw-r--r--  1 fred  editors  30405  file1.php
        -r-xr-xr-x  1 fred  fred      8460  file2.php

    I know that the rwxrwxrwx are the read, write and execute permissions for the current user, and I think I know that 'fred' is the user who owns the file. So I assume fred can write to file1 but no one else can. But what is the extra bit, 'editors', and what is the difference between file1 and file2 with respect to one having an ownership of 'fred editors' and the other 'fred fred'?

    Also, if a web user connects to one of the files, what is their user name and where is this decided? If the server decided that the user connecting from the web was going to be fred, does this mean any web user could write to file1?

    Any information welcomed; I am researching this but just getting confused. Thanks
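    For orientation, a small sketch (assuming a group named editors exists and fred owns the files): the three rwx triplets apply to the owner, the group, and everyone else in turn, so the 'editors' column is the group, and group membership is what lets non-owners at a file. A web server usually runs as its own user (commonly www-data on Debian/Ubuntu or apache on Red Hat), not as fred, so it only gets the "other" permissions unless it is added to the group.

        # Owner and group columns in the long listing
        ls -l file1.php
        # Let members of the editors group write to file1.php (owner stays fred)
        sudo chgrp editors file1.php
        sudo chmod g+w file1.php
        # See which groups a given user belongs to
        groups fred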

    Read the article

  • How to extend Wi-Fi signal across rooms?

    - by Sriram Krishnan
    I moved into a new place and my old Wi-Fi router just doesn't have the range to get into all the rooms. I've been investigating a lot of options and I'm wondering what other folks have done here. Moving my primary Wi-Fi router from its current location is not an option, thanks to our cable provider and our landlord. So far I'm considering:

    - Buy a bigger, beefier router (seems expensive). If so, should I go for one of those draft 802.11n ones to avoid microwave/other Wi-Fi router interference?
    - Set up a router with DD-WRT as a repeater.
    - Leech the neighbors' open Wi-Fi access point.

    Alright, I was kidding about the last one, but I'm genuinely curious as to what my best option is.

    Read the article

  • Animation cycles in 3ds max 2010

    - by user22902
    Hello, I'm new to animation. I have Max 2010. Basically I have a ball and it has keyframes where it bounces and moves 15 units forward I want a way to be able to add this animation as many times as I want so it bounces and always moves 15 more units. When I shift drag my keyframes, the ball moves 15 units, then quickly goes back and moves the same 15 units. I want it to be like a bip file where it can be added relative to the current position. (Essentially creating a generic ball bounce). Thanks

    Read the article

  • VirtualBox IE7 won't connect to the internet

    - by j-man86
    I use the IE7 & IE8 disk images from Microsoft and VirtualBox to test my websites in IE. One day I stopped being able to connect to the internet from VirtualBox, seemingly after trying to alter proxy-related settings to view Hulu.com outside of the U.S... I don't know whether it's a coincidence or not. Anyway, I am using the 'PCnet-FAST III (NAT)' adapter with VirtualBox. Can you guys help me troubleshoot this issue? What information do you need on my current settings? Thanks so much!

    Read the article

  • Very long DNS lookups inside my network

    - by Nuno Cordeiro
    Ever since I installed DD-WRT (v24-sp2 08/07/10 std-usb-ftp) on my router (RT-N16), my browsing has become substantially slower. Using FirePHP I figured out that it's being caused by VERY long DNS lookups (~30 seconds). When a domain name has been accessed very recently, speed is very good. I tried changing DNS on the computer and I tried messing around with the options in DD-WRT. I have tried to configure the router with Google DNS and/or OpenDNS. My current DNS list after running ipconfig -all is:

        192.168.1.1
        208.67.220.220
        8.8.8.8
        208.67.222.222

    Can someone help me debug and solve this problem? I'd like to snoop the requests themselves. How can I know which DNS requests are being sent and which are failing/succeeding?

    Note: I don't expect this to be relevant, but my router is connected to the internet through an ONT.
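    A hedged way to narrow it down from a Linux box on the LAN (dig ships in the dnsutils package; the domain below is just an example): compare the reported query time when asking the router against asking a public resolver directly, and watch the DNS traffic itself.

        # Lookup latency through the router vs. straight to a public resolver
        dig example.com @192.168.1.1 | grep "Query time"
        dig example.com @8.8.8.8 | grep "Query time"
        # Watch outgoing DNS queries and spot the ones that never get an answer
        sudo tcpdump -n -i any port 53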

    Read the article

  • SEO Keyword Research Help

    - by James
    Hi Everyone, I'm new at SEO and keyword research. I am using Market Samurai as my research tool, and I was wondering if I could ask for your help to identify the best keyword to target for my niche. I do plan on incorporating all of them into my site, but I wanted to start with one. If you could give me your input on these keywords, I would appreciate it. This is all new to me :) I'm too new to post pictures, but here are my keywords:

        #  | Searches | SEO Traffic | PBR | SEO Value / Day | Average PR / Backlinks of Current Top 10
        1  | 730      | 307         | 20% | 2311.33         | 1.9 / 7k-60k
        2  | 325      | 137         | 24% | 822.94          | 2.3 / 7k-60k
        3  | 398      | 167         | 82% | 589.79          | 1.6 / 7k-60k

    I'm wondering if the PBR (phrase-to-broad) value of #1 is too low. It seems like the best value because the SEOV is crazy high - that is like $70k a month. #3 has the highest PBR, but also the lowest SEOV. #2 doesn't seem worth it because of the PR competition; it might be a little too hard to get onto the top page of Google. I'm wondering which keywords to target, and if I should be looking at any other metric to see if this is a profitable niche to jump into. Thanks.

    Read the article

  • How to configure custom error page in Plesk 9.3 for non existing folder?

    - by Junior Mayhé
    I'm trying to configure Plesk to show website visitors a custom error HTML page. The currently hosted site is an ASP.NET site, and it shows its custom errors via error403.aspx and error404.aspx. Now, to comply with Plesk, I've created error_docs with the required files, like forbidden.html, etc.

    When a user tries to navigate to http://mysite.com/a_missing_page.aspx, the visitor is redirected to error404.aspx correctly. But when a user tries to navigate to a non-existent directory, http://mysite.com/a_missing_folder/, the site shows the regular IIS 404 page instead.

    Plesk has "Custom error documents" activated in the web hosting settings. The ASP.NET error pages defined in web.config are showing fine, but it seems Plesk won't show its custom HTML error documents. The bottom line here is about setting up a custom error page for a directory. Is it possible to do this using Plesk, or do I have to change it manually in IIS?

    Read the article

  • Accidentally replaced the partition table using GParted in Ubuntu

    - by claws
    Hello, this machine has Ubuntu & Windows XP. I'm currently logged into Ubuntu. I was just checking the features of GParted and accidentally clicked Device > Create Partition Table. A default MS-DOS partition table was created. Now if I restart GParted there is nothing: it shows the entire disk as unallocated space.

    The lucky thing is that all the drives (C:, D:, E:) are currently mounted and I'm in Ubuntu. I guess it's possible to re-create the partition table from the current state, but I don't know how. Can anyone kindly tell me how to do this? This is a lab computer; if it's not recoverable, I'm completely screwed!
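    One commonly used recovery path, offered as a sketch only and on the assumption that nothing has been written to the disk since the mistake: TestDisk scans the drive for the old partition boundaries and can write a rebuilt table. Avoid writing anything else to that disk until the table is restored. The device name below is an assumption.

        # Install TestDisk and scan the affected disk
        sudo apt-get install testdisk
        sudo testdisk /dev/sda
        # In the interactive menus: choose the disk, the partition table type (Intel),
        # then Analyse -> Quick Search, verify the partitions it finds, and Write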

    Read the article

  • I can't upgrade to 12.10 Quantal Quetzal

    - by JamesTheAwesomeDude
    After hearing about some of the features in 12.10, I decided to upgrade my Ubuntu from 12.04 LTS to the newer 12.10. However, when I try to upgrade, I start the Update Manager like always and start the distro upgrade just as I do every time there's a new version, but at the end of the "Setting new software channels" stage it fails, saying:

        Could not calculate the upgrade

        An unresolvable problem occurred while calculating the upgrade:
        E: Unable to correct problems, you have held broken packages.

        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu

        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    What should I do? I understand that I have some sort of "broken" package on my system, but how do I identify and remove the problem?

    Note: at the start of the "Setting new software channels" stage, I was presented with this:

        Third party sources disabled

        Some third party entries in your sources.list were disabled. You can re-enable them
        after the upgrade with the 'software-properties' tool or your package manager.
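    A hedged starting point for tracking down the offender, using standard apt/dpkg commands and nothing specific to this machine: list holds and broken packages, let apt try to repair, and review third-party sources (ppa-purge, if installed, can roll a PPA back to stock packages).

        # Packages explicitly on hold, and anything not cleanly installed
        dpkg --get-selections | grep hold
        dpkg -l | grep -v '^ii'
        # Refresh indexes and let apt attempt to fix broken dependencies
        sudo apt-get update
        sudo apt-get -f install
        # Review third-party repositories that commonly block release upgrades
        grep -rh '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/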

    Read the article
