Search Results

Search found 9451 results on 379 pages for 'big johnson'.

  • SEO For Pagerank and Backlinks

    We know that Google is the dominant search engine worldwide, which makes it very important to webmasters. We must prepare our minds and learn SEO tactics to rank our websites.

  • SEO Certification, Your Business, and Results

    The big question about SEO certification is whether it does anything for your business. Small businesses and large businesses alike realize the importance of finding someone capable of either completing search engine optimization projects or offering SEO training to their staff.

  • How to Get a Top Google Rank - 3 SEO Secrets

    If you're one of the many marketers who are interested in getting a top Google rank, but don't know how the "big guns" are able to pull off consistent top ranks, then you should look at these 3 SEO secrets. Getting to the top of Google is actually pretty easy if you use the right methods and techniques - here's what you need to do.

  • The Reality of Internet Marketing and SEO - A Successful Formula?

    Tired of the hassle of hiring an SEO expert and spending thousands of dollars just to push your website up the search engine rankings? All of us who are in online business, or any kind of business that involves a website, feel a little frustrated when big money is involved and the results are close to nowhere.

  • Oracle VM repository creation seems contradictory to its server pool?

    - by Michael
    I found something contradictory in Oracle VM. Creating a clustered server pool in Oracle VM formats my FC LUN as ocfs2 and starts the o2cb and ocfs2 services to build the cluster environment. After that, when I wanted to create a repository on the server pool, it unexpectedly told me that the physical disk I chose, which is that same FC LUN, already contains a file system. How contradictory! So what now, delete the file system in the server pool? If so, why was it created in the first place?

        OVM> list physicaldisk
        Command: list physicaldisk
        Status: Success
        Time: 2012-09-10 06:44:42.660
        Data: id:0004fb00001800007765e62381895f61  name:OVM_HDS

        OVM> create serverpool clusterenable=true virtualip=10.84.21.123 physicaldisk=OVM_HDS name=ovmserverpool

    Server pool creation took quite a long time since my FC LUN is big. When it completed, my FC LUN had been formatted as ocfs2 and the o2cb and ocfs2 services had started on my OVM servers successfully. But then repository creation threw me a big surprise:

        OVM> create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
        Command: create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
        Status: Failure
        Time: 2012-09-10 06:23:44.656
        Error Msg: com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002026E
        Cannot use or delete physical disk: OVM_HDS, it already contains a file
        system: [Pool filesystem for ovmserverpool]  Mon Sep 10 06:23:44 CST 2012

    What should I do now? Delete the file system with dd? That would destroy the server pool, right? I'm really confused. My OVM Manager version is 3.1.1.399, which is the latest. Any tips are appreciated. Thanks.
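
    For what it's worth, the error message suggests this is less a contradiction than a capacity rule: the clustered pool file system claims the whole LUN, and a storage repository cannot share a physical disk with it. A sketch of the expected flow, assuming a second LUN is presented to the servers and shows up as, say, OVM_HDS2 (hypothetical name):

        OVM> list physicaldisk
        OVM> create serverpool clusterenable=true virtualip=10.84.21.123 physicaldisk=OVM_HDS name=ovmserverpool
        OVM> create repository serverpool=ovmserverpool physicaldisk=OVM_HDS2 name=ovmrepo

    The pool keeps OVM_HDS for its own file system; the repository gets a disk of its own.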

  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling quite a big project from source. The compilation always ends with:

        error: invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]

    I already compiled this project one year ago, so I know a solution to this. Actually I found several solutions:

    1. Adding a typecast to the appropriate line of C++ code (this led to an endless number of changes in each file, so I found the next solution).
    2. Modifying the makefile to compile the offending file with the -fpermissive option (I had to modify a lot of lines in each makefile, so I found an even better solution).
    3. "g++" or "gcc" was stored in a variable, so I added -fpermissive to these variables. This is the best solution I have: it is sufficient to add the option to each makefile once.

    Unfortunately this software has a big number of subdirectories, so I need to modify more than 100 makefiles. It took me a whole day one year ago. Is there a faster way to do this? What about this: alias gcc='gcc -fpermissive'? I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is this one correct: alias g++='g++ -fpermissive'? Do I need to export the alias somehow? Will the make program respect the alias? Should I maybe change the ./configure script instead? Or configure.in? Or some other file?
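
    A hedged note on the alias idea: make runs commands through a non-interactive shell, which does not expand aliases, so the alias will most likely be ignored. With an autoconf-style build the flag can usually be injected once instead of editing any makefile. A minimal sketch, assuming a stock ./configure script that honors the standard variables:

        # recorded into every generated makefile at configure time
        ./configure CXXFLAGS="-g -O2 -fpermissive" CFLAGS="-g -O2 -fpermissive"

        # or override for a single build without touching any file
        make CXXFLAGS="-g -O2 -fpermissive"

    Note that variables given on the make command line replace, rather than append to, the values in the makefiles, so any flags the project needs must be repeated there.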

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4gb. The first place I went was http://pendrivelinux.com. I can follow its instructions for installing Xubuntu 9.04 perfectly, but they unfortunately break down when I try to scale up beyond 4gb. There are several other tools that do this (unetbootin and usb-creator), and they follow a very similar formula. I figured out that a big problem of mine is that all of these tools assume the USB drive is formatted as FAT32, which unfortunately cannot hold a single file larger than 4gb. This matters because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts at verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems that no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried the sync and umount commands before rebooting, which didn't affect things at all. I guess I have several possible questions, but let's start with the obvious: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16gb)?
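
    For comparison, here is one ordering that is commonly recommended, strictly as a sketch: it assumes the stick is /dev/sdb (verify with dmesg; a wrong letter destroys a disk), a single partition /dev/sdb1 already marked bootable, and a syslinux package that ships mbr.bin (the path varies by distro):

        sudo mkfs.ext2 /dev/sdb1                        # one big ext2 partition
        sudo mount /dev/sdb1 /mnt
        # ... copy the ISO contents and casper-rw here, then:
        sudo extlinux --install /mnt                    # bootloader last, needs an extlinux.conf in place
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdb bs=440 count=1
        sync; sudo umount /mnt

    The bs=440 count=1 matters: it writes only the MBR boot code and leaves the partition table bytes untouched, and the MBR goes to the whole device (/dev/sdb), not the partition.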

  • Change DPI setting in Windows 8.1 for the Logon Screen

    - by jmc302005
    How can the DPI setting be changed for the logon screen in Windows 8.1? Microsoft has added per-user DPI settings, but this means there is no adjustable DPI setting for the lock/logon screen. You can set the same DPI across all displays, and this does affect the icons and fonts on the lock/logon screen; however, it does not affect any app or program that runs there. For example, I use a 44" flat screen TV as the monitor on my desktop, big enough for me to sit in my recliner and use my computer. I use the on-screen keyboard most of the time (I don't want to keep a keyboard next to me). The problem is that with the new DPI setup the on-screen keyboard takes up nearly half the screen, which is too big. I tried looking through the registry for a setting. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named LogicalDPIOverride with a value of -1. I have a feeling this is where I can fix the issue, but changing the value to 0 or to 1 made no difference; instead I noticed that after logging out and back in, the -1 value was back in the registry. How can I change this default DPI? Can I use the LogPixels string that worked for DPI in Windows 7? Two screenshots, one of the lock screen and one of the logon screen, accompanied this question.
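
    A minimal experiment worth trying, offered only as a sketch: the Windows 7-era LogPixels value can be created under the same default-profile key, and it may still be honored at the logon screen in 8.1 (unverified; 96 = 100%, 120 = 125%). From an elevated prompt:

        reg add "HKU\.DEFAULT\Control Panel\Desktop" /v LogPixels /t REG_DWORD /d 96 /f

    If the value is rewritten at logoff the same way LogicalDPIOverride is, this will not stick either, which would at least confirm the per-user DPI machinery owns that key now.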

  • How to configure amavisd-new for only scanning on particular senders/servers?

    - by mailq
    I'd like to know how to configure amavisd-new to only scan for spam from particular clients (IPs, CIDRs, or hostnames) or, alternatively, by the sender's email domain. I know it is possible to do this based on the recipient's mail address, but not how to do it for the sender's mail address. It is even possible to do it based on the client's IP address with policy banks. But my approach is to be independent of the recipient and rely only on the sender. What I want to accomplish is to scan only mail originating from Yahoo, Google, Hotmail, and the other big senders; it is easier to configure the senders that should be observed than all the ones that shouldn't. I know this is easier to achieve on the MTA side, but that is not part of the question, because I already have a solution on the MTA side. I want to do it in amavisd-new. And it doesn't help to know how to put senders on a whitelist, as that still means the mail goes through all the scanning and then gets a high negative score; the mail shouldn't be scanned at all unless it was sent by the big players. So which parameters in amavisd-new are the right ones to enable scanning only for particular senders?
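
    One hedged sketch, given that amavisd-new selects policy banks by the TCP port the MTA submits on rather than by sender: keep the sender matching on the MTA side you already have, route mail from the big senders to a dedicated amavisd port, and bypass spam checks everywhere else. The port number 10026 and bank name SCANBIG below are made up:

        # in amavisd.conf (Perl syntax): the default path bypasses spam scanning
        @bypass_spam_checks_maps = (1);

        # mail the MTA submits to port 10026 lands in this bank, scanning enabled
        $interface_policy{'10026'} = 'SCANBIG';
        $policy_bank{'SCANBIG'} = {
            bypass_spam_checks_maps => [0],
        };

    The extra port also has to be listed in $inet_socket_port so amavisd listens on it. Whether this counts as doing it "in amavisd-new" is debatable, since the MTA still performs the sender matching.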

  • Stream video file in Debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. With VLC:

        +-[ robert@s10 ]--[ ~ ]
        +[#!]¬ vlc --version
        VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
        VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
        Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
        Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
        This program comes with NO WARRANTY, to the extent permitted by law.
        You may redistribute it under the terms of the GNU General Public License;
        see the file named COPYING for details.
        Written by the VideoLAN team; see the AUTHORS file.

    I have tried everything I could in the streaming section, but I can't get the stream to actually work. Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar?

        Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    Basic home network; I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much, I can figure that out, but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). Not worried about installing new packages or building something from source, that's not a big issue; I just want to get it working. Big plus if I can do it from the command line.
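
    For reference, a command-line HTTP stream that works when the encoders are present; a sketch assuming a file named video.mp4 (hypothetical) and port 8080 reachable on the LAN:

        cvlc video.mp4 --sout '#transcode{vcodec=h264,acodec=mpga,ab=128}:std{access=http,mux=ts,dst=:8080/}'
        # friends then open http://<your-ip>:8080/ in their own player

    If this fails with encoder errors on Debian's build, the usual workaround is a vlc package from the deb-multimedia repository or a local build with x264 enabled.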

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)'-data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization; our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet next to spending time reworking your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with its growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know, after more optimization, on Ubuntu/MySQL with the added Amazon horsepower? Or is there some merit to NoSQL and the other technologies they offer?
    2. What are the key metrics I need to gather from Apache and MySQL, beyond the likes of disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and end up finding a more efficient way of producing your data/site product?
    4. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone whose company is very niche and very, very technical (scientifically, or anybody for that matter)?

    Thanks very much for your input; I think this thread could be valuable to others as well.
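
    On the metrics question, a minimal hedged starting point before sizing any instance is MySQL's slow query log (MySQL 5.1+ variable names; the thresholds are arbitrary examples):

        # my.cnf
        slow_query_log                = 1
        slow_query_log_file           = /var/log/mysql/slow.log
        long_query_time               = 2
        log_queries_not_using_indexes = 1

    Paired with iostat and Apache's mod_status over a representative week, this gives the baseline the cartesian-product queries would be judged against after any migration.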

  • Performance of Virtual machines on very low end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful ones, and I don't have the money lying around to invest in a server to prepare for a larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes, most likely with Xen (I considered VMware vSphere Hypervisor, AKA ESXi, but its HCL is too strict and my hardware is too old). Big reasons for doing so:

    - Ability to upgrade and scale hardware rapidly: this is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, and so on. Doing this manually by reinstalling the entire OS would be a big pain.
    - Safety from me: I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    - Safety from other factors: as I scale, servers might go down, and a backup VM can be started instantly.
    - Various other reasons.

    However, the limiting factor here is hardware, and I mean very depressing hardware: the current servers run off a Pentium 3 and a Pentium 4, with 512 MB and 768 MB of RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't)? Does it leave enough RAM for the Linux OS? Is this even feasible?

  • Need help tuning Mysql and linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have accumulated five years of data history:

    - users/customers (200+)
    - contacts (40 million records)
    - campaigns
    - campaign_deliveries (73,843,764 records)
    - campaign_queue (8 million currently)

    As we get more users and the tables grow, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts:

        +--------------+------------------+------+-----+---------+----------------+
        | Field        | Type             | Null | Key | Default | Extra          |
        +--------------+------------------+------+-----+---------+----------------+
        | contact_id   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id    | int(10) unsigned | YES  |     | NULL    |                |
        | name         | varchar(60)      | YES  |     | NULL    |                |
        | mail         | varchar(60)      | YES  | MUL | NULL    |                |
        | verified     | int(1)           | YES  |     | 0       |                |
        | owner        | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created | date             | YES  | MUL | NULL    |                |
        | geolocation  | varchar(100)     | YES  |     | NULL    |                |
        | ip           | varchar(20)      | YES  | MUL | NULL    |                |
        +--------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Slow query log:

        # Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

        # Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize this? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and give each user a database of their own? Thanks
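
    A hedged first step before any server tuning: both slow queries filter contacts on owner (one also on verified), and the Rows_examined counts show every row for that owner being scanned, so a composite index should let those COUNT(*) queries be answered from the index alone. A sketch only; building an index on a 40-million-row table takes real time and disk, so test on a replica first:

        ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);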

  • Why have trackballs almost disappeared? [closed]

    - by Gary M. Mugford
    One of the movement sensors in my Microsoft Trackball Explorer has failed, and right now I am using a mouse. Ugggh! I'll go steal one of the various Logitech trackballs spread around the house, but they all have issues. The Trackman has a horrible placement for the scroll wheel. Others have marbles for the thumb rather than a big ball for the fingers, and at least one trackball around here is soldiering on without a scroll wheel at all (the one at the dinner table, for when I dine alone). My question is: why have trackballs fallen into disfavour? It seems to me that trackballs are great for crowded desktops (you know, the ones with keyboards, notes, pens, and coffee cups) and for laptops with those hated, overly sensitive touchpads. But right now it seems to be a choice between that Logitech Trackman and some Kensington models that lack scroll wheels. All I want is a nice big ball to manipulate with the fingers and two buttons on the thumb side with the scroll wheel between them. Placement of other buttons is completely optional. Is that asking too much?

  • How to compress .pdfs in Word 2007?

    - by chobo2
    Hi, I am trying to send my cover letter and resume off, but apparently the mail is too big to send through craigslist; my computer says the total size is 500KB, and craigslist has a 600KB limit (so small; it should be at least a meg). This is the bounce message:

        Hi there. You recently tried to email Some job Email, an anonymous
        craigslist address. However, your message was too big to be sent through
        our system. Craigslist has a 600KB limit on the messages we'll send.
        Please reduce the size of your mail and try again. Thanks for using
        craigslist.

    The problem is that when I convert my Word 2007 (.docx) files to PDF they become huge, going from about 32KB to 320KB each. So is there a way I can either get around craigslist's limit or compress my PDFs a bit to keep it happy? I don't want to send zips and such, since the person who gets them might not know what to do with them. I'd rather not send .docx either, since I'm not sure the recipient will have Office 2007 or the compatibility pack installed, and I'd rather just send PDFs (some places require them anyway). Thanks
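
    Two hedged options. Inside Word 2007's Save As PDF dialog there is an "Optimize for: Minimum size (publishing online)" choice, which usually shrinks the output considerably. Failing that, Ghostscript can recompress an existing PDF; a sketch assuming gs is installed and resume.pdf is the exported file (hypothetical name):

        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
           -dNOPAUSE -dBATCH -dQUIET -sOutputFile=resume-small.pdf resume.pdf

    The /ebook preset downsamples embedded images to roughly 150 dpi, which is where most of the bloat in a Word export usually lives; /screen is smaller still if the pages tolerate it.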

  • Disk IO slow on ESXi, even slower on a VM (freeNAS + iSCSI)

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS 8.0.4). The two machines are connected to each other with gigabit ethernet. The RAID-Z volume is divided into three parts: two zvols, shared over iSCSI, and one directly on top of ZFS, shared with NFS and similar. I ssh'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part (straight on top of ZFS): I copied a 4GB block (2x the amount of RAM) from /dev/zero to the disk, and the speed was 80MB/s. One of the iSCSI-shared zvols is a datastore for the ESXi host. I did a similar test with time dd there; since dd there did not report a speed, I divided the amount of data transferred by the time shown by time. The result was around 30-40MB/s, about half the speed measured on the FreeNAS host! Then I tested the IO on a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine that was not really doing anything else at the time; no other VMs were running on the server, and the other two "parts" of the disk array were not in use. A similar dd test gave a result of about 15-20MB/s. That is again about half of the result one level lower! Of course there is some overhead in raid-z - zfs - zvolume - iSCSI - VMFS - VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI; is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaSTOR, openfiler). Can you see any obvious problems with my setup?
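
    One hedged check before blaming the iSCSI stack: /dev/zero data is almost infinitely compressible, so if compression is enabled on the ZFS dataset or zvol, the 80MB/s local figure measures something quite different from what the VM sees; and a plain dd can also end up timing the write cache rather than the disk unless the data is forced out before dd reports. A sketch of a more comparable test on the CentOS VM (GNU dd; FreeBSD's dd lacks conv=fdatasync, so run sync separately there):

        # 4GB sequential write, fsync'd before dd reports, so the cache can't flatter the number
        dd if=/dev/zero of=/root/test.bin bs=1M count=4096 conv=fdatasync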

  • Kernel hacking methodology - how to find out where to hack the linux kernel

    - by Flavius
    I have a throwaway cheap laptop I'd like to twiddle around with, a Thinkpad SL 500. What bothers me are two LEDs, the one for wireless connectivity and the one for hibernation, which don't light up at all even though they're functional; I've tried them on Windows. So I would like to write a kernel driver for them. Nothing big; it just looks like a good way to play around with the kernel. My question is: what methodology should I follow, systematically, to find out which devices are responsible for those LEDs (in general, not necessarily specific to my hardware), and which drivers are responsible for the two LEDs that do work, bluetooth and the battery indicator? And when I say methodology, I really mean the methodology, step by step, with reasons for each step, like in the answer I gave someone else over here: What does && mean in void *p = &&abc; I am proficient at fgrepping through big code repositories and using static code analysers & co, but I think my lack of hardware knowledge hinders me on this problem. PS: I'm using ArchLinux, so almost the latest kernel version.
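
    A sketch of a first reconnaissance pass; each step narrows down which driver, if any, already owns the LEDs (thinkpad_acpi is the module that usually exposes ThinkPad LEDs):

        ls /sys/class/leds/                         # LEDs a driver has registered with the LED subsystem
        cat /sys/class/leds/*/trigger 2>/dev/null   # which kernel events each one follows
        lsmod | grep -i -e thinkpad -e led          # is thinkpad_acpi (or another LED driver) loaded?
        dmesg | grep -i -e thinkpad -e led          # firmware/ACPI clues from boot

    LEDs that show up under /sys/class/leds but never blink usually just need a trigger written to them; LEDs missing entirely point toward the embedded controller, with the thinkpad_acpi source as the natural place to start reading.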

  • Outlook 2007 font sizes

    - by Flack
    Hello, something really strange seems to have happened to my Outlook 2007. Everything was working fine for a long time, but at the end of today, all of a sudden, all of the fonts in Outlook are messed up. The font size of mails I write is huge (I am not zoomed in), and the font sizes of the buttons are big too, specifically the "Send", "To", and "Cc" buttons. I tried changing the font sizes through Outlook, but some of the buttons on the "Mail Format" tab in Options are not working, mainly the "Stationery and Fonts" button; I hit it, but no window opens. This is all happening on my x64 machine. I took a look at my 32-bit machine, which also has Outlook 2007 installed, and everything is fine there. Below was a link to an image comparing the broken, large-font Outlook (top of picture) and the normal, working Outlook ("Big font Outlook buttons"). The text in the mails I compose is also abnormally large in the broken Outlook. Any ideas? This came out of nowhere after a few months of no problems. Thanks.
