Search Results

Search found 3392 results on 136 pages for 'average joe'.

  • Stop Windows Media Player from connecting to Internet/MS using hosts file or alternate method?

    - by Joe
    Is there a way to prevent Windows Media Player from connecting to the internet and MS using the hosts file or other methods? Edit: (Nov 20 2009 at 19:16) I have both VLC and MPC and I do use them. However, I am currently using WMP to organize all my music, and I hate that WMP is always making outgoing connections. I just tried TCPView and can't believe how many connections WMP makes when you first launch it. I have even disabled everything in its options that relates to connecting to the internet. Could any of you recommend a good media player that's also good for organizing your music library like WMP, and doesn't connect to the internet? Preferably one that a WMP user would actually like as much as WMP. The reason I use WMP is because I like its interface, the way it's set up and how it looks.
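
    One blunt approach is to blackhole the hostnames WMP phones home to in C:\Windows\System32\drivers\etc\hosts. Treat the entries below as a sketch only: the metadata hosts vary by WMP version, so the names shown are assumptions to check against your own TCPView output rather than a definitive list.

        # candidate hosts-file entries (verify each hostname with TCPView first)
        0.0.0.0 metaservices.microsoft.com
        0.0.0.0 fai.music.metaservices.microsoft.com
        0.0.0.0 windowsmedia.com

    Note that hosts-file blocking only covers lookups by name; anything WMP reaches by raw IP address would need a firewall rule instead.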

  • Access device with local ip over internet

    - by Joe Perrin
    I apologize up front if this is the wrong place to post this question. It seemed like the best fit. I have a device connected to my local network, which has been given the IP 192.168.1.10 by my router. Additionally, I use a Windows 7 machine that runs some software called DirectUpdate, which allows the local IP of the Windows 7 machine (192.168.1.5) to be reachable from the internet via my domain (example.com) - basic dynamic DNS updating. I'd like to access the device from example.com. I am unsure how to do this, as I don't have any way to install DirectUpdate (or any software) on the device to make it available to the internet. Any insight here would be appreciated. Thank you.
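
    Since example.com already resolves to your public IP, nothing needs to run on the device itself - you only need something to forward traffic to 192.168.1.10. A port-forwarding rule on the router is the usual route; failing that, the Windows 7 machine can relay connections with netsh. This is a sketch, and the port numbers are illustrative assumptions:

        rem run on the Windows 7 machine (192.168.1.5) in an elevated prompt:
        rem relay connections arriving on port 8080 to the device's port 80
        netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8080 connectaddress=192.168.1.10 connectport=80

    With either approach, example.com:8080 would then reach the device from outside (assuming its admin interface is HTTP on port 80).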

  • Heavy write to Galera cluster - table locked, cluster practically unusable

    - by Joe
    I set up Galera Cluster on 3 nodes. It works perfectly for reading data. I have written a simple application to run some tests on the cluster. Unfortunately, I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently, or am I doing something wrong? I have a simple stored procedure:

        CREATE PROCEDURE testproc(IN p_idWorker INTEGER)
        BEGIN
          DECLARE t_id INT DEFAULT -1;
          DECLARE t_counter INT;
          UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL LIMIT 1;
          SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id;
          SELECT ABS(MAX(counter)/MIN(counter)) FROM TEST INTO t_counter;
          SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter;
          IF t_id >= 0 THEN
            UPDATE test SET counter = counter + 1 WHERE id = t_id;
            UPDATE test SET idWorker = NULL WHERE id = t_id;
            SELECT t_counter AS res;
          ELSE
            SELECT 'end' AS res;
          END IF;
        END $$

    Now my simple C# application creates, for example, 3 MySQL clients in separate threads, and each one executes the procedure every 100ms until there is no record where column 'counter' = 0. Unfortunately, after about 10 seconds something goes wrong. On the servers there is a 'query_end' process that never ends. After that, you cannot run an update on the test table; MySQL returns: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. You can't even restart mysql. All you can do is restart the server, sometimes the whole cluster. Is Galera Cluster so unreliable when you do massive concurrent writing/updates? Hard to believe.
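
    For what it's worth, Galera certifies transactions optimistically, so several nodes updating the same hot rows (as this worker procedure does) is close to its worst case. A hedged starting point, assuming the stalls stem from certification conflicts rather than a genuine bug, is to retry conflicted autocommit statements instead of failing them; the values below are illustrative, not tuned:

        # /etc/my.cnf on each node (illustrative values)
        [mysqld]
        wsrep_retry_autocommit = 4    # retry certification-conflicted autocommits
        wsrep_slave_threads    = 4    # parallel appliers can help under write load

    Directing all writes at a single node (and reading from the others) is another common way to sidestep the conflicts entirely.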

  • remove start.funmoods search from chrome

    - by Joe King
    I post this with much trepidation after my baptism by fire recently, and knowing that this question has been asked and answered already. My problem is that I cannot seem to remove start.funmoods as the default search engine when I type into the omnibox in Chrome - I have followed the instructions in the answer to the previous question on this topic. In particular: I deleted funmoods using the Control Panel (Add/Remove Programs); under wrench > Tools > Extensions, funmoods is not mentioned; under wrench > Settings > Manage search engines, there is nothing listed at all. Restarting Chrome and rebooting have not helped.
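
    One more place worth checking, with Chrome fully closed, is the profile's Preferences file, since hijackers of that era often wrote themselves straight into it. The key layout below reflects how older Chrome builds stored the default engine; treat both the key name and the URL as illustrative assumptions to verify in your own file (back it up first):

        %LOCALAPPDATA%\Google\Chrome\User Data\Default\Preferences

        "default_search_provider": {
            "enabled": true,
            "search_url": "http://start.funmoods.com/..."
        }

    If a funmoods URL appears there, deleting that block (or pointing it back at your preferred engine) and restarting Chrome may clear the hijack.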

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor nfs performance between two Xen Virtual Machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration:

        RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/sec
        RPCNFSDCOUNT=16: 18 seconds to cat the file, 60MB/s
        RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125MB/s
        RPCNFSDCOUNT=2: 87 seconds to cat the file, 12MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI-passthrough; on the server I can cat the file in a few seconds (~250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect: increasing /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K; increasing /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K; mounting with client options rsize=32768,wsize=32768. From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid then picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
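
    For reproducibility, here is a hedged sketch of the knob and the measurement loop as they would look on Ubuntu 9.04; the config path is the stock nfs-kernel-server one, and the mount point is an assumption:

        # on the server: set the nfsd thread count, then restart nfsd
        sudo sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=1/' /etc/default/nfs-kernel-server
        sudo /etc/init.d/nfs-kernel-server restart

        # on the client: drop caches, then time a cold sequential read
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
        time cat /mnt/nfs/bigfile > /dev/null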

  • Using Team Foundation Server

    - by joe
    I am using Team Foundation Server / Visual Studio as a client to manage my Visual FoxPro source code. Every time I "check out for edit", all my folders and files are read-only, and when I open my projects it is as if the project were being opened for the first time, asking me if I would like to set the current path as the default path... This is causing other serious problems, such as bad references to "missing library files" which are actually not missing. I know Visual FoxPro is old stuff... I hope someone who went through the same errors could help me out. Thanks a lot guys
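
    As a stopgap, the read-only flags can be cleared in bulk after a checkout; this is a sketch, and the workspace path is a placeholder:

        cd /d C:\dev\myproject
        attrib -R /S *.*

    For background: TFS server workspaces deliberately mark files read-only until they are checked out, so if whole folders stay read-only after "check out for edit", it may be that only part of the tree was actually checked out. Clearing the attribute manually at least lets FoxPro resolve its library references while the underlying cause is investigated.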

  • How do I set up Grub properly to quad-boot Windows, Mac OS X, Linux, and FreeBSD?

    - by Joe
    Grub has gone completely insane on me. My quad-boot system was working great up until I upgraded Ubuntu to 12.04. Since Ubuntu overwrote the Grub stuff, I had to repair it with my Mac OS X and FreeBSD entries. After this, trying to boot Mac OS X gave me the error "couldn't open file" and FreeBSD gave the error "no such partition". Windows and Ubuntu worked fine. So I tried repairing again because I figured something must've gone wrong in the install process. Then only Ubuntu would boot. Trying to boot Windows would give me the error "no argument specified". I tried repairing Grub once again, since I seemed to be getting different results each time. This time, Ubuntu no longer appeared in the Grub menu, and the errors for the other OSes were the same. So I booted into the Ubuntu 12.04 live CD and ran Boot-Repair with recommended settings. Now Grub is completely skipped and Windows boots up. I have absolutely no idea what is going on or why I get different results every time I reinstall Grub. Here is how my partitions are set up:

        sda1 - Storage drive
        sdb1 - Windows
        sdb2 - Mac OS X
        sdb3 - FreeBSD
        sdb4 - Extended
        sdb5 - Ubuntu
        sdb6 - Shared storage
        sdb7 - Shared storage

    Here's my grub.cfg file: grub.cfg
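
    For reference, hand-written entries normally go in /etc/grub.d/40_custom so that update-grub stops clobbering them on each run. Below is a hedged sketch for the two chainloadable systems; it assumes Grub sees the second disk as hd1 (worth confirming with Grub's ls command at the grub prompt):

        # /etc/grub.d/40_custom
        menuentry "Windows" {
            insmod part_msdos
            set root=(hd1,msdos1)    # sdb1
            chainloader +1
        }
        menuentry "FreeBSD" {
            insmod part_msdos
            set root=(hd1,msdos3)    # sdb3; assumes the FreeBSD boot loader lives in its partition
            chainloader +1
        }

    After editing, run sudo update-grub to regenerate grub.cfg. The Mac OS X entries are the fragile ones: os-prober generates xnu-based entries whose kernel paths often need adjusting by hand, which would be consistent with the "couldn't open file" error.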

  • LAN Webserver not accessible through PPTP VPN

    - by Joe
    I have a LAN with 10 clients and one server. The server has 4 virtual machines and a BIND DNS server. When the router assigns an IP through DHCP, it also gives the IP of the DNS server, to resolve internal domains. Everything apparently works fine, the clients being able to access the server's VMs' resources, but I also have to provide for remote access. I installed the PPTP VPN on the server, and the VPN clients get addresses from the same IP range as the router's DHCP assigns. Apparently everything is fine here also, except that when we connect through the VPN, we cannot access the webserver on port 80 (the webserver being one of the server's VMs). iptables on the webserver has been turned off for testing purposes, and the router's firewall is directing all external traffic to the server. Can somebody suggest a solution to this? Extra details: VPN server: PPTP server, CentOS 6.3 x64. VPN client: Windows 7 default PPTP VPN connection. The client connects to the server successfully and everything works (FTP/MySQL/SSH/DNS), except that when I try to access the webserver IP in the browser, it won't work. Pinging it works perfectly.
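
    When ping and chatty protocols (SSH, DNS, MySQL) work over PPTP but HTTP stalls, a classic culprit is MTU: full-size packets from the webserver get dropped inside the tunnel while small ones pass. This is offered as a hedged guess with illustrative values, not a confirmed diagnosis:

        # /etc/ppp/options.pptpd - cap the tunnel packet size
        mtu 1400
        mru 1400

        # or clamp TCP MSS to the path MTU on the CentOS server's firewall
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

    Running ping -f -l 1400 <webserver-ip> from the Windows client (and stepping the size down) can confirm where fragmentation starts.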

  • Looking for a user friendly file upload service to run on my own server

    - by Joe Mako
    My goal is to allow people to upload and download large files through their web browser with a simple user interface and user password/account management. Yousendit offers a service like what I am looking for, but it does not make sense to use their service if I already have a web server. Is there an open source version of Yousendit that I can run on my own web server? What is this type of software or service called? (calling it an FTP Server does not seem to fit)

  • Ranking tables from Excel data

    - by Joe
    Hi all (asking here because this meta question told me to). I have some data in an excel spreadsheet here. It's no more than a table with about five columns:

        Year Purchased  Manufacturer            Model         Num  Unit Price  Total Price
        2007            SMARTBOX                FuturePad XP  1    £2,915.00   £2,915.00
        2007            Attainment Company Inc  Go Talk 9+    1    £104.00     £104.00
        2007            Attainment Company Inc  Go Talk 20+   1    £114.00     £114.00

    I'd like to be able to build a 'top ten' of either manufacturers or models (and I'd like to be able to do it by either most mentioned, most sales, or highest value of sales) - but I've got no idea what the best method is in excel. Any suggestions...? The ideal output might be a set of cells that says something like:

        Company  Units
        A        5342
        B        232
        C        2
        D        1
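
    A PivotTable (Insert > PivotTable, with Manufacturer as the row field and count/sum as values) is the quickest route, but plain formulas work too. A sketch, assuming the table occupies columns A-F with Manufacturer in B, Num in D and Total Price in F:

        =COUNTIF(B:B, "SMARTBOX")         times mentioned
        =SUMIF(B:B, "SMARTBOX", D:D)      total units sold
        =SUMIF(B:B, "SMARTBOX", F:F)      total value of sales

    Build one row per manufacturer with those three formulas beside it, then sort descending on whichever measure you care about and keep the top ten.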

  • Email Mail Merge via linked Excel sheet

    - by Joe Perrin
    I have a MS Word 2007 document set up as a Mail Merge doc. I am using Excel as the data source. The MERGEFIELD ClientData contains an Excel file name (test.xlsx). I want to merge the data from the Excel file listed in ClientData into the respective Mail Merge document. However, whenever I start the Mail Merge, the {MERGEFIELD ClientData} field gets resolved only once and does not select the next row from ClientData. So this:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\{MERGEFIELD ClientData}" \a \f 4 \h}

    becomes this after starting the merge:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test.xlsx" \a \f 4 \h}

    So every Mail Merge doc uses test.xlsx instead of the Excel document specific to the client (i.e. test1.xlsx, test2.xlsx, test3.xlsx, etc.). As the merge runs through each Mail Merge doc I expect to see this:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test1.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test2.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test3.xlsx" \a \f 4 \h}

    But for some reason this isn't happening. Does anyone have any suggestions? Thanks!
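
    One avenue worth trying, offered as a guess rather than a known fix: merge to a new document first, then force every LINK field in the output to re-evaluate, since a LINK result cached during the merge keeps its first value until updated. These are the standard Word field shortcuts:

        Alt+F9             show field codes in the merged output
        Ctrl+A, then F9    select everything and update all fields
        Ctrl+Shift+F9      unlink fields once the results look right

    If the merge has already collapsed the nested MERGEFIELD to test.xlsx in every record, though, the per-client file name would have to be re-injected into each record before the update (for instance by also merging the bare file name into the document).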

  • phpmyadmin error #2002 cannot connect to mysql server

    - by Joe
    So I am getting this error when trying to connect to my MySQL server. I have reinstalled MySQL and PHP several times and tried a slew of command-line fixes from info around the web. MySQL is running, and I know that my mysql.sock exists and is located in ~/private/tmp/ and also in ~/tmp/. I also have plenty of hard drive space. I have installed and set up phpMyAdmin correctly, only adding a password to 'Password for config auth'. AND I have connected to the server via Sequel Pro. So my question is: what the heck is going on that I can't connect to the server via phpMyAdmin? Any guesses? Also, I'm on a 64-bit Intel Mac running Snow Leopard.
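
    Error #2002 means PHP never reached the server at all, which on a Mac is classically a socket-path mismatch: PHP's MySQL driver looks for the socket in one place (often /var/mysql/mysql.sock) while MySQL actually creates /tmp/mysql.sock. Two hedged ways to reconcile them in phpMyAdmin's config.inc.php (the paths are common defaults, worth confirming locally):

        /* force TCP instead of the socket */
        $cfg['Servers'][$i]['host'] = '127.0.0.1';

        /* or point phpMyAdmin at the actual socket */
        $cfg['Servers'][$i]['socket'] = '/tmp/mysql.sock';

    A third option is setting mysql.default_socket = /tmp/mysql.sock in php.ini, which fixes other PHP apps in one go. Sequel Pro working is consistent with this theory, since it does not go through PHP's socket default.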

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing. The problem I am having is that sometimes NTbackup seems to just... blank out. It always completes the backup of the C: and E: drives, but then it will start the system state backup and never log that it completed. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log, which causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports in the event log that it is completing normally. Has anyone ever seen a case where the system state backup doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It just never seems to complete or log anything.
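
    To narrow it down, the system state can be split into its own job so the hang is reproducible outside the wrapper. This is a sketch using documented ntbackup syntax, with the target path as a placeholder:

        ntbackup backup systemstate /J "SystemStateOnly" /F "D:\Backups\systemstate.bkf"

    If that job also stalls intermittently, the usual suspect is a wedged VSS writer rather than the wrapper script; vssadmin list writers run during a hang will show any writer stuck in a failed or in-progress state.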

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority order in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also tried to get someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
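
    As a quick experiment, SSMS lets you pin the protocol per connection by prefixing the server name, which sidesteps the protocol priority list entirely. The instance name below is the usual Express default and may differ locally:

        lpc:.\SQLEXPRESS     force Shared Memory
        tcp:.\SQLEXPRESS     force TCP/IP
        np:.\SQLEXPRESS      force Named Pipes

    If lpc: connects while the bare name fails, the client really is picking Named Pipes first, which points at the client-side protocol order (SQL Server Configuration Manager > SQL Native Client Configuration > Client Protocols) rather than at the server.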

  • Remotely start VNC server on computer with no Admin password

    - by Joe M.
    I'm trying to remotely access a particular computer of mine and it seems that VNC has stopped. I can tell that the computer is still running because I can VNC into another machine on the same network, can see my target machine under the Network section in Windows Explorer, and can also ping it successfully. To summarize:

        - I own the target computer
        - I am currently too far away to physically access it
        - The Remote Desktop Connection feature of Windows is not enabled
        - The computer normally runs a VNC server, but it seems to have stopped
        - The computer is definitely on and connected to the network
        - The computer has no password on the Admin account
        - I can VNC into other computers on the same LAN

    Given these conditions, how can I get into the target to start the VNC server, or even just reboot the target (VNC should open on startup)? I have tried PsExec and get "access is denied", and also tried "Connect to another computer..." from the Computer Management console and also get "access is denied".
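
    The "access is denied" from both PsExec and Computer Management is consistent with a default Windows policy ("Accounts: Limit local account use of blank passwords to console logon only"), which refuses every network logon for blank-password accounts regardless of tooling. The sketch below shows the shape of a fix, but note the chicken-and-egg: the remote registry write is itself a network logon, so it will likely be refused for the same reason unless some other credentialed account exists on the target (the service name winvnc is also a placeholder that varies by VNC package):

        rem would lift the blank-password restriction, if any working credentials exist:
        reg add \\TARGET\HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LimitBlankPasswordUse /t REG_DWORD /d 0 /f

        rem after which a service start or reboot becomes possible:
        psexec \\TARGET -u SomeUser -p SomePass net start winvnc
        shutdown /m \\TARGET /r /t 0

    Failing that, someone with physical access to set a password on the Admin account is likely the only way back in.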

  • How can I get rsync to ignore missing files?

    - by Joe Casadonte
    I'm executing a command like the following against several different systems:

        $ rsync -a -v [email protected]:'/path/to/first/*.log path/to/second.txt' /dest/folder/0007/.

    Sometimes *.log does not exist, and that's OK, but rsync generates the following error:

        receiving file list ...
        rsync: link_stat "/path/to/first/*.log" failed: No such file or directory (2)
        done

    Is there any way to suppress that? The only way I can think of is to use include and exclude filters, which just seem like a PITA to me. Thanks!
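
    If the goal is to keep a wrapper script from treating this as fatal, one hedged approach is to special-case rsync's exit status rather than silence the message: missing source files surface as exit code 23 ("partial transfer due to error"). A sketch, with the remote spec as a placeholder:

        rsync -a -v 'remote:/path/to/first/*.log path/to/second.txt' /dest/folder/0007/.
        rc=$?
        if [ "$rc" -eq 23 ]; then
            rc=0    # tolerate partial transfer: some sources (e.g. *.log) may legitimately not exist
        fi
        exit $rc

    The caveat is that code 23 also covers other partial-transfer failures (permission errors, for example), so this trades a false alarm for potential silence on real ones.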

  • Wildcard subdomain setup ... want to change host IP throws off client A records... what to do...

    - by Joe
    Here is the current setup (in a nutshell). The site is set up with a wildcard subdomain, so *.website.com is accessible. Clients can then domain-map their own domains with an A record to the server IP address, and it will translate to the appropriate *.website.com with redirections and env variables in htaccess. Everything is working perfectly... but now comes the problem. The site has grown larger than a single DQC Xeon server can handle at peak times. Looking at cloud options seems tempting, but clients are pointing their domains to a single IP address with the A record (our server). Now, this was probably bad planning from the start, but the question is: if this was to be done today, how would we set it up so that clients use a CNAME, perhaps, to point their domains to our server rather than an A record? And, if that is not possible for the root domain, how can we then use multiple IP addresses on our side to handle the incoming HTTP requests? Complex enough? Hope I've explained it well!
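
    For context on why the CNAME idea stalls: a CNAME is not allowed at a zone apex (the bare client domain), which is exactly why clients end up pinned to a single A record. A hedged sketch of the usual compromise, using documentation-range IPs:

        ; client's DNS zone (illustrative)
        clientdomain.com.      IN A      203.0.113.10    ; one A record per front-end IP
        clientdomain.com.      IN A      203.0.113.11    ; round-robin across them
        www.clientdomain.com.  IN CNAME  clients.website.com.

    The www host can CNAME to a name you control, letting you re-point every client at once; the apex needs either multiple A records or one stable IP that fronts the real servers - in practice a load balancer or reverse proxy, so the advertised address never has to change again.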

  • Is there a way for one SSH config file to include another one?

    - by Joe Casadonte
    In case it matters: OS: Ubuntu 10.04. SSH: OpenSSH_5.3p1 Debian-3ubuntu5. I'd like one SSH config file to include another one. The use case would be to define whatever I want in my default .ssh/config file and then prepend a couple of extra things in a separate file (e.g. ~/.ssh/foo.config). I want the second file to incorporate the first one, though, so I don't have to duplicate everything in the first one. Is that doable? Thanks!
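
    For what it's worth, the Include directive only arrived in much later OpenSSH releases (7.3), so 5.3 has no native way to do this. A common workaround under that constraint is to assemble the file on the fly and point ssh at it with -F; a sketch:

        # ssh reads exactly one config file, so build a combined one
        cat ~/.ssh/foo.config ~/.ssh/config > ~/.ssh/combined.config
        ssh -F ~/.ssh/combined.config myhost

        # or wrap it in a shell function so it stays transparent:
        sshf() { cat ~/.ssh/foo.config ~/.ssh/config > ~/.ssh/.merged; ssh -F ~/.ssh/.merged "$@"; }

    Because ssh uses the first obtained value for each option, putting foo.config first gives its settings precedence, which matches the prepend behaviour described.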

  • Fake demonstration software for command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to rewrite various commands - wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands:

        echo "hello world!"
        ls
        ls -l
        ls -l | sort

    I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$ echo "hell - and if I typed any other letters the terminal would fill up with the remainder of the command until I press enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is - does similar software exist for giving demonstrations? Or even, is this an easy thing to script up...? EDIT - two quick things. First of all, I'm on osx - but it would be nice to get a general solution for other people who arrive here from google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually I'd like it to at least look like I'm typing live - that's why I put in the bit about the one-to-one keymapping, but I don't think I explained that quite as well as I could have...
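
    For the "script it up" route, the behaviour described is small enough to sketch in bash; this assumes the prepared commands live one per line in demo.txt:

        #!/bin/bash
        # replay.sh - fake live typing: every keypress reveals the next character
        # of the pre-recorded command, so on-stage typos are impossible.
        while IFS= read -r cmd <&3; do
            printf '%s$ ' "${PWD##*/}"        # fake prompt
            for ((i = 0; i < ${#cmd}; i++)); do
                read -rsn1 _                  # wait for any keypress, discard it
                printf '%s' "${cmd:i:1}"      # echo the scripted character instead
            done
            read -rsn1 _                      # one final keypress stands in for Enter
            printf '\n'
            eval "$cmd"                       # actually run the prepared command
        done 3< demo.txt

    Reading the command list on file descriptor 3 keeps stdin free for the keystrokes, and this works in the stock bash on OS X.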

  • In OpenOffice Spreadsheet, how can I set the default Date format?

    - by Joe Casadonte
    I'm using OO 3.1.1 on Ubuntu 9.10 (in case that matters to the answer). I like my dates to appear as YYYY-MM-DD. I can't think of a time when I want to see a date in any other format, so I'm constantly changing how dates look. That's manageable, though annoying. What's gotten me to the point of posting is that when I edit a cell with a date value, I have to edit it in the format MM/DD/YYYY, which is really, really annoying, as I'm usually mucking with the day (or possibly the month), and very seldom the year. So there's lots of cursor or mouse use, wasting my time. So is there a way that I can change how dates are edited, or at least the default display format? Thanks!
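
    In case it helps: applying the format is Format > Cells > Numbers with the code below, and making it the default is a matter of editing the Default cell style rather than each cell (menus sketched from OO 3.x and may differ slightly):

        YYYY-MM-DD

    Press F11 to open Styles and Formatting, right-click the Default style > Modify > Numbers, and set the same code; new date cells then display that way without per-cell fiddling. The format expected while editing a cell, unfortunately, tends to follow the locale rather than the display format, so switching the locale (Tools > Options > Language Settings > Languages) to one with ISO-style dates is the usual workaround for the MM/DD/YYYY editing annoyance.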

  • Is there a find and replace application with multiple rules?

    - by Joe
    I'm looking for a search and replace tool that takes a list of rules and a file (or directory) and runs those rules on the files. I have a bunch of different things that need to be changed in many files, and I'd like to be able to run all the rules in one shot. I was hoping I could put the rules in a separate file, and use the same set of rules over and over again. Does such a tool exist?
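
    This is essentially what sed's -f flag does, if a command-line tool is acceptable; a sketch with placeholder names:

        # rules.sed - one substitution per line
        s/colour/color/g
        s/organise/organize/g

        # apply to a single file, editing in place:
        sed -i -f rules.sed somefile.txt

        # or to every .txt file under a directory:
        find ./docs -type f -name '*.txt' -exec sed -i -f rules.sed {} +

    The same rules file can be reused across runs, which matches the one-set-of-rules, many-files requirement.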

  • Fedora 17 Hangup on Boot (plymouth-quit-wait)

    - by Joe
    I am having an issue after several updates where, when I try to boot, the boot animation "loads", then flashes with the Fedora logo, and then sits there. After checking into it more, I found that two services were failing to start. The first is tcsd.service and the second is plymouth-quit-wait.service. I was able to disable tcsd.service (in the hope I could boot without it); however, I have been unable to do anything about the second service. I am running FC17 and the akmod nvidia drivers, on an ASUS G53SW. Everything is up to date as far as I know. What is the exact problem that I am facing, and how can I go about troubleshooting or fixing it?
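
    A hedged triage sequence for this kind of hang, using standard systemd tooling on F17 (service names as reported above):

        systemctl status plymouth-quit-wait.service
        systemd-analyze blame        # list units by how long they took

        # temporary test: at the GRUB menu, append to the kernel line:
        #   plymouth.enable=0 3
        # to skip the splash and boot to a text console instead

    plymouth-quit-wait.service merely waits for the splash to be told to exit, so it hanging usually means something downstream (often the display manager, or a video driver such as akmod-nvidia rebuilding after a kernel update) never reached the point of telling Plymouth to quit; it is worth checking whether the nvidia module actually built for the running kernel.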

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members, and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:

    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.

    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any changes would be a change to the actual container rather than to individual files.

    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!
