Search Results

Search found 13891 results on 556 pages for 'maybe homework'.


  • esxi 5 monitoring

    - by user134880
    I'm new to the ESXi/VMware world and have a few questions about monitoring an ESXi host. Right now I'm using a trial version of ESXi 5.1. I can see performance charts in the vSphere client. Cool. But I want to export this raw data (raw - I mean CSV, TXT or some format which I can parse later) to another server, to be able to parse it later and create custom charts. (Please do not advise me to try vCenter; I need custom charts etc.) I could run esxtop in batch mode and use that data... But... How do the vSphere client performance charts work? Where does the client get the data for its charts? If I use esxtop batch mode it will add extra load to the server. Is it possible to use the same source the vSphere client uses for its charts? There is a /var/lib/vmware/hostd/stats/hostAgentStats-20.stats file. It looks like it is in a binary format. As I understand it, this is exactly the data I need?!?! Any ideas how to parse it? Thanks! PS: maybe someone knows where to find (if there is such a thing) info about the processes running on the ESXi host?
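
    For reference, a rough sketch of the esxtop batch run mentioned above (the interval, sample count and output path are illustrative), plus the esxcli call that, if memory serves, lists host processes on ESXi 5.x:

        # batch mode: -b, -d sample interval in seconds, -n number of samples
        esxtop -b -d 10 -n 360 > /tmp/esxtop-stats.csv
        # processes running on the host (for the PS at the end)
        esxcli system process list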

    Read the article

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error as before. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors when I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here. That did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, but even without forcing sec=ntlmv2 it authenticates without errors. smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

            Sharename       Type      Comment
            ---------       ----      -------
            IPC$            IPC       Remote IPC
            ETC$            Disk      Remote Administration
            C$              Disk      Remote Administration
            Share           Disk

        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find the last line intriguing/alarming. Does anyone have any pointers!? Maybe I misread the effin manual.
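
    As a hedged sketch of one more variant worth trying (same host, share and mount point as above; username=, domain= and sec= are standard mount.cifs options, and the values are of course placeholders):

        mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share \
            -o username=username,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,sec=ntlmv2
        # then check the kernel log for the CIFS VFS message that accompanies error 5
        dmesg | tail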

    Read the article

  • How to turn off Excel "Header Row" without losing data in it?

    - by Ken
    I've been sent an Excel spreadsheet with a weird first row. Some of the cells say "Column1", "Column2", etc., but I can't delete their contents. If I select the cell and hit backspace, it goes blank, but when I press return, it goes right back to saying "Column1". I found another answer here that suggested this could be caused by "cell validation", but the validation window says "Any value", and also "show alert" (and I'm not seeing an alert), so I don't think that's it. The first row is white text on a blue background, if that means anything. The spreadsheet was sent to me in XLSX format, but I tried resaving it as XLS and opening that, and it made no difference. This is with the "ribbon" version of Excel (they got rid of the Help menu, so I don't know how to see what version number it is!). Thanks! Update: The Excel online help says to use ribbon Home tab - Cells - Delete - ... to delete cells. When I select anything in the first row, that menu is dimmed. So maybe Excel doesn't think row 1 consists of "cells"? Though I don't know what else it would call them. Update 2: I found it, kind of. If I click the "Design" tab in the ribbon, then uncheck "Header Row", the first row becomes a normal row of cells again. Unfortunately, the contents disappear entirely. I want to delete a few cells, not all 50+! And if I copy the first row before turning off "Header Row", it disappears from the clipboard when I uncheck that. So I kind of know what mode it's stuck in, but not a good way out of it.

    Read the article

  • How to connect a Win XP PC and Win 7 PC with a Linksys WRT54G Router to a PPPOE connection ?

    - by ChristianM
    I have two PCs and a WRT56G router. My provider uses a PPPoE connection (username and password). I can connect my Windows 7 PC easily, whether I choose Auto or PPPoE in the router's configuration. Maybe that's because this one had the connection in the first place, with the username and password set up. But I can't connect the Windows XP PC. Actually it does connect in a way, but it receives and sends just a few packets. I can actually see in My Network Places how many packets the gateway passes, and it's a big difference. And it won't open any page. How should I set up the router so I can give enough bandwidth to the Windows XP PC? There is another problem that I forgot to mention: the router can't connect with the PPPoE settings. I don't know why, but with the Auto DHCP setting, and with my Windows 7 PC connected using its own PPPoE settings, it works. EDIT2: I've installed Win 7 on the other PC so I won't have any compatibility problems between them. But I still can't connect the router to the internet with PPPoE settings. It looks like this: http://img641.imageshack.us/img641/9780/capturefba.png If you need any other screens or info, please tell me.

    Read the article

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects" - a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for ratings & comments site-wide for this first phase, which has got me thinking: UGC is tied to Tridion published pages & components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, not sit in some custom db table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers, so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing the Core Service to live servers" bit. Without this, though, I'd have to write a custom way to "transfer" it back over to the Content Manager - maybe like how Audience Manager Sync works? Has anyone done this before?

    Read the article

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work:

        #Left stick
        #Pointer
        Option "MapAxis1" "mode=relative axis=1.5x"
        Option "MapAxis2" "mode=relative axis=1.5y"
        #Right stick
        #Arrow keys
        Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right"
        Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down"

    But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck:

        #dpad
        #xmonad focus
        #up/down toggle window. l/r choose screen.
        Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j"
        Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e"

    I've also tried Super_R, plain old Super, Meta, and mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is shift. If I specify Shift_L or Shift_R, I get a capital letter. xev indicates that modifier keys are being pressed. If I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one? I'm also having trouble getting an axis to use other XF86 keys:

        # triggers
        # song selection
        Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward"
        Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack"

    That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after mouse buttons are bound, I'm listening.

    Read the article

  • Not getting gigabit from a gigabit link?

    - by marcusw
    I just upgraded my LAN to gigabit. This is what netperf has to say about things.

    Before:

        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  16384    10.02      94.13

    After:

        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  16384    10.01     339.15

    Only 340 Mbps? What's up with that? Background info: I'm connecting through a gigabit switch to a sheevaplug. I have Cat5e wiring in the walls and the run is maybe 30 feet. If you're not familiar with netperf, it has a tendency to give very stable results and never lie.

    Read the article

  • Installing Numpy locally

    - by Néstor
    I posted this question originally on StackOverflow, but a user suggested I move it here, so here I go! I have an account on a remote computer without root permissions, and I needed to install a local version of Python there (the remote computer has a version of Python that is incompatible with some code I have), plus Numpy and Scipy. I've been trying to install numpy locally since yesterday, with no success. I successfully installed a local version of Python (2.7.3) in /home/myusername/.local/, so I run this version of Python with /home/myusername/.local/bin/python. I tried two ways of installing Numpy. First, I downloaded the latest stable version of Numpy from the official webpage, unpacked it, went into the unpacked folder and did:

        /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local

    However, I get the following error, which is followed by a series of other errors (deriving from this one):

        gcc -pthread -shared build/temp.linux-x86_64-2.7/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.7 -lptf77blas -lptcblas -latlas -o build/lib.linux-x86_64-2.7/numpy/core/_dotblas.so
        /usr/bin/ld: /usr/local/lib/libptcblas.a(cblas_dptgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC

    Not really knowing what this meant (except that the error apparently has to do with the LAPACK library), I ran the same command as above but with LDFLAGS="-fPIC" prepended, as suggested by the error, i.e.:

        LDFLAGS="-fPIC" /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local

    However, I got the same error (except that the -fPIC flag was now added to the gcc command above). Second, I tried installing it using pip, i.e. doing /home/myusername/.local/bin/pip install numpy (after successfully installing pip in my local path). However, I get the exact same error. I searched on the web, but none of the errors seemed to be similar to mine. My first guess is that this has to do with some piece of code that needs root permissions to be executed, or maybe with some problem with the version of the LAPACK libraries or with gcc (gcc version 4.1.2 is installed on the remote computer). Help, anyone?
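
    A hedged aside: since the system ATLAS in /usr/local/lib appears to have been built without -fPIC, one commonly suggested workaround is to tell numpy's build not to use it at all (it then falls back to its slower bundled lapack_lite). The environment-variable trick below is the usual recipe; treat it as a sketch rather than a guaranteed fix:

        # from inside the unpacked numpy source directory
        BLAS=None LAPACK=None ATLAS=None \
            /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local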

    Read the article

  • network will not work properly after having run TCP optimizer, but safe mode settings work perfectly. how to restore?

    - by michele
    I was experiencing some issues with my connection while playing online, and I tried to optimize it by running TCP Optimizer on my PC (Windows 7 64-bit Professional). I thought maybe the situation could improve, but it didn't. Actually, I now get extremely slow page loading times, probably due to a very low RWIN value of 1024. I understand that Windows 7 has a system to automatically adjust the RWIN value when needed. The setting from netsh is "normal", so I guess something else must be wrong. I tried every automatic tool out there to restore Windows' default values, but I had no success. I currently have what should be labeled as "default values" for everything TCP Optimizer initially changed, but the problem persists. The thing is, I just found out that running Windows in safe mode SOLVES the problem completely. The problem is that as soon as I reboot, I get the same issue all over again. So my question is: is there a way to use SAFE MODE network settings in NORMAL mode?
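
    For what it's worth, a sketch of the stock checks and resets usually tried for this (standard Windows netsh commands, run from an elevated command prompt; a reboot is needed afterwards for the winsock/IP resets to take effect):

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=normal
        netsh winsock reset
        netsh int ip reset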

    Read the article

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. I noticed brute-force ssh login attempts from a certain IP, and after a little Googling I issued the following two commands (IP changed):

        iptables -A INPUT -s 202.54.20.22 -j DROP
        iptables -A OUTPUT -d 202.54.20.22 -j DROP

    Either this, or maybe some other action like a yum upgrade perhaps, caused the following fiasco: after rebooting the server, it came up without the network interface! I can only connect to it through the AWS Management Console Java ssh client, via the local 10.x.x.x address. The console's Attach Network Interface, as well as Detach, are greyed out for this instance. The Network Interfaces item at the left does not offer any subnets to choose from to create a new N.I. Please advise: how can I recreate a network interface for the instance? Upd. The instance is not accessible from outside: it cannot be pinged, SSH'ed or connected to by HTTP on port 80. Here's the ifconfig output:

        eth0      Link encap:Ethernet  HWaddr 12:31:39:0A:5E:06
                  inet addr:10.211.93.240  Bcast:10.211.93.255  Mask:255.255.255.0
                  inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1426 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:152085 (148.5 KiB)  TX bytes:208852 (203.9 KiB)
                  Interrupt:25

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.
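
    On the off chance the DROP rules themselves are in the way, a small sketch of how they could be listed and removed (standard iptables usage; note that unless they were explicitly saved with the distribution's iptables save mechanism, they would not normally survive the reboot anyway):

        iptables -L INPUT -n --line-numbers
        iptables -L OUTPUT -n --line-numbers
        iptables -D INPUT -s 202.54.20.22 -j DROP
        iptables -D OUTPUT -d 202.54.20.22 -j DROP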

    Read the article

  • how could application installations/configurations be easier in linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations with Ubuntu/Debian are heading the right way. But how could Linux be more user-friendly for us in the future? I thought that more could be automated, like in an IDE environment: e.g. typing svn would give us all the commands and a description of each command as you move between them with your keyboard. That would be great, but that's just one example. Another is navigation between folders in the terminal. Right now you have to type a lot just to jump from/to different folders; it would be great to have some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU, while they make the system more user-friendly and encourage maintenance of a server rather than frighten you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers? Is there something here that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today, and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • is ksplice production ready?

    - by faultyserver
    I would be interested to hear the serverfault community's experiences with Ksplice in production. A quick blurb from Wikipedia:

        Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system.

    and

        Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch.

    So a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences with Ksplice have been before going 'all in' and deploying this on our production servers. So, is anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • How to serve media across home network?

    - by TK Kocheran
    I'm looking to share my media across my home network. My router fully supports running a DLNA server, but I don't know if it'd be better to run the server from my main server computer instead of from the router, as the router would have to operate off of a network share while my server can operate directly off of the files. Here's what I need to serve, in order of importance: 1:1 ISO DVD rips (4-8GB files), MP4/H.264 encoded videos, MKV videos, MP3 files, JPEG/CR2 images. Maybe I'm completely ludicrous for wanting to push full DVD files across my network, but in reality, I would assume that only the parts of the actual file needed (i.e. the menu, the main video payload for the main title) would be served at any one time. Plus, encoding takes time and precious disk space, so why not stream it 1:1 ;) Does anyone know of the best way to accomplish this? The main goal is to serve it to the Logitech Revue downstairs, and the secondary goal is to serve it to other computers in the house. For music, I assume I could run a DAAP server, but I don't think the Revue supports that (and I can't exactly throw together an app that does it just yet).

    Read the article

  • pxe boot dos 7.x / 8.x on modern mainboard without floppy controller

    - by GitaarLAB
    How can I PXE boot MS-DOS 7.x / 8.x on a modern PC (a mainboard without a floppy controller) without using an external USB floppy drive? MS-DOS 6.22 and earlier, and other flavors, PXE boot just fine on floppy-less hardware. But DOS 7.x and 8.x render an error on boot: "Type the name of the Command Interpreter (e.g., C:\WINDOWS\COMMAND.COM)". I read somewhere during my research that this was a rather little-known error that started to become more common with the advent of floppy-controller-less hardware. On some hardware (BIOS dependent) one could plug a USB floppy drive into the computer before booting (but that MIGHT also require it to be a "golden floppy drive", as they were called back then). According to a Russian site (which I read about a year ago and cannot find the hyperlink for), MS-DOS versions later than 6.22 do some kind of floppy-drive reset during initialization, and since the system cannot reach a floppy controller, the error appears. How can I resolve this (without a physical external USB floppy)? Might there be some kind of virtual floppy driver that could resolve this (for example, one loaded before the DOS image loads)? Or could someone point me in the right direction (maybe even a hex address and some further explanation or something)? I'm using syslinux by the way.
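
    Since syslinux is already in the mix, a hedged sketch of the usual memdisk route (the image name is a placeholder; memdisk emulates the floppy through INT 13h, so whether the DOS 7.x/8.x kernel still complains on a controller-less board is exactly what would need testing):

        # pxelinux.cfg entry, with memdisk and dos71.img on the TFTP server
        LABEL dos71
            KERNEL memdisk
            APPEND initrd=dos71.img floppy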

    Read the article

  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create a replica off-site and have been having trouble getting it to work on a decent schedule, and recently we noticed that there were issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder, and it's causing issues with replication and backup, so we decided the cleanest way was to v2v the machine to another datastore so that we had a clean single-vmdk setup to work with. This is where our trouble started. We first started off with a v2v using VMware Converter, connecting to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507, the new VM wouldn't take the address: it simply obtained a new MAC, received a DHCP lease and then would only boot up to a blank red screen, never the login screen. So the next step was to do an offline v2v, once we finally got it working. Same thing: I followed the KB to the letter and still it wouldn't take the MAC. I then tried it again, and upon completion I compared both the old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached vmdk and then attached this to the new VM, and still no luck. After downloading the modified VMX file after the first boot and comparing it to the original I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue before on a P2V and I'm just wondering if someone could shed some light on this. Maybe it's to do with RHEL licensing?
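
    For reference, a hedged sketch of the two places the MAC usually has to agree for a RHEL5 guest (the address shown is only a placeholder inside VMware's static range, 00:50:56:00:00:00-00:50:56:3F:FF:FF):

        # in the .vmx file (per the KB article being followed):
        ethernet0.addressType = "static"
        ethernet0.address = "00:50:56:00:1a:2b"
        # inside the guest, RHEL5 pins the NIC to the old MAC via the HWADDR line in
        # /etc/sysconfig/network-scripts/ifcfg-eth0; update it to the new address
        # (or remove it) so eth0 comes up instead of grabbing a fresh DHCP identity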

    Read the article

  • Is execution of sync(8) still required before shutting down linux?

    - by Amos Shapira
    I still see people recommend the use of "sync; sync; sync; sleep 30; halt" incantations when talking about shutting down or rebooting Linux. I've been running Linux since its inception, and although this was the recommended procedure in the BSD 4.2/4.3 and SunOS 4 days, I can't recall that I have had to do that for at least the last ten years, during which I have probably gone through shutdowns/reboots of Linux maybe thousands of times. I suspect that this is an anachronism from the days when the kernel couldn't unmount and sync the root filesystem and other critical filesystems required even during single-user mode (e.g. /tmp), and therefore it was necessary to tell it explicitly to flush as much data as it could to disk. These days, without having found the relevant code in the kernel source yet (digging through http://lxr.linux.no and Google), I suspect that the kernel is smart enough to cleanly unmount even the root filesystem, and the filesystem is smart enough to effectively do a sync(2) before unmounting itself during a normal "shutdown"/"reboot"/"poweroff". The "sync; sync; sync" is only necessary in extreme cases where the filesystem won't unmount cleanly (e.g. physical disk failure) or the system is in a state that only forcing a direct reboot(8) will get it out of its freeze (e.g. the load is too high to let it schedule the shutdown command). I also never do the "sync" procedure before unmounting removable devices, and I have never hit a problem. Another example: Xen allows the DomU to be sent a "shutdown" command from the Dom0; this is considered a "clean shutdown" without anyone having to log in and type the magical "sync; sync; sync" first. Am I right, or was I just lucky for a few thousand system shutdowns?

    Read the article

  • Word 2010, Multiple Columns, Vertical center one column only

    - by Nancy N Jones
    I am creating a document with two columns in Microsoft Word 2010. I want the first column to be centered vertically. I want the second column to be on the same page, with its vertical placement starting from the top. I highlight the text in the first column that I want centered vertically, then go to Page Layout > Margins > Custom Margins > Layout, where you can choose to center the vertical alignment. I have chosen the "Section start" to be "Column" and have also tried "Continuous". In all cases it always shifts all of my second-column information to a new page. I don't want my second-column text to be on a new page; I want it to be on the same page and vertically aligned from the top, not the center. Am I understanding the functionality of the Section start setting on the Layout tab correctly? Maybe the page layout isn't the correct formatting tool to use. What I am really doing is formatting columns, and I haven't found anywhere to format the columns for this. Am I missing some important column formatting features? I know that I can use paragraph formatting and add space above the first line of text to make it look like it is centered vertically. However, this is a template for a master document and will be changed frequently. I really would like the first-column text to be automatically centered vertically without having to go in and manually change the space above the paragraph every time. Your assistance would be greatly appreciated.

    Read the article

  • Evernote from vim

    - by juanpablo
    I am searching for a way to edit Evernote notes from vim. I began with this:

        #!/bin/bash
        evernoteDir="$HOME/Library/Application*Support/Evernote/data"
        dataDir=$(ls -trlh $evernoteDir| tail -n 1| awk '{print $NF}')
        contentDir="$evernoteDir/$dataDir/content"
        file=$(ls -trlh $contentDir | tail -n 1| awk '{print $NF}')
        vim -c 's/div>/div>\r/g' $contentDir/$file/content.html

    (https://gist.github.com/1256416) Or maybe I should create a vim plugin for this... do you have any suggestions?

    EDIT: for simpler editing of the Evernote note in HTML format, I made this vim function:

        " Markup function {{{
        fun! MkdToHtml() "{{{
          " markdown to html
          silent! execute '%s/ $/<br\/>/g'
          silent! execute '%s/\*\*\(.*\)\*\*/<b>\1<\/b>/g'
          silent! execute '%s/\t*###\(.*\)/<H3>\1<\/H3>/g'
        endf "}}}
        command! -complete=command MkdToHtml call MkdToHtml()
        nn <silent> <leader>mm :MkdToHtml<CR>
        " }}}

    and a vim function to open the last note edited:

        fun! LastEvernote() "{{{
          " a better solution is with the evernote api
          let evernoteDir=expand("$HOME")."/Library/Application*Support/Evernote/data"
          let dataDir=system("ls -trlh ".evernoteDir."| tail -n 1| awk '{print $NF}'")
          let contentDir=evernoteDir."/".dataDir."/content"
          let contentDir=substitute(contentDir,"\n","",'g')
          let note=system("ls -trlh ".contentDir." | tail -n 1| awk '{print $NF}'")
          let note=substitute(note,"\n","",'g')
          sil! exec 'sp '.contentDir.'/'.note.'/content.html'
          sil! exec '1s/>/>\r/g'
          sil! exec '%s/<br.*\/>/<br\/>\r/g'
          sil! exec '%s/<\//\r<\//g'
          sil! exec 'g/^\s*$/d'
          normal gg
          sil! exec '1,4fo'
          sil! exec '$-1,$fo'
        endf

    (https://gist.github.com/1289727)

    Read the article

  • What is the best way to recover from a mysql replication fail?

    - by Itai Ganot
    Today, the replication between our master mysql db server and the two replication servers dropped. I have a procedure here which was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you, and I'd appreciate it if you could give your thoughts about it and maybe even tell me how it can be done quicker.

    At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    Copy the values from the result of the last command somewhere. Without closing the connection to the client (because that would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't ended. To do it, perform the following command in the mysql client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to mysql and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync the slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    where the values of the above fields are the ones you copied before. Finally type:

        START SLAVE;

    To check that everything is working again, type SHOW SLAVE STATUS; and you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm at the stage of copying the db from the master to the other two replication servers, and it has taken more than 6 hours to this point. Isn't that too slow? The servers are connected through a 1gb switch.
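
    A hedged thought on the 6-hour copy: for InnoDB tables, a dump taken with --single-transaction plus --master-data records the binlog coordinates by itself and avoids holding the global read lock for the duration of the dump; something along these lines (standard mysqldump options, file name illustrative):

        mysqldump --all-databases --single-transaction --master-data=2 --quick \
            | gzip > /tmp/master-dump.sql.gz
        # --master-data=2 writes the CHANGE MASTER TO coordinates as a comment at the
        # top of the dump; --quick streams rows instead of buffering whole tables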

    Read the article

  • Radeon 6950 - Garbling of text and graphics in certain Windows only

    - by Greg
    This morning I noticed that the text in Gmail (in Firefox 4) looked a little funny (kind of thin, maybe some color fringing). I went to work and thought it might be some ClearType issue, or something with the Direct2D way that FF4 draws to the screen. When I came back from work (I left the computer on), the problem was much worse - way beyond ClearType nit-picking. The text was barely readable. I opened Chrome and there was no such problem. It seems like only windows that use hardware acceleration are garbled, and ones that use GDI are not. But I fired up Dragon Age and didn't notice any problems (I only really looked at the main menu though). Here is a link to a screen shot that illustrates the problem. Notice how the Windows Live Mesh window is completely unreadable, and the text in Firefox 4 (left) is pretty bad, while Chrome, the Windows Control Panel, and the taskbar are perfectly fine. The fact that the problem shows up in screen shots, and that it only happens in certain windows, makes me confident that the problem cannot be with the monitor or DVI cable. I am using the AMD Radeon drivers from 4/27/11. The card I have (MSI Frozr II) came with a slight overclock (810MHz) out of the box, but it looks like it's not running at full clock when I'm on the Windows desktop (CCC reports 450MHz). Still, I underclocked it to the stock reference clock (800MHz) and it made no difference. The idle temperature according to Afterburner is 42-44 Celsius, which seems a tad high but not enough to cause a problem - it's cold to the touch if I open up the machine. What the heck could be causing this? The problem varies in intensity. As we speak I'm in Firefox and things look better than they did earlier - it'll probably get worse again soon. Radeon 6950 (MSI Frozr II), Seasonic X 560, Core i5 2500K at stock clock speeds, 16GB RAM, Asus P8P67 M Pro.

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work?

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = site-server-q9
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost = Out outgoing mail relay
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = 10.X.X.X
        smtpd_client_message_rate_limit = 1
        anvil_rate_time_unit = 1h

    master.cf extract:

        anvil unix - - - - 1 anvil
        smtp inet n - - - - smtpd
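
    A hedged way to double-check what postfix thinks is configured, and whether anvil ever reports anything (both are stock postfix tools; the log path is the usual Ubuntu default):

        postconf smtpd_client_message_rate_limit anvil_rate_time_unit
        tail -f /var/log/mail.log | grep -i anvil   # anvil logs periodic "statistics:" lines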

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning to use a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will be running MS SQL databases (application, SharePoint, BI) and the other half will be file services and IIS. To enhance the capacity of the storage, we'll be adding an MD1220 enclosure with another 24x900GB to the MD3220. Both DAS will have 2 controllers. Our current measured IOPS is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS with RAID 10 only and the other DAS with RAID 5. That will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS to get the best possible IOPS/MBps, given that each DAS will have to have 2 hot spares. For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 - 2 hot spares) with both controllers assigned to the one disk group, or is it better to have 2 disk groups of 11 disks each, each assigned to one of the two controllers? Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other controller. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group. I would assume that there is no right or wrong, and that it all depends very much on the specific application behaviour, so I am looking for some general ideas on what the pros and cons of the different options are. If there are other meaningful options, feel free to propose them.

    Read the article

  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi, we supply zone files to our customers. Some zone files are big, about 300MB, and some are quite small, maybe 1MB. We had an issue where someone set up a script to continually download a file; imagine downloading a 300MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit the downloads somehow. We currently use the proftpd server. Also note that there are different users for different domains - say, if you want to download the zone file for the .INFO domain, then you use a particular user, and that user can't download any other zone's file. This is what we are looking for: a maximum of 400MB of downloads per user per day, or even a different daily download limit per user; one connection per user at any time; a maximum of 5 (non-simultaneous) connections per user per day, with anyone trying to exceed that banned for 24 hours. Has anyone used an FTP server with similar restrictions to the above? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
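
    As a hedged starting point with proftpd itself: the per-session pieces below are stock directives (the message text and rate number are illustrative); the per-day byte quota and the 5-connections-then-ban-for-24h rule would likely need extra modules such as mod_quotatab and mod_ban, which is worth verifying against the proftpd docs before relying on it:

        # proftpd.conf fragment -- a sketch, not a tested config
        MaxClientsPerUser 1 "Sorry, only one session per user is allowed"
        TransferRate RETR 512        # cap each download at roughly 512 KB/s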

    Read the article

  • How can visiting a webpage infect your computer?

    - by Cybis
    My mother's computer recently became infected with some sort of rootkit. It began when she received an email from a close friend asking her to check out some sort of webpage. I never saw it, but my mother said it was just a blog of some sort, nothing interesting. A few days later, my mother signed in on the PayPal homepage. PayPal gave some sort of security notice which stated that to prevent fraud, they needed some additional personal information. Among some of the more normal information (name, address, etc.), they asked for her SSN and bank PIN! She refused to submit that information and complained to PayPal that they shouldn't ask for it. PayPal said they would never ask for such information and that it wasn't their webpage. There was no such "security notice" when she logged in from a different computer, only from hers. It wasn't a phishing attempt or redirection of some sort, IE clearly showed an SSL connection to https://www.paypal.com/ She remembered that strange email and asked her friend about it - the friend never sent it! Obviously, something on her computer was intercepting the PayPal homepage and that email was the only other strange thing to happen recently. She entrusted me to fix everything. I nuked the computer from orbit since it was the only way to be sure (i.e., reformatted her hard drive and did a clean install). That seemed to work fine. But that got me wondering... my mother didn't download and run anything. There were no weird ActiveX controls running (she's not computer illiterate and knows not to install them), and she only uses webmail (i.e., no Outlook vulnerability). When I think webpages, I think content presentation - JavaScript, HTML, and maybe some Flash. How could that possibly install and execute arbitrary software on your computer? It seems kinda weird/stupid that such vulnerabilities exist.

    Read the article

  • Issues regarding internet connectivity

    - by andySF
    Hello. My problem started when Yahoo Messenger stopped connecting. I tried to see if Internet Explorer was working, but it would not load any page. The Internet Explorer diagnostics say that something is wrong with my DNS (though using just the IP of Google or Yahoo or my local webserver was not working either). I use Windows 7, and at the time I had Internet Explorer 8; after a lot of failed updates to IE9 I successfully installed the Romanian version of IE9 (now I have IE8 again after a system restore). Then I installed Service Pack 1. I've done a lot of things and I will try to enumerate them, but my problem persists. The settings for Yahoo Messenger and Internet Explorer are OK. I've tried to reset winsock and IP from netsh. I've scanned my PC with Spybot, Malwarebytes, Trojan Remover (simplysup), Loaris Trojan Remover, Avast, NOD32, Kaspersky, Bitdefender, a lot of registry cleaners including CCleaner, and maybe others that I cannot remember now. I reset the registry permissions using subinacl. At one point my file permissions were set just to "TrustedInstaller", and I put the permissions back on files and folders using another Windows 7 machine as a model. I have tried so many things that now I'm stuck in a loop, using different security tools to check for problems. Oh, and my virtual machines are working just fine (I'm using VirtualBox). Please help. PS: Reinstalling Windows is not an option. Thank you!

    Read the article
