Search Results

Search found 36013 results on 1441 pages for 'public fields'.

  • Database Server Hardware components (order of importance): CPU speed vs. CPU cache vs. RAM vs. disk

    - by nulltorpedo
    I am new to the database world and would like to know which hardware specs are crucial for database performance. I have searched the internet and found this so far (in order of decreasing importance): 1) Hard disk: get an SSD, basically (far more IOPS than spinning disks). 2) Memory: get as much as you can afford. 3) CPU: for the same money spent, prefer a larger cache size over a higher clock speed. Are these findings sensible? EDIT: I would like to focus on CPU speed vs. CPU cache size. EDIT 2: The database stores some combination of ints and int arrays with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The INSERT statements are very few. EDIT 3: We also have a lot of views (basically SELECT queries).

  • Convert port numbers to protocol names?

    - by Berkay
    I'm simply using tshark -r botnet.pcap -T fields -E separator=';' -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport '(tcp.flags.syn == 1 and tcp.flags.ack == 0)' to see all initiated "legal TCP" connections. However, I need the destination port numbers converted to names such as "http" or "netbios". I'm not using the -n option, but I still get: 128.3.45.128;62259;208.233.189.150;80 This is what I'm trying to get: 128.3.45.128;62259;208.233.189.150;http (or 128.3.45.128;62259;208.233.189.150;80;http, which is an even better option for me). Any ideas from tshark users, or any other tool suggestions?
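
    One hedged post-processing idea (my sketch, not from the thread; assumes a glibc system where getent can read the services database): keep tshark's numeric output and append the looked-up service name as an extra column, which gives the "80;http" form.

        tshark -r botnet.pcap -T fields -E separator=';' \
          -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport \
          '(tcp.flags.syn == 1 and tcp.flags.ack == 0)' |
        awk -F';' '{
          # ask the system services database for the name of the destination port
          cmd = "getent services " $4 "/tcp"
          name = $4                      # fall back to the number if no match
          if ((cmd | getline line) > 0) { split(line, f, " "); name = f[1] }
          close(cmd)
          print $0 ";" name
        }'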

  • What kind of server do I need to handle 10 million requests and MySQL queries a day?

    - by Calvin
    I'm a newbie at server administration and I'm looking for a powerful hosting service to host my new website. This website is basically the back end of a mobile online game, and it will: handle up to 10 million HTTPS requests and MySQL queries a day; store up to 2000 GB of files on the hard disk; transfer perhaps 5000 GB of data in and out per month; run on PHP and MySQL; and hold 10 million records in the MySQL database, each record having 5-10 fields of around 100 bytes. I really don't know what kind of server I need to handle these requirements. My questions are: what CPU/RAM do I need for a dedicated server or VPS? Which hosting companies are able to offer this kind of dedicated server or VPS? What about cloud computing? I've researched Amazon EC2, but it seems complicated to me, and I've contacted Rackspace, but strangely they said Cloud Sites is not suitable for my requirements. I wonder if there is another cloud hosting company. Any other alternative? Thanks very much!
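
    For scale, a back-of-envelope reading of those numbers (my arithmetic, not from the post): 10 million requests/day divided by 86,400 seconds is roughly 116 requests/s on average, so with a typical peak-to-average factor of 3-5 the box should be sized for roughly 350-600 requests/s at peak; 5000 GB/month works out to about 15 Mbit/s of average bandwidth; and 10 million records at 5-10 fields of ~100 bytes each is only about 5-10 GB of table data, which comfortably fits in RAM on a mid-range dedicated server.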

  • Amazon EC2 pem file stopped working suddenly

    - by Jashwant
    I was connecting to Amazon EC2 through SSH and it was working well, but all of a sudden it stopped working. I am not able to connect anymore with the same key file. What could have gone wrong? Here's the debug info:

    ssh -vvv -i ~/Downloads/mykey.pem [email protected]
    OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to ec2-54-222-60-78.eu.compute.amazonaws.com [54.229.60.78] port 22.
    debug1: Connection established.
    debug3: Incorrect RSA1 identifier
    debug3: Could not load "/home/jashwant/Downloads/mykey.pem" as a RSA1 public key
    debug1: identity file /home/jashwant/Downloads/mykey.pem type -1
    debug1: identity file /home/jashwant/Downloads/mykey.pem-cert type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
    debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4
    debug2: fd 3 setting O_NONBLOCK
    debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
    debug3: load_hostkeys: loaded 1 keys
    debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: none,[email protected],zlib
    debug2: kex_parse_kexinit: none,[email protected],zlib
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: none,[email protected]
    debug2: kex_parse_kexinit: none,[email protected]
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: mac_setup: found hmac-md5
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug2: mac_setup: found hmac-md5
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: sending SSH2_MSG_KEX_ECDH_INIT
    debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
    debug1: Server host key: ECDSA d8:05:8e:fe:37:2d:1e:2c:f1:27:c2:e7:90:7f:45:48
    debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
    debug3: load_hostkeys: loaded 1 keys
    debug3: load_hostkeys: loading entries for host "54.229.60.78" from file "/home/jashwant/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:5
    debug3: load_hostkeys: loaded 1 keys
    debug1: Host 'ec2-54-222-60-78.eu.compute.amazonaws.com' is known and matches the ECDSA host key.
    debug1: Found key in /home/jashwant/.ssh/known_hosts:4
    debug1: ssh_ecdsa_verify: signature correct
    debug2: kex_derive_keys
    debug2: set_newkeys: mode 1
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug2: set_newkeys: mode 0
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug2: service_accept: ssh-userauth
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug2: key: jashwant@jashwant-linux (0x7f827cbe4f00)
    debug2: key: /home/jashwant/Downloads/mykey.pem ((nil))
    debug1: Authentications that can continue: publickey
    debug3: start over, passed a different list publickey
    debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
    debug3: authmethod_lookup publickey
    debug3: remaining preferred: keyboard-interactive,password
    debug3: authmethod_is_enabled publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: jashwant@jashwant-linux
    debug3: send_pubkey_test
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey
    debug1: Trying private key: /home/jashwant/Downloads/mykey.pem
    debug1: read PEM private key done: type RSA
    debug3: sign_and_send_pubkey: RSA 9b:7d:9f:2e:7a:ef:51:a2:4e:fb:0c:c0:e8:d4:66:12
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey
    debug2: we did not send a packet, disable method
    debug1: No more authentication methods to try.
    Permission denied (publickey).

    I've already Googled everything and checked: the public DNS is the same (it hasn't changed), the username is ubuntu as it's an Ubuntu AMI (same as before), permissions are 400 on the mykey.pem file, and the SSH port is enabled via the security groups (same as before).
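
    One hedged diagnostic worth adding here (my suggestion, not from the post): regenerate the public key from the .pem and compare it with what the instance should have in ~ubuntu/.ssh/authorized_keys; if they no longer match, the key on the instance was changed rather than the local file.

        # Print the public key embedded in the private key file (standard OpenSSH tool)
        ssh-keygen -y -f ~/Downloads/mykey.pem

        # Compare against the instance's authorized_keys, e.g. by attaching the
        # root volume to another instance and inspecting
        # /home/ubuntu/.ssh/authorized_keys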

  • Match and export the Subject line and From header with procmail

    - by Nick
    I would like to use procmail, or a combination of procmail and formail, to take an email message and match a specific keyword in the Subject: line/header, and, if it matches, pass the contents of the Subject: header and the From: header to a perl script. I have researched this and have not been able to get it working all the way. I am new to procmail, and I assume I need to use a nesting block, but I am not sure how to grab the data and then export it. I have been able to match the subject line and export it to a perl script, where it is read via $ARGV[0], but I get lost when trying to match and export multiple fields. I tried subscribing to the procmail mailing list, but it appears to be broken. Can someone help me with this, please?
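
    A minimal, untested sketch of the kind of recipe that might do this (the keyword REPORT and the script path are placeholders of mine): procmail assigns the output of a backticked command, run with the message on stdin, to a variable, and formail -cx extracts a single unfolded header.

        :0
        * ^Subject:.*REPORT
        {
          # -c unfolds continued header lines, -x extracts the named header's body
          SUBJECT=`formail -cxSubject:`
          FROM=`formail -cxFrom:`

          :0
          | /usr/local/bin/myscript.pl "$FROM" "$SUBJECT"
        }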

  • iPhone Cannot log into WiFi suddenly [closed]

    - by Stanley
    I suddenly ran into this strange problem. My iPhone has been using the WiFi setup at my home for more than a year. Suddenly it cannot connect to the internet, despite still showing the full WiFi signal icon. An older iPhone 3GS can still browse the net over the same WiFi, so the wireless router should be working. When I check the non-functioning iPhone, its "Router" and "DNS" entries are blank, while the functioning iPhone has entries in both fields. Also, the subnet masks are different. Please help.

  • Error with FTP since binding via httpcfg

    - by Linda
    I was in a similar position to this question and bound two IP addresses using httpcfg. Since doing this, FTP does not seem to be working on IIS 6 on Windows Server 2003. Any ideas what could be wrong? The command I ran was: httpcfg set iplisten -i xxx.xxx.x.x I get the following when I try to connect via FileZilla: Error: Connection timed out Error: Failed to retrieve directory listing The log file shows the following:

    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2009-08-17 13:54:05
    #Fields: date time c-ip cs-username cs-method cs-uri-stem sc-status sc-win32-status
    2009-08-17 13:54:05 91.85.70.17 Client [1]USER Client 331 0
    2009-08-17 13:54:05 91.85.70.17 Client [1]PASS - 230 0

    In the FTP site settings, I have the site pointing to the IP address used with httpcfg and the port set to 21. Update: I can see a directory listing if I connect via the built-in command-line FTP client in Windows Vista. If I try to connect via Windows Explorer, I start in the wrong folder and no files are listed, just directories.

  • SQL-like GROUP BY and SUM for text files on the command line?

    - by dnkb
    I have huge text files with two fields: the first is a string, the second is an integer. The files are sorted by the first field. What I'd like in the output is one line per unique string and the sum of the numbers for identical strings. Some strings appear only once while others appear multiple times. E.g., given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. Any suggestions on how to do this, either with GnuWin32 command-line tools or in a Linux shell? Thanks!

    glehnia 10
    glehnia 22
    glehniae 343
    glehnii 923
    glei 1171
    glei 2283
    glei 3466
    gleib 914
    gleiber 652
    gleiberg 495
    gleiberg 709
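
    A minimal awk sketch (gawk ships with GnuWin32 and awk is in any Linux shell); because the input is already sorted on the first field, it can stream and emit each total as the group ends, printing e.g. glehnia 32 for the sample above:

        awk '$1 != prev { if (NR > 1) print prev, sum; prev = $1; sum = 0 }
             { sum += $2 }
             END { if (NR > 0) print prev, sum }' input.txt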

  • Word mergefield wildcard not correctly matching

    - by aZn137
    Hello, below is my mergefield code: { IF { MERGEFIELD Subs_State } = "GA" "blah blah" "{ IF { MERGEFIELD CEOrgStates } = "GA" "blah blah" ""} "} I'm pulling records from an MS Access db. My goal is to check whether a record's Subs_State field matches "GA", or its CEOrgStates field contains the word "GA" (some records hold values like |FL|CA|GA|CT|KY|, without quotes). When I merge the docs, Word doesn't seem to be able to match with the wildcards: if I compare against "*GA" (fields ending in GA), it works; however, the double wildcard "*GA*" doesn't seem to work at all. Here are the things I've tried: have the data in lowercase, then compare in lowercase; have the data in lowercase, convert to and then compare in uppercase; do the opposite of the above two with uppercase data; use "*GA*" and "*ga*" (no pipes); use different delimiters. Nothing seems to work with double-wildcard matching. What am I doing wrong? Thanks!

  • ICMP Data Field Modified - What does it Mean?

    - by Lucretius
    A normal ICMP data field is a pretty standard 32-byte string of alphabet characters: abcdefghijklmnopqrstuvwabcdefghi. I have captured a series of ICMP echo requests with Wireshark whose data fields have been modified, and I have no idea what they mean. (Underscores represent spaces.)

    abcdefghijklmnopprstuvwxyzabcdefghi
    abcdefghijklmnoparstuvwxyzabcdefghi
    __abcdefghijklmnopsrstuvwxyzabcdefghi
    __abcdefghijklmnopsrstuvwxyzabcdefghi
    __abcdefghijklmnopwrstuvwxyzabcdefghi
    __abcdefghijklmnopdrstuvwxyzabcdefghi__

    Note: the position of the "q" character (it is replaced in each packet), the addition of "xyz", and the spaces before and after the payload. Reading the substituted character at the "q" position across the packets spells "passwd", the Linux/Unix command for changing a user's password. Any ideas?

  • How to Track Duplicate Downloads

    - by user1089173
    I have a product and we are running a campaign that lets users download a trial version. My question: is there any way to detect multiple downloads from the same computer behind a proxy? For example, if a person uses a proxy server, changes his/her IP, and downloads our trial version multiple times from different IPs, is there a way to detect that it is the same person downloading the software? We only require a name and email address before downloading, and those can easily be made up. We cannot add any other fields to the form (like a phone number).

  • Two different sites, same IP, same top-level domain, on IIS 7.5 -- one works and the other displays HTTP 404 error

    - by user717236
    I'm running a Windows 2008 R2 box with IIS 7.5 as the web server. On IIS, I have two websites: mysubsite1.mysite.com and mysubsite2.mysite.com. There is only one IP on the server and both sites share it. Here is how I have the bindings configured: mysubsite1.mysite.com works fine. However, mysubsite2.mysite.com gives me the following error: Not Found HTTP Error 404. The requested resource is not found. Now, if I clear the Host name field for mysubsite1.mysite.com and restart the web server, both sites work! The question is: why does the Host name field on the first site cause an HTTP 404 error for the second site when both sites' Host name fields are filled in? I would appreciate any insight. Thank you.
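
    For reference, a hedged way to review and adjust those bindings from the command line using the appcmd tool that ships with IIS 7.5 (the site name below is an assumption; the second command replaces the site's bindings with a single host-header binding):

        %windir%\system32\inetsrv\appcmd.exe list site
        %windir%\system32\inetsrv\appcmd.exe set site "mysubsite2.mysite.com" /bindings:http/*:80:mysubsite2.mysite.com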

  • Excel 2007: Using a time to set XY chart axis scaling like in 2003

    - by CookieOfFortune
    In Excel 2003, when you created an XY chart using time as an axis, you could set the scaling of those axes by typing in the date. In Excel 2007, you have to use the decimal serial version of the time (i.e., the number of days since Excel's epoch). I was wondering if there is a way to avoid making that calculation. A developer posted on a blog that this issue would be fixed in a future release, but none of the versions of Excel 2007 I have tried resolve it. The relevant quote: "Those of you familiar with this technique of converting time to a decimal may recall that Excel 2003 allowed you to enter a date and time like '1/1/07 11:00 AM' directly in the axis option min/max fields and Excel would calculate the appropriate decimal representation. This currently does not work in Excel 2007 but will be fixed in a subsequent release."
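
    For anyone needing the workaround in the meantime, the decimal can be produced in a worksheet cell with standard functions and pasted into the axis Minimum/Maximum box (1900 date system assumed):

        =DATE(2007,1,1)+TIME(11,0,0)

    which evaluates to 39083.4583, the serial form of 1/1/07 11:00 AM.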

  • How to enable forward search in Adobe Reader from TeXnicCenter?

    - by Sergiy Byelozyorov
    I am creating a LaTeX document in TeXnicCenter with the LaTeX => PDF profile. There is a feature that opens the auto-generated PDF and scrolls it to the paragraph under the cursor in TeXnicCenter. This works with Sumatra PDF; the feature is called "Forward search" in the profile settings, "Viewer" tab. I would like to have the same feature with Adobe Reader. Is this possible at all? Do I have to use the "command line" or the "DDE command" setting? What do I have to fill in the "Command", "Server", and "Topic" fields of the profile settings?
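
    Partial answer, hedged: Adobe Reader has no SyncTeX-style forward search, so it cannot jump to the paragraph under the cursor the way Sumatra PDF can. The classic Acrobat DDE settings below (server and topic names have varied across Reader versions, so treat them as assumptions) only make TeXnicCenter open, or force-reopen, the finished PDF:

        Command: [DocOpen("%bm.pdf")][FileOpen("%bm.pdf")]
        Server:  acroview
        Topic:   control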

  • Can somebody help me install this jBPM-based workflow management suite?

    - by Eternal Saint
    1. It's the Book Workflow Interface software available at SourceForge (http://bookworkflowint.sourceforge.net/). Any instructions on installing and configuring it would be great, especially on Windows, though I can try Linux-specific ones as well. I could not find any installation instructions. (I posted this on Stack Overflow by mistake and was directed here.)
    2. Can you suggest any good scanning/digitization workflow software (document imaging) that I can adapt to my scanner software? In fact, even a simple one would do, perhaps based on hot folders. I just want to be able to track the unique ID/barcode of each scanned book and its status, so that it is not scanned again. The books or manuscripts could run to millions of pages. I thought of using some kind of generic bug-tracking tool to track a few fields, but I don't know if that's the right choice. Thank you very much.

  • SQL Server 2008 Optimization

    - by hgulyan
    I learned today that if you append OPTION (MAXDOP 0) to your query, the query can run on multiple processors, and if it's a huge query it may perform faster. I know the general guidelines on query optimization (using indexes, selecting only the needed fields, etc.); my question is about SQL Server optimization itself, maybe changing some options in the configuration or anything else. What guidelines are there for SQL Server optimization? Thank you. P.S. I suppose this is not the right place to ask server-related questions. Should I delete it, or can it be migrated to Server Fault?
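
    For concreteness, a hedged T-SQL illustration (the table and columns are invented): the per-query hint from the question, plus the instance-wide default that it overrides.

        -- per-query hint: allow this statement to use all available schedulers
        SELECT CustomerId, SUM(Amount) AS Total
        FROM dbo.Sales                -- hypothetical table
        GROUP BY CustomerId
        OPTION (MAXDOP 0);

        -- instance-wide default, which the hint above overrides
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 0; RECONFIGURE;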

  • Any way to void document upload when user cancels?

    - by Michael Broschat
    We have developed a set of metadata fields for the user to complete during the file upload process (MOSS). What happens is that the user chooses Upload, then picks the file on his system. Sometimes, when he sees what metadata is required, he clicks Cancel, knowing that he cannot supply the data at that time. The file uploads anyway and sits in the library without any attached metadata. Our client finds this unacceptable, but I haven't found a way to cancel the actual upload when the user tells us he no longer wants to proceed.
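
    One common server-side fallback, sketched and hedged (my suggestion, not from the thread; the field name is hypothetical and deployment/binding details are omitted): an ItemAdded event receiver on the library, using the standard WSS 3.0 object model, that deletes any document arriving without the required metadata.

        using Microsoft.SharePoint;

        public class RequireMetadataReceiver : SPItemEventReceiver
        {
            // Fires after the file has been uploaded; removes it again if the
            // user cancelled out of the metadata form and the field is empty.
            public override void ItemAdded(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                if (item["MyRequiredField"] == null)
                {
                    item.Delete();
                }
            }
        }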

  • Last-modified date not showing up in indexed documents

    - by Jared
    We have tried everything, pushing feeds and setting up fields in the "document dates" screen, to get every document in the index to carry a value for the last-modified date. Nothing seems to make documents retain a last-modified date in the index. How does one enable a last-modified date for all documents in the index? Note: the meta value will come from an external XML source (not a database). We followed the Google instructions for our GSA version (6.4.0.G.22). Yes, I know the GSA version is quite old; we've been told by our Google support representatives that updating the GSA to the latest version "should" resolve the problem, and by "should" I mean their GSA did the same thing (no last-modified date), and updating our GSA is another can of worms entirely. :)

  • Why has Google Toolbar stopped working in Firefox 3.6.8?

    - by DanM
    Yesterday, I noticed that my Google Toolbar has stopped working. The toolbar is still there, but it is completely blank (no buttons or input fields, just a gray strip of nothing). In addition to this, tooltips have stopped working. If I disable the toolbar, the tooltips return to normal, so that particular problem definitely seems to be a side effect of the toolbar. I tried disabling all my other add-ons, but that made no difference. I also tried uninstalling and reinstalling Google Toolbar. That made no difference. I haven't tried reinstalling the browser, but I'm reluctant to do that unless absolutely necessary. Am I the only one having this problem? Any ideas how to fix? Note: I'm running Windows XP SP3. I'm using Google Toolbar version 7.1.20100723W.

  • How do I add xen kernel boot parameters in grub2?

    - by Matt
    I know that I can add command-line parameters to the grub2 command line by editing /etc/default/grub, according to this answer: How do I add a boot parameter to grub2 in Ubuntu 10.10? However, that applies to ALL kernels, does it not? How do I apply command-line parameters to specific kernels only, i.e. only Xen? I want to append something like: xen-pciback.hide=(06:00.0) I'm guessing I need to add it somewhere in the file /etc/grub.d/20_linux_xen, which contains:

    #! /bin/sh
    set -e

    # grub-mkconfig helper script.
    # Copyright (C) 2006,2007,2008,2009,2010 Free Software Foundation, Inc.
    #
    # GRUB is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # GRUB is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with GRUB.  If not, see <http://www.gnu.org/licenses/>.

    prefix=/usr
    exec_prefix=${prefix}
    bindir=${exec_prefix}/bin
    libdir=${exec_prefix}/lib
    . ${libdir}/grub/grub-mkconfig_lib

    export TEXTDOMAIN=grub
    export TEXTDOMAINDIR=${prefix}/share/locale

    CLASS="--class gnu-linux --class gnu --class os --class xen"

    if [ "x${GRUB_DISTRIBUTOR}" = "x" ] ; then
      OS=GNU/Linux
    else
      OS="${GRUB_DISTRIBUTOR} GNU/Linux"
      CLASS="--class $(echo ${GRUB_DISTRIBUTOR} | tr '[A-Z]' '[a-z]' | cut -d' ' -f1) ${CLASS}"
    fi

    # loop-AES arranges things so that /dev/loop/X can be our root device, but
    # the initrds that Linux uses don't like that.
    case ${GRUB_DEVICE} in
      /dev/loop/*|/dev/loop[0-9])
        GRUB_DEVICE=`losetup ${GRUB_DEVICE} | sed -e "s/^[^(]*(\([^)]\+\)).*/\1/"`
        # We can't cope with devices loop-mounted from files here.
        case ${GRUB_DEVICE} in
          /dev/*) ;;
          *) exit 0 ;;
        esac
      ;;
    esac

    if [ "x${GRUB_DEVICE_UUID}" = "x" ] || [ "x${GRUB_DISABLE_LINUX_UUID}" = "xtrue" ] \
        || ! test -e "/dev/disk/by-uuid/${GRUB_DEVICE_UUID}" \
        || uses_abstraction "${GRUB_DEVICE}" lvm; then
      LINUX_ROOT_DEVICE=${GRUB_DEVICE}
    else
      LINUX_ROOT_DEVICE=UUID=${GRUB_DEVICE_UUID}
    fi

    linux_entry ()
    {
      os="$1"
      version="$2"
      xen_version="$3"
      recovery="$4"
      args="$5"
      xen_args="$6"
      if ${recovery} ; then
        title="$(gettext_quoted "%s, with Xen %s and Linux %s (recovery mode)")"
      else
        title="$(gettext_quoted "%s, with Xen %s and Linux %s")"
      fi
      printf "menuentry '${title}' ${CLASS} {\n" "${os}" "${xen_version}" "${version}"
      if ! ${recovery} ; then
        save_default_entry | sed -e "s/^/\t/"
      fi

      if [ -z "${prepare_boot_cache}" ]; then
        prepare_boot_cache="$(prepare_grub_to_access_device ${GRUB_DEVICE_BOOT} | sed -e "s/^/\t/")"
      fi
      printf '%s\n' "${prepare_boot_cache}"
      xmessage="$(gettext_printf "Loading Xen %s ..." ${xen_version})"
      lmessage="$(gettext_printf "Loading Linux %s ..." ${version})"
      cat << EOF
        echo '$xmessage'
        multiboot ${rel_xen_dirname}/${xen_basename} placeholder ${xen_args}
        echo '$lmessage'
        module ${rel_dirname}/${basename} placeholder root=${linux_root_device_thisversion} ro ${args}
    EOF
      if test -n "${initrd}" ; then
        message="$(gettext_printf "Loading initial ramdisk ...")"
        cat << EOF
        echo '$message'
        module ${rel_dirname}/${initrd}
    EOF
      fi
      cat << EOF
    }
    EOF
    }

    linux_list=`for i in /boot/vmlinu[xz]-* /vmlinu[xz]-* ; do
        basename=$(basename $i)
        version=$(echo $basename | sed -e "s,^[^0-9]*-,,g")
        if grub_file_is_not_garbage "$i" && grep -qx "CONFIG_XEN_DOM0=y" /boot/config-${version} 2> /dev/null ; then echo -n "$i " ; fi
      done`
    xen_list=`for i in /boot/xen*; do
        if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi
      done`
    prepare_boot_cache=

    while [ "x${xen_list}" != "x" ] ; do
      list="${linux_list}"
      current_xen=`version_find_latest $xen_list`
      xen_basename=`basename ${current_xen}`
      xen_dirname=`dirname ${current_xen}`
      rel_xen_dirname=`make_system_path_relative_to_its_root $xen_dirname`
      xen_version=`echo $xen_basename | sed -e "s,.gz$,,g;s,^xen-,,g"`
      echo "submenu \"Xen ${xen_version}\" {"
      while [ "x$list" != "x" ] ; do
        linux=`version_find_latest $list`
        echo "Found linux image: $linux" >&2
        basename=`basename $linux`
        dirname=`dirname $linux`
        rel_dirname=`make_system_path_relative_to_its_root $dirname`
        version=`echo $basename | sed -e "s,^[^0-9]*-,,g"`
        alt_version=`echo $version | sed -e "s,\.old$,,g"`
        linux_root_device_thisversion="${LINUX_ROOT_DEVICE}"

        initrd=
        for i in "initrd.img-${version}" "initrd-${version}.img" \
                 "initrd-${version}" "initrd.img-${alt_version}" \
                 "initrd-${alt_version}.img" "initrd-${alt_version}"; do
          if test -e "${dirname}/${i}" ; then
            initrd="$i"
            break
          fi
        done
        if test -n "${initrd}" ; then
          echo "Found initrd image: ${dirname}/${initrd}" >&2
        else
          # "UUID=" magic is parsed by initrds.  Since there's no initrd, it can't work here.
          linux_root_device_thisversion=${GRUB_DEVICE}
        fi

        linux_entry "${OS}" "${version}" "${xen_version}" false \
            "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
        if [ "x${GRUB_DISABLE_RECOVERY}" != "xtrue" ]; then
          linux_entry "${OS}" "${version}" "${xen_version}" true \
              "single ${GRUB_CMDLINE_LINUX}" "${GRUB_CMDLINE_XEN}"
        fi

        list=`echo $list | tr ' ' '\n' | grep -vx $linux | tr '\n' ' '`
      done
      echo "}"
      xen_list=`echo $xen_list | tr ' ' '\n' | grep -vx $current_xen | tr '\n' ' '`
    done

  • Cluster failover and strange gratuitous ARP behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel I have come close to what the issue is, but I still don't fully understand what is happening. I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing the cluster resource over to the secondary node. When failing the cluster over to the "secondary" node, which is on the same subnet as the "primary", the failover initially works, and the cluster resource continues to work for a couple of minutes on the new node. That means the receiving node does send out a gratuitous ARP reply that updates the ARP tables on the network. But after some amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster address stops answering pings. So basically: I start a ping to the Exchange cluster address while it's running on the primary node. It works just fine. I fail the cluster resource group over to the secondary node and lose only one ping, which is acceptable. The cluster resource still answers for some time after the failover, and then all of a sudden the pings start timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen; has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node uses Intel NIC teaming with ALB. Each node is on the same subnet and has its gateway and so on entered correctly, as far as I can tell.

    Edit: I was wondering if it could be related to the network binding order, because the only difference I can see from node to node is in the local ARP table. On the "primary" node the ARP entries are generated with the cluster address as the source, while on the "secondary" they are generated from the node's own network card. Any input on this?

    Edit: OK, here is the connection layout. Cluster address: A.B.6.208/25. Exchange application address: A.B.6.212/25. Node A: 3 physical NICs; two teamed with Intel's teaming at address A.B.6.210/25, called "public"; the third used for cluster traffic, called "private", at 10.0.0.138/24. Node B: 3 physical NICs; two teamed with Intel's teaming at address A.B.6.211/25, called "public"; the third used for cluster traffic, called "private", at 10.0.0.139/24. Each node sits in a separate datacenter, connected together. The end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.

    Edit: I have been testing a little more. I created an empty application on the same cluster and gave it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem: after one or two minutes, clients on other subnets cannot ping the virtual IP of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging it. And if I then fail back to the original state, the situation reverses: now clients on the same subnet cannot ping, and those on other subnets can. We have another cluster set up the same way on the same subnet, with the same Intel network cards, the same drivers, and the same teaming settings, and there we are not seeing this. So it's somewhat confusing.

    Edit: OK, I've done some more research. I removed the NIC teaming on the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming; maybe some kind of bug?

    Edit: I did some more failing over without being able to make it fail, so removing the NIC team looks like a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem; it just seems to be some kind of MS/Intel hiccup, which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which the old team was called. So something has not been cleaned up in Windows, although the server HAS been restarted!

    Edit: OK, after re-establishing the ALB teaming the error came back, so I am now going to do some thorough testing and I will get back with my observations. One thing is for sure: it is related to Intel 82575EB NICs, ALB, and gratuitous ARP.

  • Dsquery nested groups

    - by Doctor Trout
    Hi there, how would I write a dsquery to get a list of all the members of a distribution list, expanding any nested groups to get their members as well? I've written this: dsquery * -filter "(&(memberOf=cn=...))" -r -limit 0 -attr CUSTOMFIELD sAMAccountName displayName > export.txt but it returns the nested distribution lists themselves, and I want them expanded. I then tried this: dsquery group -samid "NAME" | dsget group -members -expand > export.txt but this just lists the OU of each member, and I want the account name and a custom field returned. Is there any way either to choose which fields dsget returns, or to get dsquery to show nested group membership? Thanks.
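
    A hedged one-liner along those lines (untested; NAME and CUSTOMFIELD as in the question): let dsget do the recursive expansion, then feed each member DN back into dsquery * to pull the attributes. Run it from a cmd prompt; double the % signs inside a batch file.

        for /f "usebackq delims=" %m in (`dsquery group -samid "NAME" ^| dsget group -members -expand`) do @dsquery * %m -attr CUSTOMFIELD sAMAccountName displayName >> export.txt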

  • PowerShell BitLocker Recovery Key

    - by TheNoobofNoobs
    I'm trying to get a list of all computers that have a BitLocker recovery key (or related information) populated in their respective fields in AD. I am unable even to start on a script, as I don't know where to begin. I did find this online, but it doesn't appear to be working:

    foreach ($comp in get-adcomputer -filter *) {
        get-adobject -filter 'objectclass -eq "msFVE-RecoveryInformation"' -searchbase $comp.distinguishedname -properties msfve-recoverypassword,whencreated |
            sort whencreated | select msfve-recoverypassword -last 1
    }
    Export-Csv "FilePath.csv"

    Any ideas on how I can go about this? Running Windows 7, PowerShell 3.0, Windows Server 2008 R2.
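
    As quoted, the snippet never pipes anything into Export-Csv and drops the computer name. A hedged corrected sketch (assumes the ActiveDirectory module and read rights on the recovery attributes):

        Import-Module ActiveDirectory

        Get-ADComputer -Filter * | ForEach-Object {
            $comp = $_
            # recovery objects live beneath each computer account
            Get-ADObject -Filter 'objectClass -eq "msFVE-RecoveryInformation"' `
                -SearchBase $comp.DistinguishedName `
                -Properties msFVE-RecoveryPassword, whenCreated |
            Sort-Object whenCreated |
            Select-Object -Last 1 @{n='ComputerName';e={$comp.Name}},
                                  @{n='RecoveryPassword';e={$_.'msFVE-RecoveryPassword'}},
                                  whenCreated
        } | Export-Csv 'FilePath.csv' -NoTypeInformation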

  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog/logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads: log collection (potentially massive streams of data need to be written to disk and indexed quickly) and log querying (logs will be queried both by fixed fields such as date and source and by text search). What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to offload some processing? Some disks striped and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.

  • How to mail merge a hyperlink in Microsoft Word or Publisher 2010

    - by hjoelr
    I am trying to do an e-mail merge in Microsoft Publisher 2010 (which appears to handle mail merging like Microsoft Word), and I want a merged email address to come out as a live hyperlink in the resulting email. For example, one of the merge fields could be "EmailAddress", with an example address of [email protected]. In the document, I want the merge field "EmailAddress" to display as the default text of a hyperlink and also to set the target of the hyperlink to "mailto:EmailAddress" (e.g. mailto:[email protected]). I can't figure out how to get Publisher 2010 to do that, though I would think it's possible. Any help or pointers would be greatly appreciated!
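
    The usual field-code trick, sketched and hedged (Alt+F9 toggles field codes; every brace pair must be inserted with Ctrl+F9, not typed): nest the merge field inside a HYPERLINK field.

        { HYPERLINK "mailto:{ MERGEFIELD EmailAddress }" }

    One known caveat: Word-style merges tend to cache the hyperlink's display text from the first record, so the visible text often has to be refreshed with a small macro even when the underlying mailto: target merges correctly.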
