Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • Copying Data from another Excel Workbook based on a matching id

    - by Kyle Begeman
    I am working with two workbooks. One workbook has an id and a category name. The other workbook shows a name and a category section that has an id number (but not the actual description). Basically, I want to copy the full category text from the old workbook into a new column in my current one, matched on the id number. What kind of formula can I use to look up the id/category pair and then copy the text into the new workbook in a new column? Any help is great!
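    A VLOOKUP against the old workbook is the usual fit here. A minimal sketch, assuming (these names are placeholders) the old workbook is Categories.xlsx with ids in column A and category text in column B of Sheet1, that workbook is open, and the id to match sits in A2 of the current sheet:

        =VLOOKUP(A2, [Categories.xlsx]Sheet1!$A:$B, 2, FALSE)

    The FALSE argument forces an exact match on the id; fill the formula down the new column to translate every id.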

    Read the article

  • Set an Excel cell's color based on multiple other cells' colors

    - by Lord Torgamus
    I have an Excel 2007 spreadsheet for a list of products and a bunch of factors to rate each one on, and I'm using Conditional Formatting to set the color of the cells in the individual attribute columns. I want to fill in the rating column for each item with a color, based on the color ratings of its individual attributes. Examples of ways to determine this:

    - the color of the category in which the item scored worst
    - the statistical mode of the category colors
    - the average of the category ratings, where each color is assigned a numerical value

    How can I implement any or all of the above rules? (I'm really just asking for a quick overview of the relevant Excel feature; I don't need step-by-step instructions for each rule.)
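    A hedged sketch of the third rule (all cell references here are assumptions): mirror each attribute as a numeric score in helper columns, say 1 for red up to 3 for green in H2:L2, then drive the rating cell's conditional format from

        =AVERAGE(H2:L2)

    The worst-category rule is the same idea with MIN in place of AVERAGE, and conditional formatting rules on the resulting number ranges map the score back to a color.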

    Read the article

  • Raspberry Pi based Hadoop cluster

    - by Dmitriy Sukharev
    Is it at least possible to build a Hadoop cluster from Raspberry Pi-based nodes? Can such a cluster meet the hardware requirements of Hadoop? And if so, how many Raspberry Pi nodes are required? I understand that a cluster of several Raspberry Pi nodes, while cheap, will not be powerful. My purpose is to set up a cluster without the risk of losing personal data from my desktop or notebook, and to use it to study Hadoop. I'd appreciate any suggestions for a better way to organize a cheap Hadoop cluster for study purposes. UPD: I've seen that the recommended hardware for Hadoop is 16-24 GB of memory, multi-core processors, and 1 TB of HDD, but those don't look like minimal requirements.

    Read the article

  • Sorting Files into Subfolders based on EXIF Date

    - by honestor
    I have a huge directory from an HDD recovery that contains 70,000+ JPEG files. I tried playing around with some AppleScripts that I found, but had no luck. I have already installed ExifTool, which might be useful for this task. The current directory structure is as follows:

        dir001
          file0001.jpg
          ...
          file9999.jpg
        dir002
          file0001.jpg
          ...
          file9999.jpg
        ...
        dir070
          file0001.jpg
          ...
          file9999.jpg

    The files mostly have EXIF data, but sometimes there are files without metadata. Now I hope to be able to sort and rename these files into folders based on the date:

        1999
          1999 01 31
            1999_01_31_-_22_59_59.jpg
        2000
          2000 05 20
            2000_05_20_-_21_59_59.jpg
            2000_05_20_-_22_59_59.jpg

    I figured AppleScript/Automator might come in handy for this; however, every other solution would be welcome, too!
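    Since ExifTool is already installed, it can do the whole move-and-rename in one pass. A hedged sketch, assuming the recovered folders live under ~/recovered (test on a small copy first):

        exiftool -r '-FileName<DateTimeOriginal' \
            -d '%Y/%Y %m %d/%Y_%m_%d_-_%H_%M_%S%%-c.%%e' ~/recovered

    The -d format string builds the year/date folder hierarchy and the timestamped filename; %%-c appends a counter when two photos share the same second. Files without a DateTimeOriginal tag are skipped with a warning, so they can be swept up separately afterwards.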

    Read the article

  • Creating a list based on a column

    - by MikkoP
    I need to create a dropdown list in sheet A based on the values in column A of sheet B. I selected column A on sheet B and named it Models. Then I clicked on the cell in sheet A where I wanted the list to be and selected Data -> Data validation -> Data validation. On the Settings page I selected List in the Allow section and checked Ignore blank and In-cell dropdown. In the Source section I entered =Models. This way I get all the right values, plus a lot of blank entries. How do I prevent the blank lines from appearing in the list?
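    The blanks appear because the named range spans the whole column. One common workaround, sketched here on the assumption that the model names start in cell A1 of sheet B with no gaps, is to redefine Models as a dynamic range sized to the filled cells:

        =OFFSET(B!$A$1, 0, 0, COUNTA(B!$A:$A), 1)

    Entered in the "Refers to" box for the Models name, COUNTA counts the non-empty cells and OFFSET sizes the range to match, so the dropdown grows and shrinks with the list.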

    Read the article

  • Deny IIS6 web request based on URL parameters?

    - by user21146
    I've got a legacy app running a third-party ecommerce system under IIS6. Some spammers recently discovered a bad security vulnerability in one of the store's forms, which allows them to send arbitrary emails from our system. Unfortunately, this store "feature" is built into the default.aspx page's code-behind, and I have no way to disable it without shutting down the store. How can I filter out URL requests with a given querystring parameter? I.e., I want to filter out requests to http://www.mysite.com/store/?id=SendSpam based on the "SendSpam" string.
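    IIS6 cannot reject requests on querystring content by itself, but Microsoft's free URLScan ISAPI filter can. A hedged sketch - the section and option names are as I recall them from the URLScan 3.x custom-rules reference, so verify against the docs - in urlscan.ini:

        [Options]
        RuleList=DenySpamForm

        [DenySpamForm]
        DenyDataSection=SpamStrings
        ScanUrl=0
        ScanAllRaw=0
        ScanQueryString=1

        [SpamStrings]
        SendSpam

    After editing the ini and restarting IIS, requests whose querystring contains SendSpam should be rejected before they ever reach default.aspx.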

    Read the article

  • Linux command line based spam checker?

    - by anonymous-one
    Does a command-line-based spam checker exist? We created a mailbox at a third party and unfortunately chose to disable spam checking in the initial setup. There is no way to re-enable spam checking; the mailbox must be deleted (and thus all contents lost) and re-created. Does anything exist where we can pump in either:

    A) subject + from + to + body + all other fields, or
    B) a raw message dump (headers + body),

    and the command line will tell us whether the email is possibly spam? Thanks.
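    SpamAssassin fits option B. A minimal sketch, assuming the spamassassin package is installed and message.eml is a raw dump with headers and body:

        spamassassin --test-mode < message.eml

    prints the message back with a score report appended, while

        spamassassin --exit-code < message.eml && echo ham || echo spam

    is the script-friendly form: it exits non-zero when the message scores as spam.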

    Read the article

  • Name-based virtual hosting in Apache

    - by malvikus
    I'd like to set up name-based virtual hosting in Apache, but I don't have a DNS name (local private network). I want to get something like this:

        http://192.168.0.1/wiki    - first virtual host - wiki
        http://192.168.0.1/redmine - second virtual host - redmine

    I assume this should be achievable with the ServerName option in the <VirtualHost> section of each vhost, but the Apache documentation makes no mention of using an IP address as the FQDN. Is it possible? How can I get what I want? P.S.: I want to share my sites on the same subnet only, so anyone who can ping me can open http://my_ip/wiki and get the wiki, or http://my_ip/redmine and get Redmine.
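    Note that both URLs share one host (the IP) and differ only in path, so name-based vhosts cannot tell them apart - vhost selection keys off the Host: header, and a bare IP sends the same one for both. A hedged sketch of the simpler per-path route, with the document roots as assumptions:

        <VirtualHost *:80>
            DocumentRoot /var/www/default
            Alias /wiki /var/www/wiki
            Alias /redmine /var/www/redmine
        </VirtualHost>

    If the apps ship their own Apache config snippets, including those instead of hand-written Alias lines works too.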

    Read the article

  • Oracle Database 12c is here!

    - by Maria Colgan
    Oracle Database 12c was officially released today and is now available for download. Along with the software release comes a whole new set of collateral that explains in detail all of the new features and functionality you will find in this release. The Optimizer page on Oracle.com has all the juicy details about what you can expect from the Optimizer in Oracle Database 12c. There you will find the following three new white papers:

    - What to expect from the Oracle Optimizer in Oracle Database 12c
    - SQL Plan Management with Oracle Database 12c
    - Understanding Optimizer Statistics with Oracle Database 12c

    Over the coming months we will also present an in-depth series of blog posts on all of the cool new Optimizer features in 12c, so stay tuned for that - and happy reading! +Maria Colgan

    Read the article

  • Day 2 of Oracle OpenWorld 2012 October 1st

    - by Maria Colgan
    Oracle OpenWorld started yesterday and San Francisco is just buzzing with Oracle folks! If you are attending the conference, don't miss the opportunity to chat with the Optimizer development team at one of our technical sessions or at the Oracle demo grounds. Our first technical session (Session CON8455) happens tomorrow at 1:15pm, but the Oracle Optimizer demo booth opens today. We are located in the database demo grounds, in Moscone South, booth number 3157. Members of the Optimizer team will be available from 9:45am to 6pm today to answer any Optimizer questions you might have and, of course, to dole out our limited edition Optimizer bumper stickers - the must-have souvenir from this year's conference! +Maria Colgan

    Read the article

  • HTML5 for IE6.0

    - by marharépa
    Hello! Do you know any method to make this HTML code work in IE 6 or 7 (or 8) without adding any HTML elements, or will IE skip all the HTML5 elements? If I just want to style the elements with CSS - I don't want to use any other features - is a document.createElement("nav") DOM call enough to trick IE into treating this like a plain HTML document?

        <!DOCTYPE HTML>
        <head>
            <meta charset="UTF-8">
            <title>title</title>
            <link type="text/css" rel="stylesheet" href="reset.css">
            <link type="text/css" rel="stylesheet" href="style.css">
        </head>
        <body>
            <header>code of header</header>
            <nav>code of nav</nav>
            <section>code of gallery</section>
            <article>code of article</article>
            <footer>code of footer</footer>
        </body>
        </html>

    Thank you.
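    Yes - the trick (commonly shipped as "html5shiv") is exactly that: create each unknown element once via script in the <head>, before the elements are parsed, and old IE will then recognize and style them. A minimal sketch of the idea, wrapped in a conditional comment so other browsers ignore it:

        <!--[if lt IE 9]>
        <script>
          // Teach IE 6-8 about the HTML5 elements used on this page
          var tags = ['header', 'nav', 'section', 'article', 'footer'];
          for (var i = 0; i < tags.length; i++) {
            document.createElement(tags[i]);
          }
        </script>
        <![endif]-->

    Remember to also set display: block on these elements in CSS, since IE has no default styling for them.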

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program:

        #include <stdio.h>

        struct Node
        {
          int a:16 __attribute__ ((packed));
          int b:16 __attribute__ ((packed));

          unsigned int c:27 __attribute__ ((packed));
          unsigned int d:3 __attribute__ ((packed));
          unsigned int e:2 __attribute__ ((packed));
        };

        int main (int argc, char *argv[])
        {
          Node n;
          n.a = 12345;
          n.b = -23456;
          n.c = 0x7ffffff;
          n.d = 0x7;
          n.e = 0x3;

          printf("3-bit field cast to int: %d\n", (int)n.d);
          n.d++;
          printf("3-bit field cast to int: %d\n", (int)n.d);
        }

    The program purposely causes the 3-bit bit field to overflow. Here's the (correct) output when compiled using "g++ -O0":

        3-bit field cast to int: 7
        3-bit field cast to int: 0

    Here's the output when compiled using "g++ -O2" (and -O3):

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    Checking the assembly of the latter example, I found this:

        movl    $7, %esi
        movl    $.LC1, %edi
        xorl    %eax, %eax
        call    printf
        movl    $8, %esi
        movl    $.LC1, %edi
        xorl    %eax, %eax
        call    printf
        xorl    %eax, %eax
        addq    $8, %rsp

    The optimizer has simply inserted "8", assuming 7+1=8, when in fact the number overflows and wraps to zero. Fortunately the code I care about doesn't overflow, as far as I know, but this situation scares me - is this a known bug, a feature, or expected behavior? When can I expect gcc to be right about this?

    Edit (re: signed/unsigned): It's being treated as unsigned because it's declared as unsigned. Declaring it as int, you get this output (with -O0):

        3-bit field cast to int: -1
        3-bit field cast to int: 0

    An even funnier thing happens with -O2 in this case:

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    I admit that attribute is a fishy thing to use; in this case it's the difference in optimization settings I'm concerned about.

    Read the article

  • How can I choose different hints for different joins for a single table in a query hint?

    - by RenderIn
    Suppose I have the following query:

        select *
          from A, B, C, D
         where A.x = B.x
           and B.y = C.y
           and A.z = D.z

    I have indexes on A.x, B.x, B.y, C.y, and D.z. There is no index on A.z. How can I give this query a hint to use an INDEX hint on the A.x join but a USE_HASH hint on the A.z join? It seems like hints only take the table name, not the specific join, so when a single table participates in multiple joins I can only specify one strategy for all of them. Alternatively, suppose I'm using a LEADING or ORDERED hint on the above query. Both of these hints also only take table names, so how can I ensure that the A.x = B.x join takes place before the A.z = D.z one? I realize in this case I could list D first, but imagine D subsequently joins to E, and the D-E join is the last one I want in the entire query. A third configuration: suppose I want the A.x join to be the first of the entire query and the A.z join to be the last. How can I use hints to have a single join from A take place first, followed by the B-C join, and the A-D join last?
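    For reference, Oracle's join-method hints apply to the table being joined in at each step of the chosen join order, so combining LEADING with per-table hints gets part of the way there. A hedged sketch (the index name is an assumption):

        select /*+ LEADING(a b c d)
                   INDEX(b b_x_idx)
                   USE_HASH(d) */ *
          from A a, B b, C c, D d
         where a.x = b.x
           and b.y = c.y
           and a.z = d.z

    Here LEADING fixes the join order, INDEX steers the A-B join through B's index, and USE_HASH(d) applies to the step that brings D in - i.e. the A.z = D.z join.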

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:

    - There are multiple users (distinct usr_id's).
    - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    - trans_id is unique only for very small time ranges: over time it repeats.
    - remaining_lives (for a given user) can both increase and decrease over time.

    Example:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          07:00   |       1       |   1  |    1
          09:00   |       4       |   2  |    2
          10:00   |       2       |   3  |    3
          10:00   |       1       |   2  |    4
          11:00   |       4       |   1  |    5
          11:00   |       3       |   1  |    6
          13:00   |       3       |   3  |    1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          11:00   |       3       |   1  |    6
          10:00   |       1       |   2  |    4
          13:00   |       3       |   3  |    1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        create TABLE lives (time_stamp timestamp, lives_remaining integer,
                            usr_id integer, trans_id integer);
        insert into lives values ('2000-01-01 07:00', 1, 1, 1);
        insert into lives values ('2000-01-01 09:00', 4, 2, 2);
        insert into lives values ('2000-01-01 10:00', 2, 3, 3);
        insert into lives values ('2000-01-01 10:00', 1, 2, 4);
        insert into lives values ('2000-01-01 11:00', 4, 1, 5);
        insert into lives values ('2000-01-01 11:00', 3, 1, 6);
        insert into lives values ('2000-01-01 13:00', 3, 3, 1);
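    Postgres has an answer-shaped feature here: DISTINCT ON, which keeps the first row of each group under an explicit sort. A hedged sketch of that approach:

        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;

    One row per usr_id survives: the one with the latest time_stamp, ties broken by the larger trans_id. An index on (usr_id, time_stamp DESC, trans_id DESC) should let the planner satisfy this without a full sort. The window-function route - row_number() OVER (PARTITION BY usr_id ORDER BY time_stamp DESC, trans_id DESC) filtered to row 1 - is the portable equivalent of Oracle's analytic functions.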

    Read the article

  • Wordpress OptimizePress (Theme) error when creating new page

    - by user594777
    I just installed the newest version of WordPress and also installed the OptimizePress theme. I am getting the following errors when trying to add a new page in WordPress. Any help would be appreciated.

        Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1578
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1581
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1584
        Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 174
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 177
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 180
        Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1725
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1728
        Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1731
        Warning: Cannot modify header information - headers already sent by (output started at /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php:1578) in /home/admin/domains/mywebsite.com/public_html/wp-includes/functions.php on line 830
        Warning: Cannot modify header information - headers already sent by (output started at /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php:1578) in /home/admin/domains/mywebsite.com/public_html/wp-includes/functions.php on line 831
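    The first warning is the root cause: the web server user cannot create a directory under the theme folder, and the later "No such file or directory" and "headers already sent" warnings cascade from it. A hedged sketch of the usual fix - which user the web server runs as (apache, www-data, nobody, ...) depends on the host, so check with ps first:

        cd /home/admin/domains/mywebsite.com/public_html/wp-content
        chown -R apache:apache themes/OptimizePress
        # or, if ownership can't be changed, loosen the mode instead:
        chmod -R 775 themes/OptimizePress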

    Read the article

  • Optimizing simple search script in PowerShell

    - by cc0
    I need to create a script to search through just under a million files of text, code, etc. to find matches, and then output all hits on a particular string pattern to a CSV file. So far I have made this:

        $location = 'C:\Work*'
        $arr = "foo", "bar" # string patterns I want to search for (separately)

        for ($i = 0; $i -lt $arr.Length; $i++) {
            Get-ChildItem $location -Recurse |
                Select-String -Pattern $($arr[$i]) |
                Select-Object Path |
                Export-Csv "C:\Work\Results\$($arr[$i]).txt"
        }

    This returns a CSV file named "foo.txt" with a list of all files containing the word "foo", and a file named "bar.txt" with a list of all files containing the word "bar". Can anyone think of a way to optimize this script to make it work faster? Or ideas on how to make an entirely different but equivalent script that just works faster? All input appreciated!
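    The big cost is walking the directory tree once per pattern. Select-String accepts an array of patterns, so one hedged restructuring (relying on the Pattern property that Select-String records on each MatchInfo result) is to scan once and split the results afterwards:

        $location = 'C:\Work*'
        $patterns = 'foo', 'bar'

        # Single pass over the files; MatchInfo.Pattern records which pattern hit
        Get-ChildItem $location -Recurse |
            Select-String -Pattern $patterns |
            Group-Object Pattern |
            ForEach-Object {
                $_.Group | Select-Object Path |
                    Export-Csv "C:\Work\Results\$($_.Name).txt" -NoTypeInformation
            }

    This halves the I/O for two patterns and scales to many patterns at roughly constant cost, since the files are read only once.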

    Read the article

  • [MySQL] Optimize Query

    - by bordeux
    Hello. I have a problem optimizing this query:

        SET @SEARCH = "dokumentalne";

        SELECT SQL_NO_CACHE
               `AA`.`version` AS `Version`,
               `AA`.`contents` AS `Contents`,
               `AA`.`idarticle` AS `AdressInSQL`,
               `AA`.`topic` AS `Topic`,
               MATCH (`AA`.`topic`, `AA`.`contents`) AGAINST (@SEARCH) AS `Relevance`,
               `IA`.`url` AS `URL`
        FROM `xv_article` AS `AA`
        INNER JOIN `xv_articleindex` AS `IA` ON (`AA`.`idarticle` = `IA`.`adressinsql`)
        INNER JOIN (
            SELECT `idarticle`, MAX(`version`) AS `version`
            FROM `xv_article`
            WHERE MATCH (`topic`, `contents`) AGAINST (@SEARCH)
            GROUP BY `idarticle`
        ) AS `MG` ON (`AA`.`idarticle` = `MG`.`idarticle`)
        WHERE `IA`.`accepted` = "yes"
          AND `AA`.`version` = `MG`.`version`
        ORDER BY `Relevance` DESC
        LIMIT 0, 30

    Right now this query takes about 20 seconds. How can I optimize it? EXPLAIN gives this:

        1  PRIMARY  AA          ALL       NULL      NULL   NULL  NULL  11169  Using temporary; Using filesort
        1  PRIMARY  <derived2>  ALL       NULL      NULL   NULL  NULL    681  Using where
        1  PRIMARY  IA          ALL       accepted  NULL   NULL  NULL  11967  Using where
        2  DERIVED  xv_article  fulltext  topic     topic  0        1         Using where; Using temporary; Using filesort

    This is an example server with my data:

        user: bordeux_4prog
        password: 4prog
        phpmyadmin: http://phpmyadmin.bordeux.net/
        chive: http://chive.bordeux.net/

    Read the article

  • Key-Based SSH Permission denied (publickey) Ubuntu 12-04

    - by user125176
    I have configured sshd to accept key-based logins with LogLevel set to DEBUG, and uploaded my public key to ~/.ssh/authorized_keys, where permissions are set as:

        700 ~/.ssh
        600 ~/.ssh/authorized_keys

    From root, I can su - USERNAME. From the client I get Permission denied (publickey). On the server side, the log tells me that it "Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied":

        Client protocol version 2.0; client software version OpenSSH_5.2
        match: OpenSSH_5.2 pat OpenSSH*
        Enabling compatibility mode for protocol 2.0
        Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        permanently_set_uid: 105/65534 [preauth]
        list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth]
        SSH2_MSG_KEXINIT sent [preauth]
        SSH2_MSG_KEXINIT received [preauth]
        kex: client->server aes128-ctr hmac-md5 none [preauth]
        kex: server->client aes128-ctr hmac-md5 none [preauth]
        SSH2_MSG_KEX_DH_GEX_REQUEST received [preauth]
        SSH2_MSG_KEX_DH_GEX_GROUP sent [preauth]
        expecting SSH2_MSG_KEX_DH_GEX_INIT [preauth]
        SSH2_MSG_KEX_DH_GEX_REPLY sent [preauth]
        SSH2_MSG_NEWKEYS sent [preauth]
        expecting SSH2_MSG_NEWKEYS [preauth]
        SSH2_MSG_NEWKEYS received [preauth]
        KEX done [preauth]
        userauth-request for user USERNAME service ssh-connection method none [preauth]
        attempt 0 failures 0 [preauth]
        PAM: initializing for "USERNAME"
        PAM: setting PAM_RHOST to "USERHOSTNAME"
        PAM: setting PAM_TTY to "ssh"
        userauth_send_banner: sent [preauth]
        userauth-request for user USERNAME service ssh-connection method publickey [preauth]
        attempt 1 failures 0 [preauth]
        test whether pkalg/pkblob are acceptable [preauth]
        Checking blacklist file /usr/share/ssh/blacklist.RSA-4096
        Checking blacklist file /etc/ssh/blacklist.RSA-4096
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied
        restore_uid: 0/0
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys2
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys2': Permission denied
        restore_uid: 0/0
        Failed publickey for USERNAME from IPADDRESS port 57523 ssh2
        Connection closed by IPADDRESS [preauth]
        do_cleanup [preauth]
        monitor_read_log: child log fd closed
        do_cleanup
        PAM: cleanup
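    Since the key files themselves have the right modes, the usual suspects are the directories above them and their ownership: sshd refuses to read authorized_keys when the home directory is group- or world-writable, or when the files aren't owned by the logging-in user (note the uid/gid switch to 1001/1002 in the log). A hedged checklist:

        chmod go-w /home/USERNAME
        chmod 700 /home/USERNAME/.ssh
        chmod 600 /home/USERNAME/.ssh/authorized_keys
        chown -R USERNAME:USERNAME /home/USERNAME/.ssh

    On Ubuntu 12.04 an encrypted home directory (ecryptfs) can also produce exactly this error, because the home isn't mounted until after login; pointing sshd_config's AuthorizedKeysFile at a location outside the encrypted home is the common workaround in that case.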

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) of production Windows systems during off-peak hours. I'm just wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could I potentially have the beginning blocks of the database committed to the image, data inserted into the db (which changes the beginning blocks of the DB on the server but not in the image), and then the later blocks committed to the image, leading to an inconsistent state? Here's an example of what I'm trying to illustrate. Imagine a simple data structure that has a number at the front representing the count of "a"s in the file, with the number and data delimited by a "-". For example:

        4-ajjjjjjjajuuuuuuuaoffffa

    If an "a" is changed, the data structure resets the number at the beginning of the file, such as:

        3-ajjjjjjjajuuuuuuuboffffa

    I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database:

        t0: 4-ajjjjjjjajuuuuuuuaoffffa
            ^ pointer is here
        t1: 4-ajjjjjjjajuuuuuuuaoffffa
                    ^ pointer is here (all data before this is committed to the image)
        t2: 4-ajjjjjjjajuuuuuuuboffffa
                               ^ pointer is here (all data before this is committed to the image)

    Also notice how one of the "a"s changed to a "b"; there are only three "a"s now.

        t3: 4-ajjjjjjjajuuuuuuuboffffa
                                     ^ pointer is here (all data is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further down the chain of blocks can be reflected in the image while important header and synchronization data has already been committed. The out-of-date header information doesn't accurately reflect the structure of the blocks to come.

    Read the article

  • Web based file search in the lan?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I mean, I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file name is explicative enough. But date-range filters are a must for me. I just need a quick search like voidtools' Everything can do, but in a networked way. The files are on a WHS box (lol, "Videos" and "Music" share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried:

    - LanSearch Pro: not good for me, as I need a quick index
    - Network Search Engine: would be perfect, but does not offer a date-range filter
    - Microsoft Search Server 2008 Express: horrible! First, it does NOT index filenames, and second, my Core2Duo is not powerful enough to run it smoothly
    - Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result
    - the preinstalled Windows Search 4.0, but it is terrible at ranking the relevance of results - uninstalled
    - Docco... what's that?

    I am considering trying:

    - IBM OmniFind
    - DocFetcher (can it work as a client? I haven't investigated yet)
    - Strigi (it looks like it can work as a client, right?)

    Any ideas/suggestions?

    Read the article

  • Cisco Catalyst 4500 Policy Based Routing

    - by Logan
    In order to test a new firewall I just set up, I'm trying to implement policy-based routing on our core switch. I want traffic from certain VLANs to be routed to the new firewall while everything else continues to be routed through the old firewall. I was trying to use this guide. Everything from that guide works fine except running the "ip policy route-map" command in interface configuration mode: IOS tells me that no such command exists, and a "show ip interface vlan" command says that policy routing is disabled. Any ideas? Output of "show ver":

        Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-IPBASEK9-M), Version 12.2(53)SG, RELEASE SOFTWARE (fc3)
        Technical Support: http://www.cisco.com/techsupport
        Copyright (c) 1986-2009 by Cisco Systems, Inc.
        Compiled Thu 16-Jul-09 19:49 by prod_rel_team
        Image text-base: 0x10000000, data-base: 0x11D1E3CC

        ROM: 12.2(31r)SG2
        Dagobah Revision 226, Swamp Revision 34

        RTTMCB2223-1 uptime is 3 years, 22 weeks, 2 days, 19 hours, 28 minutes
        Uptime for this control processor is 51 weeks, 2 days, 18 hours, 2 minutes
        System returned to ROM by power-on
        System restarted at 19:22:02 UTC Tue Jul 12 2011
        System image file is "bootflash:cat4500-ipbasek9-mz.122-53.sg.bin"
        ...
        cisco WS-C4510R (MPC8245) processor (revision 4) with 524288K bytes of memory.
        Processor board ID FOX103703W3
        MPC8245 CPU at 400Mhz, Supervisor V
        Last reset from PowerUp
        42 Virtual Ethernet interfaces
        244 Gigabit Ethernet interfaces
        511K bytes of non-volatile configuration memory.
        Configuration register is 0x2
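    For reference, a hedged sketch of the standard PBR configuration being attempted (addresses and names here are placeholders):

        access-list 10 permit 10.10.20.0 0.0.0.255
        !
        route-map TO-NEW-FW permit 10
         match ip address 10
         set ip next-hop 192.0.2.1
        !
        interface Vlan20
         ip policy route-map TO-NEW-FW

    Note the image name in the show ver output: cat4500-ipbasek9-mz is the IP Base feature set, and on the Catalyst 4500 policy-based routing is generally gated behind the Enterprise Services image, which would explain why the command is missing entirely.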

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:

        NameVirtualHost 10.10.0.205

        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD? apachectl -S outputs (trimmed):

        10.10.0.205:*          is a NameVirtualHost
                default server *.test
                port * namevhost *.test
                port * namevhost *.dev
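    One likely culprit, offered as a hedged guess: ServerName is documented to hold a single hostname, not a pattern, so the asterisks are probably compared literally and every request falls through to the first (default) vhost for that IP. Wildcards belong in ServerAlias:

        <VirtualHost 10.10.0.205>
            ServerName anything.dev
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    with the same change applied to the .test vhost.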

    Read the article

  • Interactive console based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console used to be among the earliest killer applications for personal computers, only a few of them - and even less documentation about them - are still actively maintained. After extensive searching of the web, manpages, and source code, I ended up with the following three applications, all of which have fundamental drawbacks:

    - sc: abbreviation for "spreadsheet calculator"; a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to delimiter-separated format, and it can't import CSV files correctly, i.e. all numbers are interpreted as strings
    - GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions
    - teapot: offers packages for various operating systems, but uses, for example, counter-intuitive naming for cells (numbers for row and column, i.e. 11 seems to be intended as row 1, column 1) and carries superfluous code for an FLTK GUI

    Various Emacs modes also do not quote strings containing the delimiter well, or require much more typing to enter the scaffold of a table. Therefore I would be very grateful for anything overcoming one of these drawbacks, or any hints towards another console-based CSV editor. It actually needn't do any calculations - just edit cells, column-wise and row-wise.

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, the maintenance would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking we could do so by temporarily setting up a new server, attaching the EBS drives (RAID volume) to it, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be a safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works - thus having it mounted on two instances at the same time - but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent downtime when the solution is hosted in the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
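    A hedged sketch of the volume move itself, with the device name and mount point as assumptions (and a fresh backup taken first):

        # On the old instance: quiesce and release the array
        umount /export/media
        mdadm --stop /dev/md0

        # Detach each EBS volume and attach it to the new instance
        # (AWS console, or aws ec2 detach-volume / attach-volume)

        # On the new instance: reassemble from the superblocks and mount
        mdadm --assemble --scan
        mount /dev/md0 /export/media

    mdadm stores the array metadata in superblocks on the volumes themselves, so --assemble --scan on the new instance should rediscover the same md0. The one rule to respect is the one the question already suspects: never have the array assembled on two instances at once, so stop it cleanly before detaching.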

    Read the article
