Search Results

Search found 4099 results on 164 pages for 'bulk export'.


  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, what makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on.

    I'll try to make this clearer with the following ASCII art image:

    Case 1:

      root --+--> Segment A
             |      +--> High Prio
             |      +--> Normal Prio
             |      +--> Low Prio
             |
             +--> Segment B
             |      +--> High Prio
             |      +--> Normal Prio
             |      +--> Low Prio
             |
             +--> Segment C
                    +--> High Prio
                    +--> Normal Prio
                    +--> Low Prio

    Case 2:

      root --+--> High Prio
             |      +--> Segment A
             |      +--> Segment B
             |      +--> Segment C
             |
             +--> Normal Prio
             |      +--> Segment A
             |      +--> Segment B
             |      +--> Segment C
             |
             +--> Low Prio
                    +--> Segment A
                    +--> Segment B
                    +--> Segment C

    Case 1 seems like the way most people would do it, but unless I'm misreading the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred to those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits the rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of traffic spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
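
    For concreteness, a minimal sketch of the Case 2 hierarchy in tc syntax. The device name, the 9mbit line rate and the per-class rates are invented placeholders, not values from the question:

      tc qdisc add dev eth0 root handle 1: htb default 23
      tc class add dev eth0 parent 1:  classid 1:1  htb rate 9mbit ceil 9mbit
      # inner classes: one per priority tier (prio only matters on the leaves)
      tc class add dev eth0 parent 1:1  classid 1:10 htb rate 3mbit ceil 9mbit
      tc class add dev eth0 parent 1:1  classid 1:20 htb rate 3mbit ceil 9mbit
      tc class add dev eth0 parent 1:1  classid 1:30 htb rate 3mbit ceil 9mbit
      # leaves: one per segment under each tier; prio 0 is served first
      tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:10 classid 1:13 htb rate 1mbit ceil 9mbit prio 0
      tc class add dev eth0 parent 1:20 classid 1:21 htb rate 1mbit ceil 9mbit prio 1
      tc class add dev eth0 parent 1:20 classid 1:22 htb rate 1mbit ceil 9mbit prio 1
      tc class add dev eth0 parent 1:20 classid 1:23 htb rate 1mbit ceil 9mbit prio 1
      tc class add dev eth0 parent 1:30 classid 1:31 htb rate 1mbit ceil 9mbit prio 2
      tc class add dev eth0 parent 1:30 classid 1:32 htb rate 1mbit ceil 9mbit prio 2
      tc class add dev eth0 parent 1:30 classid 1:33 htb rate 1mbit ceil 9mbit prio 2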

    Read the article

  • SQL Developer Q&A from ODTUG Tips & Tricks Webcast

    - by thatjeffsmith
    Another great webcast yesterday – if you're a paying member of ODTUG you can watch the show for yourself in their archives. If not, you can get my slide deck off of SlideShare. About 150 of you brave souls sat through an entire hour of me talking and then 10 more minutes of Q&A. We went through everything rapid-fire style, so I thought I would post the questions and my refined answers here for your perusal. In the order in which I received them:

    Q: You showed the preference to choose between result sets in the same tab or in a new tab. I understand that we cannot have both using different hotkeys? For example: F5 runs and sends the result set to the same tab, Ctrl-F5 the same but to a new tab? Sometimes you want the one, other times the other.
    A: The questioner is asking about this preference: Tools – Preferences – Database – Worksheet – 'Show query results in new tabs.' This is an all-or-nothing proposition. But there's another, perhaps better way: document pins. If you have a result set you don't want to lose, 'pin' it. Pin multiple result sets or plans for review and comparisons.

    Q: You mentioned that sometimes it's hard to remember where a certain preference is. I agree. So, enhancement request: add a search box to the preferences window. Maybe like in, for example, UltraEdit. It shows you all preferences containing your search criteria.
    A: Actually, we do have a search mechanism: type the search string and we auto-filter the preferences.

    Q: Is there a version of SQL Developer that will connect to an 8i database? (Yes, I realize how old that database version is!)
    A: Sorry, no. We also don't have a version that will run on Windows 3.11 for Workgroups…probably.

    Q: How do we access your blog?
    A: Carefully, and with much trepidation. When you're ready, go to http://www.thatjeffsmith.com

    Q: Is there a way to get good formatting with predefined settings?
    A: I believe the questioner is referring to the script output a la SQL*Plus formatting commands. Yes, there is. You can build your formatting commands into your login.sql script, and those will be applied for your script execution sessions. Example here.

    Q: Why doesn't this version 4.0 support external plugins?
    A: It does, it just requires the plugin developer to re-factor it for OSGi. This came about when we updated the JDeveloper framework to the later 11g/12c stuff.

    Q: Any change in hookup with SVN?
    A: The only change with Subversion is that internally we're using 1.7 stuff now. You can use SQLDev to work with a 1.8 SVN server, but if you get a working copy with a 1.8 client, SQLDev won't be able to do anything with it…

    Q: Command line utilities? Improvements?
    A: Yes! The long answer is here.

    Q: Is that a Hint or a Comment? /*CSV*/
    A: It's a comment – the database won't recognize it, but SQLDev does when it goes through our statement pre-processor. We'll redirect the output through our CSV formatter before displaying the results in the Script Output panel. That's why this will ONLY work in SQL Developer.

    Q: Are you selecting "Run Script" to get that CSV or HTML output, rather than "Run Statement"?
    A: Yes, the formatter hints like the CSV one mentioned above only make sense in a script output panel vs a grid.

    Q: How do you save relational models once they're defined? I've had trouble with setting one up, "saving" it, then the design work I did is no longer there when loading it later.
    A: File – Data Modeler – Save. If you're running the Modeler inside of SQL Developer, the menuing interface can get a bit tricky. That's why I recommend using the standalone if you're doing anything with a model that takes more than 5 minutes. See how the Data Modeler menus are folded up under the SQL Dev menus?

    Q: Can you unplug and plug into another container in a database with only SQL Developer?
    A: Yes, you can 'detach' a multitenant 12c database 'pluggable' and plug it into another instance. You have the option to copy or move the files. This isn't a trivial operation – pay attention.

    Q: Can you run APEX code directly on the adopter?
    A: No, at least not as I understand your question. Give me an example and I can give you a better example.

    Q: Is there a way that when you click on a particular table it wouldn't show the table with the info, but just the columns underneath, clicking on the node?
    A: Yes, another one of my tips! Disable Tools – Preferences – Database – Object Viewer – 'Open Object on Single Click.'

    Q: Is there a patch to allow a double click on a procedure in an open package body to take you to that procedure in the editor?
    A: This has been fixed for EA3 – to be released soon.

    Q: Can you open the spec with the body?
    A: You can open the spec or the body, and then also open the other. But you can't open both with a single click.

    Q: So if you want you can set it to CSV, but can you also see it as a regular result set in rows, and then click in the results to export to Excel?
    A: If you run your query as a statement with Ctrl-Enter, you can send the data to Excel via the Export dialog.

    Q: Will it do intellisense, like using the alias and popping up the column and object names?
    A: Yes! You can select more than one column…

    Q: Can a DBA turn off items from a high level for users so the only thing they can perform would be selects?
    A: A DBA should turn things ON, not OFF. Create a user with only CONNECT and the required SELECT privs and you're good to go, regardless of which application they are using.

    Q: I use PL/SQL Developer from Allround Automations and was SQL Developer illiterate, and now I like this for myself as a DBA. Now I get to train developers on this tool since they have been asking how to use this tool. Thank you.
    A: No, THANK YOU!

    Q: Can you run multiple queries in the worksheet after you have added them to the worksheet?
    A: Yes, highlight what you want to run, and hit Ctrl-Enter.

    Q: Can you export the result sets to Excel, etc.?
    A: Yes. In version 4.0 and going forward, I recommend you use the XLSX option for exports. It will run faster and consume much, much less memory.

    Q: Will this be available after the webinar?
    A: If you are an ODTUG member, check out the webinar recordings in the archives. That's worth the $99 right there. Ask your boss if they have $99 in their training budget for you. If not, maybe it's time to look for another job?

    Q: Can you run command lines from this tool? Like executes without issuing a command line prompt?
    A: OK, I'm stumped on this one. Not sure what you're asking. You can set up external tools under the Tools menu, and from there you could probably rig what you're looking for, but I'm not sure what you're looking for… This maybe? Where and when to put the program

    Q: Is there any way to save a copy database command set (certain tables/views etc.) in a script?
    A: Yes! Create a cart with the objects you want to be used in the copy. Then use the new command-line interface to kick off SQL Developer to do the copy of those said objects.

    Q: How can we export the preferences and then import them into a different or the same version of SQL Developer?
    A: Today, there's no interface for this. But you could copy the files around manually… Kris Rice has a cool idea where you can set your preferences to be saved to your local Dropbox folder, and then you can use SQL Developer from anywhere with the same preferences.

    Q: What happens to SQL*Plus commands like COL & BREAK?
    A: Nothing. Those are not currently supported.

    Q: Is there a place where all "hotkey" functionality is listed? Thanks.
    A: Yes. Tools – Preferences – Shortcut Keys. And you can change them!

    Q: Any tips for the DBA side of things? Will the SQL generated for objects have more information (e.g. user privileges) in v4?
    A: You can get this now. In Tools – Preferences – Database – Utilities – Export, check 'Grants.' Voila! You now have the code necessary to recreate your object privileges.

    Q: Is there a limit on the number of rows that can be imported/exported from/to Excel?
    A: The only hard-coded limit lies in Excel. For best performance, use v4 and the XLSX format for exports.

    Q: Is there a way to see/watch active sessions, to see the current SQL and the explain plan being used, etc.? Kind of like that frog product.
    A: Cough, yes. Tools – Monitor Sessions. Click on a session, see the SQL and plan. The plan was added in v4. If you're not on version 4, use Reports – Active Sessions to get the plans.

    Q: In the DBA section, is there a way to manage, say, tablespaces – to add data files, shrink, edit profiles, etc.?
    A: Yes, we support all of that. View – DBA. Connect, go to the Storage node.

    Q: Are you (Jeff) available for a live presentation at our Oracle User Group here in Indiana?
    A: Maybe. Email me and we'll see, [email protected]

    Q: Where do I go to download SQL Developer 4.0?
    A: The Internet, of course!

    Q: Can you directly edit query results?
    A: Nope. But what I think you're asking is: can I edit the data in the tables that are reflected in my query results? You can change the query results by changing your query, of course. Or this.

    Q: Can you show an HTML example?
    A: Sure. I'd embed the HTML here, but it's a lot of code – try it for yourself!

    Q: How can I quickly close many SQL Worksheet windows, but not all?
    A: Window – Documents. Multi-select, hit the 'Close Document(s)' button.

    Q: What does the vertical red line denote?
    A: That's the margin. It tells you when you've typed too far and it's time for a carriage return.

    Q: Did DBA/Database Status/Instance Viewer make it officially into 4.0?
    A: It was sort-of included in the first EA. I have NO idea what you're talking about, WINK-WINK. No, it's not in v4.0.

    Q: Is there a "handy" way to debug trigger code?
    A: Yes, open your trigger. Hit the debug button. Works great as long as it's a DML trigger.

    Q: Will you make your presentation file available for us (in PPT and/or PDF format)?
    A: It's on SlideShare.

    Q: How do you get SQL Developer to escape ' correctly when you use the wizard to export data as INSERT statements?
    A: If it's not doing that, it's a bug. I'll take a look at that scenario ASAP.

    Read the article

  • exportfs: internal: no supported addresses in nfs_client

    - by Brian
    I am trying to set up an NFS server on an AWS instance running SLES 11. After installing nfs-utils, I tried to export a test share. Here is what my /etc/exports file looks like:

      /opt/share1 ec2-50-16-224-79.compute-1.amazonaws.com(rw,async)

    exportfs -ar returns the following message:

      exportfs: internal: no supported addresses in nfs_client
      domU-12-31-38-04-7E-02.compute-1.internal:/opt/share1: No such file or directory

    Any idea what the "no supported addresses" error means? Thanks!
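
    A hedged first diagnostic (my assumption, not from the post: this error usually means the server cannot resolve the client hostname listed in /etc/exports):

      # Can the NFS server resolve the name it is exporting to?
      getent hosts ec2-50-16-224-79.compute-1.amazonaws.com
      # If not, export to an address range (or the instance's internal name) instead, e.g.:
      # /opt/share1 10.0.0.0/8(rw,async)
      exportfs -ar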

    Read the article

  • HISTCONTROL=ignoreboth not working Debian Lenny

    - by Mike
    Can anybody confirm whether setting the following environment variables under Debian Lenny will keep repeated history entries from being saved?

      GNU bash, version 3.2.39(1)-release

      export HISTCONTROL=ignoreboth
      export HISTSIZE=500

    I have added them to my /etc/bash.bashrc but I keep getting repeated commands. Thanks
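
    One detail that may matter here: ignoreboth is shorthand for ignorespace plus ignoredups, and ignoredups only suppresses consecutive repeats. A sketch of a stricter setting (erasedups, available in bash 3.x) to try in ~/.bashrc for interactive shells:

      # ignoreboth = ignorespace + ignoredups; ignoredups only skips
      # *consecutive* repeats. erasedups also removes older duplicates.
      export HISTCONTROL=ignoreboth:erasedups
      export HISTSIZE=500
      # verify in a *new* interactive shell:
      echo $HISTCONTROL
      history | tail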

    Read the article

  • How to set PKG_CONFIG_PATH

    - by michael
    Hi, I need to build newer versions of 'gstreamer' and 'glib' on my Ubuntu 8.04. So I got the source and ran './configure --prefix=/home/michael/bin; make; make install'. But what do I need to set PKG_CONFIG_PATH to in order for another program's configure to see them? I tried:

      export PKG_CONFIG_PATH=/home/michael/bin/lib/pkgconfig
      export PKG_CONFIG_PATH=/home/michael/bin/lib

    But for some reason, WebKit's configure still can't find both libraries.
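
    For what it's worth, a quick sanity check of the path (the pkgconfig variant above is the conventional one, since .pc files normally install under <prefix>/lib/pkgconfig; the module name is the usual one for glib but treat it as an assumption):

      export PKG_CONFIG_PATH=/home/michael/bin/lib/pkgconfig
      ls /home/michael/bin/lib/pkgconfig       # should list glib-2.0.pc etc.
      pkg-config --modversion glib-2.0         # should print the new version
      # PKG_CONFIG_PATH must be exported in the same shell that runs configure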

    Read the article

  • Unable to create a permanent alias in Mac OS

    - by Alex
    In Linux I can just add an alias to bashrc and it becomes a permanent alias. In Mac OS I tried to do the same thing:

      vim ~/.bashrc

      export PATH="$PATH:$HOME/.rvm/bin" # Add RVM to PATH for scripting
      alias prj="cd ~/Documents/projects"

      ### Added by the Heroku Toolbelt
      export PATH="/usr/local/heroku/bin:$PATH"

    That being said, I got this:

      $ alias
      alias rvm-restart='rvm_reload_flag=1 source '\''/Users/alex/.rvm/scripts/rvm'\'''

    So where is my prj alias? I rebooted the laptop but nothing has changed.
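
    A likely explanation, assuming the default macOS Terminal setup: it starts login shells, which read ~/.bash_profile rather than ~/.bashrc, so ~/.bashrc is never sourced. A sketch of the usual fix:

      # macOS Terminal opens *login* shells, which read ~/.bash_profile
      # rather than ~/.bashrc; forward one to the other:
      printf '[ -f ~/.bashrc ] && source ~/.bashrc\n' >> ~/.bash_profile
      # then open a new terminal (or: source ~/.bash_profile) and check:
      alias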

    Read the article

  • NFS issue brings down entire vSphere ESX estate

    - by growse
    I experienced an odd issue this morning where an NFS problem appeared to have taken down the majority of my VMs hosted on a small vSphere 5.0 estate. The infrastructure itself is 4x IBM HS21 blades running around 20 VMs. The storage is provided by a single HP X1600 array with an attached D2700 chassis running Solaris 11. There are a couple of storage pools on this which are exposed over NFS for the storage of the VM files, and some iSCSI LUNs for things like MSCS shared disks. Normally this is pretty stable, but I appreciate the lack of resiliency in having a single X1600 doing all the storage.

    This morning, in the logs of each ESX host, at around 0521 GMT, I saw a lot of entries like this:

      2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4cf9a8 3
      2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4dc9e8 3
      2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4d3fa8 3
      2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4de0a8 3
      [....]
      2011-11-30T06:16:07.042Z cpu0:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T06:17:01.459Z cpu2:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T06:25:17.887Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000a4c2b28 3
      2011-11-30T06:27:16.063Z cpu3:4011)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
      2011-11-30T06:35:30.827Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
      2011-11-30T06:36:37.953Z cpu6:2054)NFS: 292: Restored connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
      2011-11-30T06:40:08.242Z cpu6:2054)NFSLock: 608: Stop accessing fd 0x41000a4c3e68 3
      2011-11-30T06:40:34.647Z cpu3:2051)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
      2011-11-30T06:44:42.663Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T06:44:53.973Z cpu0:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T06:51:28.296Z cpu5:2058)NFSLock: 608: Stop accessing fd 0x41000ae3c528 3
      2011-11-30T06:51:44.024Z cpu4:2052)NFSLock: 568: Start accessing fd 0x41000ae3b8e8 again
      2011-11-30T06:56:30.758Z cpu4:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T06:56:53.389Z cpu7:2055)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
      2011-11-30T07:01:50.350Z cpu6:2054)ScsiDeviceIO: 2316: Cmd(0x41240072bc80) 0x12, CmdSN 0x9803 to dev "naa.600508e000000000505c16815a36c50d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
      2011-11-30T07:03:48.449Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000ae46b68 3
      2011-11-30T07:03:57.318Z cpu4:4009)NFSLock: 568: Start accessing fd 0x41000ae48228 again

    (I've put a complete dump from one of the hosts on pastebin: http://pastebin.com/Vn60wgTt)

    When I got in the office at 9am, I saw various failures and alarms and troubleshot the issue. It turned out that pretty much all of the VMs were inaccessible, and that the ESX hosts were describing each VM as either 'powered off', 'powered on', or 'unavailable'. The VMs described as 'powered on' were not in any way reachable or responding to pings, so this may be lies. There's absolutely no indication on the X1600 that anything was awry, and nothing on the switches to indicate any loss of connectivity. I only managed to resolve the issue by rebooting the ESX hosts in turn.

    I have a number of questions:

    - What the hell happened?
    - If this was a temporary NFS failure, why did it put the ESX hosts into a state from which a reboot was the only recovery?
    - In the future, when the NFS server goes a little off-piste, what would be the best approach to add some resilience? I've been looking at budgeting for next year and potentially have budget to purchase another X1600/D2700/disks; would an identical mirrored disk setup help to mitigate these sorts of failures automatically?

    Edit (added requested details): To expand with some details as requested. The X1600 has 12x 1TB disks lumped together in mirrored pairs as tank, and the D2700 (connected with a mini SAS cable) has 12x 300GB 10k SAS disks lumped together in mirrored pairs as sastank.

      zpool status

        pool: rpool
       state: ONLINE
        scan: none requested
      config:
              NAME        STATE     READ WRITE CKSUM
              rpool       ONLINE       0     0     0
                c7t0d0s0  ONLINE       0     0     0
      errors: No known data errors

        pool: sastank
       state: ONLINE
        scan: scrub repaired 0 in 74h21m with 0 errors on Wed Nov 30 02:51:58 2011
      config:
              NAME         STATE     READ WRITE CKSUM
              sastank      ONLINE       0     0     0
                mirror-0   ONLINE       0     0     0
                  c7t14d0  ONLINE       0     0     0
                  c7t15d0  ONLINE       0     0     0
                mirror-1   ONLINE       0     0     0
                  c7t16d0  ONLINE       0     0     0
                  c7t17d0  ONLINE       0     0     0
                mirror-2   ONLINE       0     0     0
                  c7t18d0  ONLINE       0     0     0
                  c7t19d0  ONLINE       0     0     0
                mirror-3   ONLINE       0     0     0
                  c7t20d0  ONLINE       0     0     0
                  c7t21d0  ONLINE       0     0     0
                mirror-4   ONLINE       0     0     0
                  c7t22d0  ONLINE       0     0     0
                  c7t23d0  ONLINE       0     0     0
                mirror-5   ONLINE       0     0     0
                  c7t24d0  ONLINE       0     0     0
                  c7t25d0  ONLINE       0     0     0
      errors: No known data errors

        pool: tank
       state: ONLINE
        scan: scrub repaired 0 in 17h28m with 0 errors on Mon Nov 28 17:58:19 2011
      config:
              NAME         STATE     READ WRITE CKSUM
              tank         ONLINE       0     0     0
                mirror-0   ONLINE       0     0     0
                  c7t1d0   ONLINE       0     0     0
                  c7t2d0   ONLINE       0     0     0
                mirror-1   ONLINE       0     0     0
                  c7t3d0   ONLINE       0     0     0
                  c7t4d0   ONLINE       0     0     0
                mirror-2   ONLINE       0     0     0
                  c7t5d0   ONLINE       0     0     0
                  c7t6d0   ONLINE       0     0     0
                mirror-3   ONLINE       0     0     0
                  c7t8d0   ONLINE       0     0     0
                  c7t9d0   ONLINE       0     0     0
                mirror-4   ONLINE       0     0     0
                  c7t10d0  ONLINE       0     0     0
                  c7t11d0  ONLINE       0     0     0
                mirror-5   ONLINE       0     0     0
                  c7t12d0  ONLINE       0     0     0
                  c7t13d0  ONLINE       0     0     0
      errors: No known data errors

    The filesystem exposed over NFS for the primary datastore is sastank/VMStorage.

      zfs list
      NAME                         USED  AVAIL  REFER  MOUNTPOINT
      rpool                       45.1G  13.4G  92.5K  /rpool
      rpool/ROOT                  2.28G  13.4G    31K  legacy
      rpool/ROOT/solaris          2.28G  13.4G  2.19G  /
      rpool/dump                  15.0G  13.4G  15.0G  -
      rpool/export                11.9G  13.4G    32K  /export
      rpool/export/home           11.9G  13.4G    32K  /export/home
      rpool/export/home/andrew    11.9G  13.4G  11.9G  /export/home/andrew
      rpool/swap                  15.9G  29.2G   123M  -
      sastank                     1.08T   536G    33K  /sastank
      sastank/VMStorage           1.01T   536G  1.01T  /sastank/VMStorage
      sastank/comstar             71.7G   536G    31K  /sastank/comstar
      sastank/comstar/sql_tempdb  6.31G   536G  6.31G  -
      sastank/comstar/sql_tx_data 65.4G   536G  65.4G  -
      tank                        4.79T   578G    42K  /tank
      tank/FTP                     269G   578G   269G  /tank/FTP
      tank/ISO                    28.8G   578G  25.9G  /tank/ISO
      tank/backupstage            2.64T   578G  2.49T  /tank/backupstage
      tank/cifs                    301G   578G   297G  /tank/cifs
      tank/comstar                1.54T   578G    31K  /tank/comstar
      tank/comstar/msdtc          1.07G   579G  32.8M  -
      tank/comstar/quorum          577M   578G  47.9M  -
      tank/comstar/sqldata        1.54T   886G   304G  -
      tank/comstar/vsphere_lun    2.09G   580G  22.2M  -
      tank/mcs-asset-repository   7.01M   578G  6.99M  /tank/mcs-asset-repository
      tank/mscs-quorum              55K   578G    36K  /tank/mscs-quorum
      tank/sccm                   16.1G   578G  12.8G  /tank/sccm

    As for the networking, all connections between the X1600, the blades and the switch are either LACP or EtherChannel bonded 2x 1Gbit links. The switch is a single Cisco 3750. Storage traffic sits on its own VLAN, segregated from VM machine traffic.
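
    On the resilience question above, one set of knobs worth knowing about (my addition, not from the original post): ESX decides an NFS datastore is dead via heartbeat advanced settings, which can be loosened to ride out short NFS brown-outs. A sketch with illustrative values only, not a recommendation:

      # Read the current NFS heartbeat settings on an ESX/ESXi 5.0 host
      esxcfg-advcfg -g /NFS/HeartbeatFrequency
      esxcfg-advcfg -g /NFS/HeartbeatTimeout
      esxcfg-advcfg -g /NFS/HeartbeatMaxFailures
      # Example: tolerate a longer outage before the mount is marked dead
      esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
      esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures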

    Read the article

  • TomCat starts, but does not load properly

    - by user37136
    Hey guys, I've been working on this for a day now and still don't know what's wrong. I am essentially building a second environment for our web and app server. I got Apache to load up just fine, but Tomcat is proving to be difficult. It appears to start and load just fine, but when it comes to loading our application, it just gets stuck for 2-5 minutes and then shuts down. Here is the log on the original machine, where it works fine:

      2010-02-12 11:52:40,506 INFO Web application servlet context is initializing...
      2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobType=[{1,Undefined}, {100,Completion}, {200,Plugging}, {300,R+M}, {400,Workover}, {500,Swab - tubing}, {600,Swab - fluid}]
      2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobTaskType=[{1,Undefined}, {100,Rod part}, {200,Tubing leak}, {300,Pump change}, {400,Stripping job}, {500,Long stroke}, {600,A/L optimization}]
      2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_wellType=[{1,Undefined}, {100,Rod pump}, {200,ESP}, {300,Injector}, {400,PC pump}, {500,Co-Rod}, {600,Flowing}, {700,Storage}]
      2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_assetType=[{1,Rig}, {100,Disabled rig}]
      2010-02-12 11:52:40,542 DEBUG Servlet context attribute added: select_state=[{AL,Alabama}, {AK,Alaska}, {AZ,Arizona}, {AR,Arkansas}, {CA,California}, {CO,Colorado}, {CT,Connecticut}, {DE,Delaware}, {FL,Florida}, {GA,Georgia}, {HI,Hawaii}, {ID,Idaho}, {IL,Illinois}, {IN,Indiana}, {IA,Iowa}, {KS,Kansas}, {KY,Kentucky}, {LA,Louisiana}, {ME,Maine}, {MD,Maryland}, {MA,Massachusetts}, {MI,Michigan}, {MN,Minnesota}, {MS,Mississippi}, {MO,Missouri}, {MT,Montana}, {NE,Nebraska}, {NV,Nevada}, {NH,New Hampshire}, {NJ,New Jersey}, {NM,New Mexico}, {NY,New York}, {NC,North Carolina}, {ND,North Dakota}, {OH,Ohio}, {OK,Oklahoma}, {OR,Oregon}, {PA,Pennsylvania}, {RI,Rhode Island}, {SC,South Carolina}, {SD,South Dakota}, {TN,Tennessee}, {TX,Texas}, {UT,Utah}, {VT,Vermont}, {VA,Virginia}, {WA,Washington}, {WV,West Virginia}, {WI,Wisconsin}, {WY,Wyoming}, {ACO,Atlantic Coast Offshore}, {FOAK,Federal Offshore Alaska}, {NGOM,Northern Gulf of Mexico}, {PCO,Pacific Coastal Offshore}]
      2010-02-12 11:52:40,542 INFO KeyviewContextMonitor.contextInitialized: Loaded drop-down lists:com/key/portal/web/common/lists.properties
      2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.SERVLET_MAPPING=*.do
      2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.ACTION_SERVLET=org.apache.struts.action.ActionServlet@155d578
      2010-02-12 11:52:41,939 DEBUG Servlet context attribute added: org.apache.struts.action.MODULE=org.apache.struts.config.impl.ModuleConfigImpl@e08e9d
      2010-02-12 11:52:41,962 DEBUG Servlet context attribute added: org.apache.struts.action.FORM_BEANS=org.apache.struts.action.ActionFormBeans@b31c3c
      2010-02-12 11:52:41,967 DEBUG Servlet context attribute added: org.apache.struts.action.FORWARDS=org.apache.struts.action.ActionForwards@102c646
      2010-02-12 11:52:41,973 DEBUG Servlet context attribute added: org.apache.struts.action.MAPPINGS=org.apache.struts.action.ActionMappings@127276a
      2010-02-12 11:52:41,974 DEBUG Servlet context attribute added: org.apache.struts.action.MESSAGE=org.apache.struts.util.PropertyMessageResources@18cae13
      2010-02-12 11:52:41,984 DEBUG Servlet context attribute added: org.apache.struts.action.PLUG_INS=[Lorg.apache.struts.action.PlugIn;@f875ae
      2010-02-12 11:52:46,816 INFO Sucessfully loaded application properties com/key/core/properties/application

    On my second environment, it didn't execute the last line. I start Tomcat with the exact same command line:

      #!/bin/ksh
      export JAVA_HOME=/app/java
      export CATALINA_HOME=/app/tomcat
      export CATALINA_BASE=/app/keyview/appserver
      CATALINA_OPTS=" -Xms128m -Xmx800m -Dapplication.props=com/key/core/properties/application -Dlog4j.configuration=com/key/core/log/log4j.xml -Djava.awt.headless=true -Dlog4j.debug"
      export CATALINA_OPTS
      ${CATALINA_HOME}/bin/startup.sh

    I bolded the lines that I think are in error. Thanks

    Read the article

  • How can I invoke a function in bash shell script

    - by sufery
    #!/bin/bash
      one_func() {
          echo 'abcd'
      }
      echo $(one_func)
      echo one_func
      # the end

    I just wonder about the distinction between calling the function as $(one_func) and as one_func in a bash shell script. When I set the variable PS1 in ~/.bashrc, I can't invoke the function as a bare one_func. E.g.:

      export PS1="\n\[\e[31m\]\$(one_func)"   # it works
      export PS1="\n\[\e[31m\]one_func"       # it doesn't work
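
    A short sketch of the distinction (my own example, not the asker's script): $(one_func) is command substitution, so the function runs and its output is spliced into place, while a bare one_func inside a double-quoted string is just literal text:

      #!/bin/bash
      one_func() { echo 'abcd'; }

      echo "$(one_func)"   # command substitution: runs the function, prints: abcd
      echo "one_func"      # literal string, prints: one_func
      one_func             # a plain invocation also runs it, prints: abcd

      # PS1 behaves the same way: escaping the $ defers the substitution
      # until each prompt is drawn instead of expanding it once at assignment.
      export PS1="\$(one_func) \$ "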

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" output, but I do not exactly understand how to mount it. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This is the relevant output of my "lsusb -v":

      Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc.
      Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x174c ASMedia Technology Inc. idProduct 0x5106 bcdDevice 0.01 iManufacturer 2 ASMedia iProduct 3 AS2105 iSerial 1 00000000000000000000 bNumConfigurations 1
      Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA
      Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0
      Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
      Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
      Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1
      Device Status: 0x0001 Self Powered

      Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:04:00.0 bNumConfigurations 1
      Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
      Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
      Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
      Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
      Hub Port Status:
        Port 1: 0000.0100 power
        Port 2: 0000.0100 power
        Port 3: 0000.0503 highspeed power enable connect
        Port 4: 0000.0503 highspeed power enable connect
      Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    This is the error given when I try to mount the hard drive:

      shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt
      [sudo] password for shinso:
      mount: /dev/sdb: unknown device

    This is the output of "dmesg|tail":

      [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [30535.800977] usb 9-4: USB disconnect, address 3
      [30659.237342] Valid eCryptfs headers not found in file header region or xattr region
      [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [31259.268310] Valid eCryptfs headers not found in file header region or xattr region
      [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [31860.059058] Valid eCryptfs headers not found in file header region or xattr region
      [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [32465.220590] Valid eCryptfs headers not found in file header region or xattr region
      [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO

    I am using Ubuntu 10.10 (64 bit). Any help is appreciated.
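
    One hedged guess worth trying first: mount a partition rather than the whole disk (assuming the drive has a partition table at all):

      # A common culprit: /dev/sdb is the whole disk; the filesystem
      # usually lives on a partition such as /dev/sdb1.
      sudo fdisk -l /dev/sdb      # list the partitions
      sudo mount /dev/sdb1 /mnt   # then mount the partition, not the disk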

    Read the article

  • Sharing VirtualBox snapshots

    - by JesperE
    Is it possible to "share" a VirtualBox snapshot? I have a "baseline" VirtualBox machine, and I would like to be able to take a snapshot, and send it to another user which has the same baseline machine. The scenario is that the baseline machine is used for testing, and I want to allow testers to create snapshots which describe a certain system state, and send that snapshot to developers to further examination. EDIT: To clarify, I would like to be able to export snapshots "incrementally" without having to export the entire machine as an appliance.

    Read the article

  • Intermediate certificates on NLB load balanced servers

    - by MrVimes
    I am fairly sure I know how to install the 'main' certificate on load-balanced servers (install on one, export, import to the others), but I'm not quite sure what to do about the intermediate certificate (the one you install using the Certificates snap-in in MMC). Do I manually install it using MMC on each server? Or is there a process similar to the main cert (install, then export, then import on the others)?
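
    One common approach (a sketch only, assuming openssl is at hand; file names are placeholders): bundle the intermediate into the PFX so the chain travels with the cert to every node:

      # Include the intermediate via -certfile; importing the resulting PFX
      # on each load-balanced node carries the full chain along with it.
      openssl pkcs12 -export -in server.crt -inkey server.key \
          -certfile intermediate.crt -out server-with-chain.pfx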

    Read the article

  • Mendeley - Exporting A Collection Straight to PDF or HTML

    - by user34605
    I'm in love with Mendeley Desktop for organizing my papers. I'm working on a survey paper and I want to send an updated list of the papers in the survey to my advisors. The best solution would be to get them to sign up for Mendeley, but they're not as techy as I am. Right now I'm doing the following:

    1) export from Mendeley to BibTeX
    2) open the BibTeX file in JabRef and export to HTML

    Is there a way to cut JabRef out of this? Thanks!

    Read the article

  • Set Final Cut Express Sequence Options to 970p30

    - by Maccaius
    Hi, I have a project in FCE which was recorded as MPEG-4 video in 1280 x 960 (1.4 MB/s), and whenever I import those mov files into FCE they show just fine. But when I add the clips to the timeline (sequence), the quality is cropped – instead of HD, I can only export the timeline with QuickTime conversion at a resolution of 720 x 480. Can anybody help me out and tell me how I can export the timeline in full HD quality (as the raw material itself is)?

    Read the article

  • Locale misconfig. Debian

    - by JakeTheFish
    perl -e 'print "Hello\n";' perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = "UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). Hello I'v tried to do export LC_ALL=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 And it workis, till I log out. Is there any permanent solution?

    Read the article

  • Exporting to PNG from Adobe Illustrator cutting off edges

    - by Luther Baker
    I created a 40px x 40px image in Adobe Illustrator CS4. I saved it as an .ai file and then tried to export it as a PNG. Adobe Illustrator automatically crops the background and tightens the export to a rect around all the objects, which is fine. In this case, I am not working edge to edge, so my image is not quite 40px wide. But, unfortunately, Illustrator is not exporting the entire image. I end up with an image that is 34px wide. Indeed, the icon I drew starts on the left-hand side, but the right edge of my object is cut off. Any ideas why this is happening? I can't imagine Illustrator CS4 can't correctly export to PNG.

    Read the article

  • High-quality ERD generator for PostgreSQL under Linux?

    - by Dave Jarvis
    Background: MySQL Workbench can produce appealing and high-quality ERDs.

    Research: I have not found a tool that even comes close for PostgreSQL. Tools I have found:

    - dbVisualizer - Yellow squares.
    - AquaFold - Yellow squares.
    - SQL Developer - Coloured squares.
    - Dia - Coloured squares.
    - SchemaBank - Can't export to PNG; looks okay, nothing stellar.
    - SchemaSpy - XML export makes it possible to write an XSL skin...
    - Gliffy - Incompatible Flash version.
    - Druid - No.
    - pgAdmin3 - Not applicable?
    - phpPgAdmin - Couldn't log in without a 30-minute configuration battle.

    Requirements: Looking for an ERD tool that is:

    - Visually stunning by default
    - Able to reverse-engineer a PostgreSQL (or JDBC-compliant) database
    - Running on Linux (or under WINE)
    - Able to export high-resolution PNG (or SVG)
    - FOSS
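
    Of the candidates above, SchemaSpy is the scriptable one; a sketch of an invocation (the jar and JDBC driver paths are placeholders, and the exact flags vary by SchemaSpy version):

      java -jar schemaSpy.jar -t pgsql -host localhost -db mydb -s public \
           -u dave -p secret -dp /path/to/postgresql-jdbc.jar -o ./erd-out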

    Read the article

  • Convert Wordpress.com Hosted Blog to BlogEngine.NET

    - by Chris Marisic
    I'm looking at what is needed to move from wordpress.com to a BlogEngine.NET or similar blog. I've seen a tool for replacing export.php so that it will export your WordPress site in BlogML format so it can easily be imported into BlogEngine.NET, but I'd really rather not have to set up PHP/WordPress just so I can import a backup from wordpress.com and then use the export from my local WordPress to get a BlogML file. Are there any tools that will convert the WordPress file? Is there a different blog that will natively import the WordPress file?

    Edit: Regarding the question about other blog providers, I am open to them as long as they are .NET based, preferably C#.

    Read the article

  • How to generate a client certificate using a third party CA-NOT Self Signed CA

    - by Bryan
    I am trying to export a client certificate for use with a web browser. The goal is to restrict access to the admin area using the <Location> directive. I have seen numerous tutorials on using self-signed CAs. How would you do this using a third party?

    1) Do I need to include the CA in the client pfx if it is a trusted root CA? I have seen both examples.

    Without CA:

      openssl pkcs12 -export -inkey KEYFILENAME -in CERTFILEFILENAME -out XXX.pfx

    With CA:

      openssl pkcs12 -export -in my.crt -inkey my.key -certfile my.bundle -out my.pfx

    2) Do I still need to include SSLCACertificateFile for a trusted CA in the httpd.conf setup?

      SSLVerifyClient none
      SSLCACertificateFile conf/ssl.crt/ca.crt
      <Location /secure/area>
          SSLVerifyClient require
          SSLVerifyDepth 1
      </Location>

    http://www.modssl.org/docs/2.8/ssl_howto.html#ToC8
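
    Either way, a quick sanity check with standard openssl commands (file names taken from the examples above) shows what actually landed in the PFX:

      # List everything bundled in the PFX; a -certfile chain shows up here
      openssl pkcs12 -info -in my.pfx
      # Independently verify the client cert against the CA file Apache will use
      openssl verify -CAfile conf/ssl.crt/ca.crt my.crt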

    Read the article

  • Is sstable2json broken in Cassandra 0.6.0-beta3?

    - by knorv
    I'm getting NullPointerExceptions when using sstable2json in Cassandra 0.6.0-beta3:

      $ bin/sstable2json .../cassandra/data/system/LocationInfo-1-Data.db
      Exception in thread "main" java.lang.NullPointerException
              at java.util.Arrays$ArrayList.<init>(Arrays.java:3357)
              at java.util.Arrays.asList(Arrays.java:3343)
              at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:255)
              at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:299)
              at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:323)
              at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:367)

    I've had no problems with sstable2json when using Cassandra 0.5. Is sstable2json broken in Cassandra 0.6.0-beta3, or am I doing something wrong?

    Read the article

  • How to add link to flash banner

    - by sasa
    Hello, I am primarily a developer and don't know how to use Adobe Flash CS4. Is there a simple way to add a link to a Flash banner? I have a .fla file with some items in the Library and two layers. Please give me step-by-step instructions. Thanks.

    Edit: I found a simple solution, step by step:

    1. Go to File – Publish Settings and set the ActionScript version to ActionScript 2.0
    2. Insert a new layer
    3. Create a square with the Rectangle tool that is larger than the banner
    4. Right-click on the square and choose Convert to symbol...
    5. In the popup window choose Type: Button
    6. Double-click the new button in the Library panel, move the selection from Up to Hit, and then go back to the main scene
    7. Right-click on the new blue square, choose Actions, and paste this code into the source editor:

      on (release) {
          getURL("http://www.example.com/", "_blank");
      }

    8. Close the source editor and export the file as a movie (File – Export – Export Movie).

    Read the article

  • How to prevent form submission for a form using onchange to submit the form when certain values are

    - by Terrence Brannon
    I have a form I have built:

      <form class="myform" action="cgi.pl">
        <select name="export" onchange='this.form.submit()'>
          <option value="" selected="selected">Choose an export format</option>
          <option value="html">HTML</option>
          <option value="csv">CSV</option>
        </select>
      </form>

    Now, this form works fine if I pull down and select "HTML" or "CSV". But if I hit the back button and select "Choose an export format", the form is submitted, even though I don't want it to be. Is there any way to prevent form submission for that option?

    Read the article

  • How to catch prompts with PySVN?

    - by Geoff
    I'm new to Python and PySVN in general, and I'm trying to export my SVN repository using pysvn. Here's my code:

      import pysvn

      # set up svn login data
      def svn_credentials(realm, username, may_save):
          # Returning True tells pysvn to retry with these credentials; if the
          # server rejects them, the callback may simply be invoked again, so
          # bad credentials can make the call appear to hang.
          return True, svn_login_name, svn_login_password, False

      # establish connection
      svn_client = pysvn.Client()
      svn_client.callback_get_login = svn_credentials

      # export data
      svn_client.export('server-path-goes-here', 'client-path-goes-here', force=True)

    This works fine, but if the password is wrong or the user name is unknown, this code just sits there. I believe it's being presented with a login prompt on the SVN side, but I'm at a loss as to how to check what's happening with callback_get_login. Any help would be greatly appreciated.

    Read the article

  • How to get the headers for all the pages of the exported data from php to pdf

    - by udaya
    Hi, I am exporting data from a php page to pdf. When the data exceeds the page limit, the header is not available on the consecutive pages. The function where I call the export to pdf is:

      function changeDetails()
      {
          $bType = $this->input->post('textvalue');
          if ($bType == "pdf")
          {
              $this->load->library('table');
              $this->load->plugin('to_pdf');
              $data['countrytoword'] = $this->AddEditmodel1->export();
              $this->table->set_heading('Country', 'State', 'Town', 'Name');
              $out = $this->table->generate($data['countrytoword']);
              $html = $this->load->view('newpdf', $data, true);
              pdf_create($html, $cur_date);
          }
      }

    This is my view page, from which I export data to pdf:

      Name  Country  State  Town

    Here I am getting the result as:

      page 1:
      Name     Country   State      Town
      udaya    india     Tamilnadu  kovai
      chandru  srilanka  columbo    aaaaa

      page 2:
      vivek    england   gggkj      gjgjkj

    On page 2 I don't get the headers Name, Country, State and Town.

    Read the article
