Search Results

Search found 3181 results on 128 pages for 'listener scan'.


  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know that someone will say "This question is an exact duplicate of How to install tor?", but it's not, because that question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research, and even the resources available on Tor's Official Website cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried to use the ppa:ubun-tor PPA, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but the browser icon doesn't show up after installation, and if I try to use the command tor in the Terminal I get the following error message: Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux. Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc". Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good. Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050 Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running? Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports. Nov 26 10:59:25.737 [err] Reading config failed--see warnings above. Thanks in advance.
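    A note for readers hitting the same errors: the deb ... main line is an APT source entry, not a shell command, so it belongs in /etc/apt/sources.list rather than in the Terminal. A minimal sketch of that route, assuming quantal (the Ubuntu 12.10 codename) is the right value for <DISTRIBUTION>; the project's signing key may also need importing per its own instructions:

        echo "deb http://deb.torproject.org/torproject.org quantal main" | sudo tee -a /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install tor

    Also, the "Address already in use" warning in the log above means something is already listening on 127.0.0.1:9050 (most likely the Tor daemon that apt-get started as a service right after installation), so the manual tor command is failing precisely because Tor is already running.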

    Read the article

  • Tips & Tricks on CRSCTL

    - by Sebastian Solbach (DBA Community)
    Whether for single-instance or Real Application Clusters databases, the Grid Infrastructure is found on more and more systems. This is due both to the simplified monitoring of the Oracle database, listener, and ASM instance, and to some more advanced features, such as the easy service management for single instance, Data Guard, and/or RAC. The Cluster Ready Services (CRS), a component of the Clusterware part of the Grid Infrastructure, play a particularly important role here, since they internally control all resources. Resources are not limited to the Oracle processes (database, listener, virtual IP addresses, etc.); they can also be your own applications, placed under the monitoring of the Grid Infrastructure resp. Clusterware. This can range from simple restart requirements in single-server operation to classic failover scenarios in cluster environments. This is also reflected in the fact that generic application agents (Siebel, Tomcat, GoldenGate, Apache, ...) have been available for the Clusterware for some time, and that a stripped-down GI installation runs on Oracle's own middleware hardware (Exalogic) to monitor the processes. These Cluster Ready Services are controlled by the "crsctl" command. It is therefore worth taking a closer look at this utility, especially since it contains some subtleties that are not directly apparent from the documentation or the tool's help.
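    For readers who want to experiment along the way, a few illustrative crsctl invocations (a sketch of common usage; ora.orcl.db is a placeholder for whatever resource name your installation uses):

        crsctl status resource -t             # tabular overview of all registered resources
        crsctl status resource ora.orcl.db -p # full attribute profile of a single resource
        crsctl start resource ora.orcl.db     # start one resource by hand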

    Read the article

  • Mixing JavaFX, HTML 5, and Bananas with the NetBeans Platform

    - by Geertjan
    The banana in the image below can be dragged. Whenever the banana is dropped, the current date is added to the viewer: What's interesting is that the banana, and the viewer that contains it, are defined in HTML 5, with the help of a JavaScript and CSS file. The HTML 5 file is embedded within the JavaFX browser, while the JavaFX browser is embedded within a NetBeans TopComponent class. The only really interesting thing is how drop events of the banana, which is defined within JavaScript, are communicated back into the Java class. Here's how: in the Java class, parse the HTML DOM tree to locate the node of interest and then set a listener on it. (In this particular case, the event listener adds the current date to the InstanceContent which is in the Lookup.) Here's the crucial bit of code: WebView view = new WebView(); view.setMinSize(widthDouble, heightDouble); view.setPrefSize(widthDouble, heightDouble); final WebEngine webengine = view.getEngine(); URL url = getClass().getResource("home.html"); webengine.load(url.toExternalForm()); webengine.getLoadWorker().stateProperty().addListener( new ChangeListener() { @Override public void changed(ObservableValue ov, State oldState, State newState) { if (newState == State.SUCCEEDED) { Document document = (Document) webengine.executeScript("document"); EventTarget banana = (EventTarget) document.getElementById("banana"); banana.addEventListener("click", new MyEventListener(), true); } } }); It seems very weird to me that I need to specify "click" as a string. I actually wanted the drop event, but couldn't figure out what the arbitrary string was for that. Which is exactly why strings suck in this context. Many thanks to Martin Kavuma from the Technical University of Eindhoven, whom I met today and who inspired me to go down this interesting trail.
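    The MyEventListener class referenced above isn't shown in the post; here is a minimal sketch of what it might look like, assuming the standard org.w3c.dom.events API that the WebEngine document exposes (the InstanceContent wiring is elided):

        import org.w3c.dom.events.Event;
        import org.w3c.dom.events.EventListener;

        class MyEventListener implements EventListener {
            @Override
            public void handleEvent(Event evt) {
                // e.g., add the current date to the InstanceContent in the Lookup
                System.out.println("banana event: " + evt.getType());
            }
        }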

    Read the article

  • How to pass an interface to Java from Unity code?

    - by nickbadal
    First, let me say that this is my first experience with Unity, so the answer may be right under my nose. I've also posted this question on Unity's answers site, but plugin questions don't seem to be as frequently answered there. I'm trying to create a plugin that allows me to access an SDK from my game. I can call SDK methods just fine using AndroidJavaObject and I can pass data to them with no issue. But there are some SDK methods that require an interface to be passed. For example, my Java function: public void attemptLogin(String username, String password, LoginListener listener); where listener is a callback interface. I would normally run this code from Java as such: attemptLogin("username", "password", new LoginListener() { @Override public void onSuccess() { //Yay! do some stuff in the game } @Override public void onFailure(int error) { //Uh oh, find out what happened based on error } }); Is there a way to pass a C# interface through JNI/Unity to my attemptLogin function? Or is there a way to create a mimicking interface in C# that I can call from inside the Java code (and pass in any kind of parameter)? Thanks in advance! :)
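    One approach worth trying (a sketch; the package and class names below are hypothetical stand-ins for the real SDK's): Unity's AndroidJavaProxy lets a C# object implement a Java interface, so the Java side can call back into C#:

        // C# class that mimics the Java LoginListener interface (requires using UnityEngine;)
        class LoginListenerProxy : AndroidJavaProxy {
            public LoginListenerProxy() : base("com.example.sdk.LoginListener") {} // hypothetical interface name
            void onSuccess() { Debug.Log("login succeeded"); }
            void onFailure(int error) { Debug.Log("login failed: " + error); }
        }

        // passing it through to attemptLogin
        var sdk = new AndroidJavaObject("com.example.sdk.Sdk"); // hypothetical class
        sdk.Call("attemptLogin", "username", "password", new LoginListenerProxy());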

    Read the article

  • Context Sensitive JTable (Part 2)

    - by Geertjan
    Now, having completed part 1, let's add a popup menu to the JTable. However, the menu item in the popup menu should invoke the same Action as invoked from the toolbar button created yesterday. Add this to the constructor created yesterday: Collection<? extends Action> stockActions =         Lookups.forPath("Actions/Stock").lookupAll(Action.class); for (Action action : stockActions) {     popupMenu.add(new JMenuItem(action)); } MouseListener popupListener = new PopupListener(); // Add the listener to the JTable: table.addMouseListener(popupListener); // Add the listener specifically to the header: table.getTableHeader().addMouseListener(popupListener); And here's the standard popup enablement code: private JPopupMenu popupMenu = new JPopupMenu(); class PopupListener extends MouseAdapter { @Override public void mousePressed(MouseEvent e) { showPopup(e); } @Override public void mouseReleased(MouseEvent e) { showPopup(e); } private void showPopup(MouseEvent e) { if (e.isPopupTrigger()) { popupMenu.show(e.getComponent(), e.getX(), e.getY()); } } }

    Read the article

  • Redirect network logs from syslog to another file

    - by w0rldart
    I keep logging way too much info (not needed, for now) in my syslog, and not daily or hourly... but constantly. If I want to watch for something in my syslog I just can't, because the network log keeps interfering. So, how can I redirect network logs to another file and/or stop logging them? Dec 10 17:01:33 user kernel: [ 8716.000587] MediaState is connected Dec 10 17:01:33 user kernel: [ 8716.000599] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:33 user kernel: [ 8716.000601] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:33 user kernel: [ 8716.000612] rt28xx_get_wireless_stats ---> Dec 10 17:01:33 user kernel: [ 8716.000615] <--- rt28xx_get_wireless_stats Dec 10 17:01:39 user kernel: [ 8722.000714] MediaState is connected Dec 10 17:01:39 user kernel: [ 8722.000729] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:39 user kernel: [ 8722.000732] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:39 user kernel: [ 8722.000747] rt28xx_get_wireless_stats ---> Dec 10 17:01:39 user kernel: [ 8722.000751] <--- rt28xx_get_wireless_stats Dec 10 17:01:44 user kernel: [ 8726.904025] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:01:45 user kernel: [ 8728.003138] MediaState is connected Dec 10 17:01:45 user kernel: [ 8728.003153] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:45 user kernel: [ 8728.003157] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:45 user kernel: [ 8728.003171] rt28xx_get_wireless_stats ---> Dec 10 17:01:45 user kernel: [ 8728.003175] <--- rt28xx_get_wireless_stats Dec 10 17:01:51 user kernel: [ 8734.004066] MediaState is connected Dec 10 17:01:51 user kernel: [ 8734.004079] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:51 user kernel: [ 8734.004082] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:51 user kernel: [ 8734.004096] rt28xx_get_wireless_stats ---> Dec 10 17:01:51 user kernel: [ 8734.004099] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.004108] MediaState is connected Dec 10 17:01:57 user kernel: [ 8740.004119] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:57 user kernel: [ 8740.004121] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:57 user kernel: [ 8740.004132] rt28xx_get_wireless_stats ---> Dec 10 17:01:57 user kernel: [ 8740.004135] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.436021] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:03 user kernel: [ 8746.005280] MediaState is connected Dec 10 17:02:03 user kernel: [ 8746.005294] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:03 user kernel: [ 8746.005298] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:03 user kernel: [ 8746.005312] rt28xx_get_wireless_stats ---> Dec 10 17:02:03 user kernel: [ 8746.005315] <--- rt28xx_get_wireless_stats Dec 10 17:02:09 user kernel: [ 8752.004790] MediaState is connected Dec 10 17:02:09 user kernel: [ 8752.004804] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:09 user kernel: [ 8752.004808] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:09 user kernel: [ 8752.004821] rt28xx_get_wireless_stats ---> Dec 10 17:02:09 user kernel: [ 8752.004825] <--- rt28xx_get_wireless_stats Dec 10 17:02:15 user kernel: [ 8757.984031] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:15 user kernel: [ 8758.004078] MediaState is connected Dec 10 17:02:15 user kernel: [ 8758.004094] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:15 user kernel: [ 8758.004097] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:15 user kernel: [ 8758.004112] rt28xx_get_wireless_stats ---> Dec 10 17:02:15 user kernel: [ 8758.004116] <--- rt28xx_get_wireless_stats Dec 10 17:02:16 user kernel: [ 8759.492017] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 
17:02:19 user kernel: [ 8762.002179] SCANNING, suspend MSDU transmission ... Dec 10 17:02:19 user kernel: [ 8762.004291] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! Dec 10 17:02:19 user kernel: [ 8762.025055] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:19 user kernel: [ 8762.027249] RT35xx: SwitchChannel#1(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.170206] RT35xx: SwitchChannel#2(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.318211] RT35xx: SwitchChannel#3(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.462269] RT35xx: SwitchChannel#4(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.606229] RT35xx: SwitchChannel#5(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.750202] RT35xx: SwitchChannel#6(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x07, R=0x02 Dec 10 17:02:20 user kernel: [ 8762.894217] RT35xx: SwitchChannel#7(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.038202] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.040194] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.040199] BAR(1) : Tid (0) - 03a3:037e Dec 10 17:02:20 user kernel: [ 8763.040387] SYNC - End of SCAN, restore to channel 11, Total BSS[03] Dec 10 17:02:20 user kernel: [ 8763.040400] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:20 user kernel: [ 8763.040402] bFastRoamingScan ~~~~~~~~~~~~~ Get back to send data ~~~~~~~~~~~~~ Dec 10 17:02:20 user kernel: [ 8763.040405] SCAN done, resume MSDU transmission ... Dec 10 17:02:20 user kernel: [ 8763.047022] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.047026] BAR(1) : Tid (0) - 03a3:03a5 Dec 10 17:02:21 user kernel: [ 8763.898130] bImprovedScan ............. Resume for bImprovedScan, SCAN_PENDING .............. Dec 10 17:02:21 user kernel: [ 8763.898143] SCANNING, suspend MSDU transmission ... Dec 10 17:02:21 user kernel: [ 8763.900245] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! 
Dec 10 17:02:21 user kernel: [ 8763.921144] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:21 user kernel: [ 8763.923339] RT35xx: SwitchChannel#8(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8763.996019] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:21 user kernel: [ 8764.066221] RT35xx: SwitchChannel#9(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.210212] RT35xx: SwitchChannel#10(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.215536] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.215542] BAR(1) : Tid (0) - 0457:0452 Dec 10 17:02:21 user kernel: [ 8764.244000] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.244004] BAR(1) : Tid (0) - 0459:0456 Dec 10 17:02:21 user kernel: [ 8764.253019] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.253023] BAR(1) : Tid (0) - 045c:0458 Dec 10 17:02:21 user kernel: [ 8764.256677] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.256681] BAR(1) : Tid (0) - 045c:045b Dec 10 17:02:21 user kernel: [ 8764.259785] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.259788] BAR(1) : Tid (0) - 045d:045b Dec 10 17:02:21 user kernel: [ 8764.280467] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.280471] BAR(1) : Tid (0) - 045f:045c Dec 10 17:02:21 user kernel: [ 8764.282189] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.282192] BAR(1) : Tid (0) - 045f:045e Dec 10 17:02:21 user kernel: [ 8764.354204] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.356408] ScanNextChannel():Send PWA NullData frame to notify the associated AP! Dec 10 17:02:21 user kernel: [ 8764.498202] RT35xx: SwitchChannel#12(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.642210] RT35xx: SwitchChannel#13(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF7, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.790229] RT35xx: SwitchChannel#14(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF8, K=0x04, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.934238] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.935243] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.935249] BAR(1) : Tid (0) - 048e:0485 Dec 10 17:02:22 user kernel: [ 8764.936423] SYNC - End of SCAN, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.936436] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:22 user kernel: [ 8764.936440] SCAN done, resume MSDU transmission ... Dec 10 17:02:22 user kernel: [ 8764.940529] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.942178] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.942182] BAR(1) : Tid (0) - 0493:048e Dec 10 17:02:22 user kernel: [ 8764.942715] CNTL - All roaming failed, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.948016] MMCHK - No BEACON. restore R66 to the low bound(56) Dec 10 17:02:22 user kernel: [ 8764.948307] ===>rt_ioctl_giwscan. 
5(5) BSS returned, data->length = 1111 Dec 10 17:02:23 user kernel: [ 8766.048073] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:23 user kernel: [ 8766.552034] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:27 user kernel: [ 8770.001180] MediaState is connected Dec 10 17:02:27 user kernel: [ 8770.001197] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:27 user kernel: [ 8770.001201] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:27 user kernel: [ 8770.001219] rt28xx_get_wireless_stats ---> Dec 10 17:02:27 user kernel: [ 8770.001223] <--- rt28xx_get_wireless_stats Dec 10 17:02:28 user kernel: [ 8771.564020] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:29 user kernel: [ 8772.064031] QuickDRS: TxTotalCnt <= 15, train back to original rate
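    One way to get the diversion done (a sketch, assuming Ubuntu's stock rsyslog and that the noisy lines all come from this wireless driver): drop a filter file such as /etc/rsyslog.d/30-wifi-noise.conf in place that routes matching kernel messages to their own file and then discards them, so they never reach /var/log/syslog:

        # divert the chatty Ralink driver messages to a separate log
        :msg, contains, "rt_ioctl_giw" /var/log/wifi-noise.log
        & ~
        :msg, contains, "rt28xx_get_wireless_stats" /var/log/wifi-noise.log
        & ~

    followed by sudo service rsyslog restart. The "& ~" line is rsyslog's classic discard action applied to the preceding filter; the match strings here are just taken from the sample output above and may need extending for the SCANNING/QuickDRS lines.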

    Read the article

  • How to archive data from a table to a local or remote database in SQL 2005 and SQL 2008

    - by simonsabin
    Often you have the need to archive data from a table. This leads to a number of challenges: 1. How can you do it without impacting users 2. How can I make it transactionally consistent, i.e. the data I put in the archive is the data I remove from the main table 3. How can I get it to perform well Point 1 is very much tied to point 3. If it doesn't perform well then the delete of data is going to cause lots of locks and thus potential blocking. For points 1 and 3 refer to my previous posts DELETE-TOP-x-rows-avoiding-a-table-scan and UPDATE-and-DELETE-TOP-and-ORDER-BY---Part2. In essence you need to be removing small chunks of data from your table and you want to do that avoiding a table scan. So that deals with the delete approach, but archiving is about inserting that data somewhere else. Well in SQL 2008 they introduced a new feature INSERT over DML (Data Manipulation Language, i.e. SQL statements that change data), or composable DML: the ability to nest DML statements within themselves, so you can pass the results of an insert to an update or a merge. I've mentioned this before here SQL-Server-2008---MERGE-and-optimistic-concurrency. This feature is currently limited to being able to consume the results of a DML statement in an INSERT statement. There are many restrictions which you can find here http://msdn.microsoft.com/en-us/library/ms177564.aspx look for the section "Inserting Data Returned From an OUTPUT Clause Into a Table" Even with the restrictions, what we can do is consume the OUTPUT from a DELETE and INSERT the results into a table in another database. Note that in BOL it refers to not being able to use a remote table; remote means a table on another SQL instance. To show this working, use this SQL to set up two databases foo and fooArchive create database foo go --create the source table fred in database foo select * into foo..fred from sys.objects go create database fooArchive go if object_id('fredarchive',DB_ID('fooArchive')) is null begin     select getdate() ArchiveDate,* into fooArchive..FredArchive from sys.objects where 1=2       end go And then we can use this simple statement to archive the data insert into fooArchive..FredArchive select getdate(),d.* from (delete top (1)         from foo..Fred         output deleted.*) d         go In this statement the delete can be any delete statement you wish, so if you are deleting by ids or a range of values then you can do that. Refer to the DELETE-TOP-x-rows-avoiding-a-table-scan post to ensure that your delete is going to perform. The last thing you want is to perform 100 deletes of 5000 records each and have each of those deletes do a table scan. For a solution that works for SQL2005, or if you want to archive to a different server, you can use linked servers or SSIS. This example shows how to do it with linked servers. [ONARC-LAP03] is the source server. begin transaction insert into fooArchive..FredArchive select getdate(),d.* from openquery ([ONARC-LAP03],'delete top (1)                     from foo..Fred                     output deleted.*') d commit transaction and to prove the transactions work, try the following; you should get the same number of records before and after. 
select (select count(1) from foo..Fred) fred        ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive   begin transaction insert into fooArchive..FredArchive select getdate(),d.* from openquery ([ONARC-LAP03],'delete top (1)                     from foo..Fred                     output deleted.*') d rollback transaction   select (select count(1) from foo..Fred) fred        ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive The transactions are very important with this solution. Look what happens when you don't have transactions and an error occurs   select (select count(1) from foo..Fred) fred        ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive   insert into fooArchive..FredArchive select getdate(),d.* from openquery ([ONARC-LAP03],'delete top (1)                     from foo..Fred                     output deleted.*                     raiserror (''Oh doo doo'',15,15)') d                     select (select count(1) from foo..Fred) fred        ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive Before running this, think what the result would be. I got it wrong. What seems to happen is that the remote query is executed as a transaction, and the error causes that to roll back. However the results have already been sent to the client and so get inserted into the archive table, even though the remote delete was rolled back.

    Read the article

  • The enterprise vendor con - connecting SSD's using SATA 2 (3Gbits) thus limiting their performance

    - by tonyrogerson
    When comparing SSD against Hard drive performance it really makes me cross when folk think comparing an array of SSD running on 3GBits/sec to hard drives running on 6GBits/second is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives they compare four SSD drives connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits [Tony slaps forehead while shouting DOH!]. It is true in the case of hard drives it probably doesn’t make much difference 3Gbit or 6Gbit because SAS and SATA are both end to end protocols rather than shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads, in my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO equating to 44.5K IOps, ok, if that was 3GBits it would be less – around 280MiBytes per second, oh, but wait a minute [...fingers tap desk] You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6GBits) interface. SSD’s are fast: not only low latency and high IOps, but they also offer a very large sustained transfer rate. Consider the OCZ Agility 3: it so happens that in my master's dissertation I did the same test but on a different box, and I got 374MiBytes per second at an average latency of 2.67ms per IO equating to 47.9K IOps – cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive set in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes per second, thus losing almost 100MiBytes per second throughput and a ton of IOps too. So why the hell are “enterprise” vendors still only connecting SSD’s at 3GBits? Well, my conspiracy states that they have no interest in you moving to SSD because they’ll lose so much money; the argument that they use SATA 2 doesn’t wash, SATA 3 has been out for some time now and all the commodity stuff you buy uses it now. Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash Hard Drives on that. It used to be true that the opposite held – that Hard Drives thrashed SSD’s on price per GB – but is that true now? I’m not so sure – a 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms) which equates to £1.09 per GB compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops) which equates to £0.88 per GB. Ok, I compared an “enterprise” hard drive with a “commodity” SSD, ok, so things get a little more complicated here: most “enterprise” SSD’s are SLC and most commodity ones are MLC; SLC gives more performance and better wear, and I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn’t cut it, SSDs need 6Gbit to breathe, and even that SSD’s are pushing. 
Alas, SSD’s are connected using SATA, so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
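    As a sanity check on the throughput figures quoted above: SATA uses 8b/10b line encoding, so a 3Gbit/s link carries at most 3,000,000,000 × 8/10 bits = 300 MB/s of payload, which is about 286 MiB/s and matches the roughly 280 MiBytes/second ceiling mentioned; SATA 3 doubles that to around 572 MiB/s.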

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything. Note to Others About 10 years ago I stumbled across this bit of information just when I needed it and it saved my project.  Then a few years later, when it would have been nice but not critical, for some reason I could not find it again anywhere.  Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way. Here’s the story…  When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task) and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently) then you can get funny results where the query reports back that a cell value is null even when you know it contains data. What happens is that Excel doesn’t really have data types.  While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.) that is not really defining the cell as being of a certain type like we think of when working with databases.  But, presumably, to make things more convenient for the user (programmer) when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner.  This is all well and good IF your data is consistent in every row and matches what the processor guessed.  And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting). Now here’s the problem: suppose that your spreadsheet contains information about clothing, and one of the columns is Size.  Now suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL.  What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank.  Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE!  WHY doesn’t the query see it?!?!”  And the hair-pulling begins. So, what’s a developer to do?  One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero).  Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains.  And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there. Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit.  You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there.  
That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
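    For reference, one way to flip the setting from a script (a sketch; the key path is the one noted above, though on 64-bit Windows with a 32-bit Jet driver it may sit under a Wow6432Node branch instead):

        reg add "HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f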

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server.  Data between WLS and database can be encrypted.  The server can be authenticated so you have proof that the database can be trusted by validating a certificate from the server.  The client can be authenticated so that the database only accepts connections from clients that it trusts. Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates.  By using it correctly, clear text passwords can be eliminated from the JDBC configuration and client/server configuration can be simplified by sharing the wallet across multiple datasources. There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1].  The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide on what steps need to be taken in the variety of available options; use the links above for details. SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL.  However, there is a fair amount of setup needed.  Also remember that there is an overhead in performance. Creating the wallets The common use cases are 1. “data encryption and server-only authentication”, requiring just a trust store, or 2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store. It is recommended to use the auto-login wallet type so that clear text passwords are not needed in the datasource configuration to open the wallet.  The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2].  The file name is “cwallet.sso”. Wallets are created using the orapki tool.  They need to be created based on the usage (encryption and/or authentication).  This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation. Database Server Configuration It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION.  These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol.  The recommended port is 2484. LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))) WebLogic Server Classpath The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar One way to do this is to add them to PRE_CLASSPATH environment variable for use with the standard WebLogic scripts. Setting the Oracle Security Provider It’s necessary to enable the Oracle PKI provider on the client side.  
This can either be done statically by updating the java.security file under the JRE or dynamically by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider (), 3); See the full example of the startup class in [LINK2]. Datasource Configuration When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following. jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice))) For encryption and server authentication, use the datasource connection properties: - javax.net.ssl.trustStore=location of wallet file on the client - javax.net.ssl.trustStoreType=”SSO” For client authentication, use the datasource connection properties: - javax.net.ssl.keyStore=location of wallet file on the client - javax.net.ssl.keyStoreType=”SSO” Note that the driver connection properties for the wallet require a file name, not a directory name. Active GridLink ONS over SSL For completeness, there is another SSL usage for WLS datasources.  The communication with the Oracle Notification Service (ONS) for load balancing information and node up/down events can use SSL also. Create an auto-login wallet and use the wallet on the client and server.  The following is a sample sequence to create a test wallet for use with ONS. orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet On the database server side, it’s necessary to define the walletfile directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined.  In addition to the host and port, the wallet file directory must be specified.  By not giving a password, an SSO wallet is assumed. Summary To use SSL with the Oracle thin driver without any clear text passwords, use an SSO Oracle Wallet.  SSL support in the Oracle thin driver is available starting in 10g Release 2.
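    To make the server-side piece concrete, here is a sketch of the sqlnet.ora entries the "Database Server Configuration" step describes (the directory is a placeholder; listener.ora takes an equivalent WALLET_LOCATION entry):

        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))

        SSL_CLIENT_AUTHENTICATION = FALSE  # set to TRUE only if clients must present certificates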

    Read the article

  • Schizophrenic Ubuntu 12.10-12.04: Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both in the network utility and on the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png). I tried many different suggestions from many different forums: installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the ath9k driver... The terminal shows no hw or sw lock for the Atheros card; nevertheless, it is still disabled. Nothing has worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details: myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0 myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc. 
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe
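    One observation for anyone chasing the same symptoms (a hedged note rather than a confirmed fix): in the rfkill output above, phy1 shows Hard blocked: yes; judging by the dmesg lines, phy0 is the ath9k/Atheros card, so that hard block belongs to the rt2500usb USB interface. A hard block cannot be cleared in software (it normally comes from a physical wireless switch or a BIOS setting), whereas soft blocks clear with:

        sudo rfkill unblock all

    After toggling the machine's wireless switch or the BIOS option, rfkill list and iwlist scan are worth re-running to see whether either interface comes up.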

    Read the article

  • SQL SERVER – How to Get SQL Server Restart Notification?

    - by Pinal Dave
    A few days back my friend called me to ask if there is any tool which can be used to get a restart notification about SQL in their environment. I told him that SQL Server can do it by itself with some configuration. He was happy and surprised to learn that he need not spend any extra money. In SQL Server, we can configure stored procedure(s) to run at start-up of SQL Server. This blog gives the steps to achieve it. There are many situations where this feature can be used. Below are a few. Logging SQL Server startup timings Modifying data in some table during startup (i.e. a table in tempdb) Sending a notification about SQL start. Step 1 – Enable ‘scan for startup procs’ This can be done either using T-SQL or the User Interface of Management Studio. EXEC sys.sp_configure N'Show Advanced Options', N'1' GO RECONFIGURE WITH OVERRIDE GO EXEC sys.sp_configure N'scan for startup procs', N'1' GO RECONFIGURE WITH OVERRIDE GO Below is the interface to change the setting. We need to go to “Server” > “Properties” and use the “Advanced” tab. “Scan for Startup Procs” is the parameter under the “Miscellaneous” section as shown below. We need to set the value to “True” and hit OK. Step 2 – Create stored procedure It’s important to note that the procedure is executed after recovery is finished for ALL databases. Here is a sample stored procedure. You can use your own logic in the procedure. CREATE PROCEDURE SQLStartupProc AS BEGIN CREATE TABLE ##ThisTableShouldAlwaysExists (AnyColumn INT) END Step 3 – Set Procedure to run at startup We need to use sp_procoption to mark the procedure to run at startup. Here is the code to let SQL know that this is a startup proc. sp_procoption 'SQLStartupProc', 'startup', 'true' This can be used only for procedures in the master database. Msg 15398, Level 11, State 1, Procedure sp_procoption, Line 89 Only objects in the master database owned by dbo can have the startup setting changed. We also need to remember that such a procedure should not have any input/output parameters. Here is the error which would be raised. Msg 15399, Level 11, State 1, Procedure sp_procoption, Line 107 Could not change startup option because this option is restricted to objects that have no parameters. Verification Here is the query to find which procedures are marked as startup procedures. SELECT name FROM sys.objects WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1 Once this is done, I restarted the SQL instance and here is what we see in the SQL ERRORLOG Launched startup procedure 'SQLStartupProc'. This confirms that the stored procedure is executed. You can also notice that this is done after all databases are recovered. Recovery is complete. This is an informational message only. No user action is required. After a few days my friend called me again and said – I want to turn this OFF! Use the comments section and post the answer for him.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
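    For anyone who wants to post the answer in the comments, here is a sketch of the reverse steps (untested here, but mirroring the setup above):

        -- unmark the procedure so it no longer runs at startup
        EXEC sp_procoption 'SQLStartupProc', 'startup', 'false';

        -- optionally switch the server-wide scan off again
        EXEC sys.sp_configure N'scan for startup procs', N'0';
        RECONFIGURE WITH OVERRIDE;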

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar Aspect Ratio is very important to home video. What is aspect ratio – the ratio of the width to the height 2.35:1 The image is 2.35 times as wide as it is high Pixar uses this for half of our movies This is called a widescreen image When modified to fit your television screen They cut this to fit the box of your screen When a comparison is made huge chunks of the picture are missing It is harder to find what is going on when these pieces are missing The whole is greater than the pieces themselves. If you are missing pieces – you are missing the movie The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie Why different aspect ratios? Film before the 1950s 1.33:1 Academy Standard There were all sorts of aspect ratios though. There was no standard. Thomas Edison developed projecting images onto a wall/screen He didn’t patent it as he saw no value in it. Then 1.37:1 came about to add a strip of sound This is the same size as a 35mm film Around 1952 – TV comes along NTSC Television followed the Academy Standard (4x3) Once TV came out, movie theater attendance plummets So Film brought forth color to combat this. Also early 3D Also Widescreen was brought forth. Cinema-Scope Studios at the time made movies bigger and bigger There was a Napoleon movie that was actually 4x1 … really wide. 1.85:1 Academy Flat 2.35:1 Anamorphic Scope (aka Panavision/Cinemascope) Almost all movies are made in these two aspect ratios Pixar has done half in one and half in the other Why choose one over the other? Artist choice It is part of the story the director wants to tell Can we preserve the story outside of the theaters? TVs before 1998 – they were very square Now TVs are very wide Historical options Toy Story was released as it was and people cut it in a way that wasn’t liked by the studio Pan and Scan is another option Cut and then scan left or right depending on where the action is Frame Height Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground The characters seem to get lost in the picture You lose what the director originally intended Re-staging For animated movies, you can move characters around – restage the scene. It is a new, completely different version of the film This is the best possible option that Pixar came up with They have stopped doing this really as the demand has pretty much dropped off Why not 1.33 today? There has been an evolution of taste and demands. VHS is a linear item The focus is about portability and not about quality Most was pan and scan and the quality was so bad – but people didn’t notice DVD was introduced in 1996 You could have more content – two versions of the film You could have the widescreen version and the 1.33 version People realized that they are seeing more of the movie with the widescreen High Def Televisions (16x9 monitors) This was introduced in 2005 Blu-ray Disc was introduced in 2006 This is all widescreen You cannot find a square TV anymore TVs are roughly 1.85:1 aspect ratio There is a change in demand Users are used to black bars and are used to widescreen Users are educated now What’s next for in-flight entertainment? High Def IFE Personal Electronic Devices 3D inflight

    Read the article

  • disks not ready in array cause mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available, the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs the system boots fine and my array is mounted exactly where I intend it to. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else. # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. # # by default (built-in), scan all partitions (/proc/partitions) and all # containers for MD superblocks. alternatively, specify devices to scan, using # wildcards if desired. #DEVICE partitions containers # auto-create devices with Debian standard permissions # CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700 # by mkconf $Id$ ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae Here is fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sdc2 during installation UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1 # swap was on /dev/sdc3 during installation UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0 # mount large raid device on /data /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0 output of cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1] 23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU] unused devices: <none> Here is the output of mdadm --detail --scan --verbose ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online. edit: changed title to properly reflect situation
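    A hedged suggestion rather than a confirmed fix: since the symptom is that assembly runs before all ten disks have appeared, one commonly tried workaround is to give the kernel extra settle time at boot, for example via the rootdelay parameter in /etc/default/grub:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=60"
        # then apply it:
        sudo update-grub

    rootdelay is a stock kernel parameter that pauses before the root device is mounted; whether it also holds off mdadm's incremental assembly long enough depends on the initramfs scripts in use, so treat this as an experiment rather than a guarantee.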

    Read the article

  • GI ????

    - by Allen Gao
    This post walks through the startup sequence of 11gR2 Grid Infrastructure (GI) and where to look when it fails. Broadly, GI comes up in three layers: the ohasd layer, the agents layer, and the crsd layer. First, the ohasd layer. 1. The entry in /etc/inittab, h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null, starts the process root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run. For that to happen, the following must be in place: + the init.ohasd script exists + the OS is at the appropriate run level + the S* init script (e.g. S96ohasd) is present + GI autostart is enabled (crsctl enable crs). Next, ohasd.bin starts; it needs to read the OLR, so if ohasd.bin does not come up, check that the OLR is present and intact. The OLR lives at $GRID_HOME/cdata/${HOSTNAME}.olr. 2. ohasd.bin then spawns its agents (orarootagent, oraagent, cssdagent and cssdmonitor). The agent executables live under $GRID_HOME/bin; make sure they exist and are not corrupted. Second, the agents layer, which brings up the following daemons. 1. mdnsd provides multicast name resolution so that the cluster nodes can discover one another. 2. gpnpd handles the bootstrap of the cluster configuration: it maintains and distributes the gpnp profile (<gi_home>/gpnp/profiles/peer/profile.xml) between the nodes, relying on mdnsd for communication; if gpnpd does not work properly, the nodes can end up with inconsistent copies of the profile. 3. gipcd manages the communication over the cluster interconnect; it reads the interconnect definition from the gpnp profile, so it depends on gpnpd. 4. ocssd.bin reads the voting disk locations from the gpnp profile and joins the cluster; if ocssd.bin fails to start, check that: + the gpnp profile is present and readable + gpnpd is up and healthy + the ASM disks holding the voting files are accessible + the private network between the nodes works. 5. The remaining resources started at this layer are ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf and ora.crsd. A note on ordering: ocssd.bin, gpnpd.bin and gipcd.bin are started together, but ocssd.bin and gipcd.bin wait for the gpnp profile, so if gpnpd.bin fails, ocssd.bin and gipcd.bin cannot proceed. Finally, the crsd layer. 1. crsd needs to read the OCR; if the OCR is stored in ASM, the ASM instance must be up and the diskgroup mounted first, and if the OCR cannot be read, crsd will not start. 2. crsd spawns its own agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>); these also live under $GRID_HOME/bin and must exist and be uncorrupted. 3. crsd then manages the user resources: ora.net1.network, the network resource on which scanvip, vip and listener resources depend (if it goes offline, the vip, scanvip and listener resources go offline with it); ora.<scan_name>.vip, the SCAN VIPs, normally three of them; ora.<node_name>.vip, the per-node VIP; ora.<listener_name>.lsnr, the listeners (from 11gR2 on, listener.ora is generated automatically and normally needs no manual editing); ora.LISTENER_SCAN<n>.lsnr, the SCAN listeners; ora.<diskgroup_name>.dg, the ASM diskgroup resources (mounting the diskgroup brings the resource online, dismounting takes it offline); ora.<db_name>.db, the database resource (from 11gR2 on, a single resource represents the whole RAC database, the instance placement being expressed through "USR_ORA_INST_NAME@SERVERNAME(<node name>)" attributes; the resource carries a dependency on the ASM diskgroups it uses, which can be adjusted with crsctl modify res ......); ora.<service_name>.svc, the database services (from 11gR2 on, a single resource per service, unlike the separate srv and cs resources of 10gR2); ora.cvu (introduced in 11.2.0.2, runs cluvfy checks against the cluster periodically); and ora.ons, the ONS resource, online by default. The logs to consult when diagnosing GI startup problems: $GRID_HOME/log/<node_name>/ocssd <== ocssd.bin log $GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin log $GRID_HOME/log/<node_name>/gipcd <== gipcd.bin log $GRID_HOME/log/<node_name>/agent/crsd <== logs of the agents spawned by crsd.bin $GRID_HOME/log/<node_name>/agent/ohasd <== logs of the agents spawned by ohasd.bin $GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin log $GRID_HOME/log/<node_name>/client <== logs of the GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.) $GRID_HOME/log/<node_name>/ctssd <== ctssd.bin log $GRID_HOME/log/<node_name>/crsd <== crsd.bin log $GRID_HOME/log/<node_name>/cvu <== cluvfy-related logs $GRID_HOME/bin/diagcollection.sh <== script that collects all the relevant logs in one pass Finally, the socket files under /var/tmp/.oracle and /tmp/.oracle are used for inter-process communication between the GI daemons; if they are removed or damaged while GI is running, the daemons can misbehave, so leave them alone unless explicitly instructed otherwise.

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and not sure how to go about it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files on IMDB. Also the SQL query that I am about to show is from that said program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc. "QUERY PLAN" "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " -> Append (cost=39094.17..46491.79 rows=98 width=46)" " -> HashAggregate (cost=39094.17..39094.87 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " -> Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " -> Nested Loop (cost=0.00..7395.94 rows=28 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " -> Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)" " Output: akatitles.movieid, akatitles.language, akatitles.title, " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " -> Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)" " Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid" " Index Cond: (public.movies.movieid = akatitles.movieid)" SELECT * FROM ((SELECT DISTINCT title, movieid, year FROM movies WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}')) UNION (SELECT movies.title, movies.movieid, movies.year FROM movies INNER JOIN akatitles ON movies.movieid=akatitles.movieid WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2; Returns 612 rows in 9078ms. Database backup (plain text) is 1.61GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards Anthoni
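    A sketch of one common remedy (hedged, since it assumes the pg_trgm contrib module is available for this PostgreSQL version): leading-wildcard ILIKE '%Babe%' predicates cannot use a b-tree index, but a trigram index can serve them:

        CREATE EXTENSION pg_trgm;  -- on pre-9.1 releases, run the contrib SQL script instead
        CREATE INDEX movies_title_trgm ON movies USING gin (title gin_trgm_ops);
        CREATE INDEX akatitles_title_trgm ON akatitles USING gin (title gin_trgm_ops);

    With those in place the planner can replace the two sequential scans with bitmap index scans (on older releases gist_trgm_ops may be the available operator class); run ANALYZE on both tables afterwards.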

    Read the article

  • Design problem with callback functions in android

    - by Franz Xaver
Hi folks! I'm currently developing an Android app that accesses wifi values; that is, the application needs to scan for all access points and their signal strengths. I know that I have to extend the class BroadcastReceiver, overriding the method BroadcastReceiver.onReceive(Context context, Intent intent), which is called when the values are ready. Perhaps there are solutions provided by the Android system itself, but I'm relatively new to Android, so I could use some help. The problem I've encountered is that I have one class (an activity, thus controlled by the user) that needs these scan results for two different things (first, to save the values in a database, or second, to use them for further calculations, but never both at the same moment). So how should I design the callback system in order to "transport" the scan results from onReceive(Context context, Intent intent) to the operation intended by the user? My first solution was to define enums for each use case (save, or use for calculations) which wlan-interested classes have to submit when querying for the values. But that would force the BroadcastReceiver-extending class to save the current enum and use it as a parameter in the callback function of the querying class (this querying class needs to know what it asked for when being called back). That seems kind of dirty to me ;) So, anyone have a good idea for this?
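One possible shape for this (a sketch only, not tested against the asker's code; the class, enum and interface names are made up): keep the receiver dumb and have it hand back both the results and the purpose they were requested for, through a single callback interface. The receiver would still need to be registered for WifiManager.SCAN_RESULTS_AVAILABLE_ACTION.

import java.util.List;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;

public class ScanResultsReceiver extends BroadcastReceiver {
    // the two use cases the activity cares about
    public enum Purpose { SAVE_TO_DB, CALCULATE }

    public interface Callback {
        void onScanResults(List<ScanResult> results, Purpose purpose);
    }

    private Purpose pendingPurpose;
    private Callback callback;

    // the activity calls this instead of WifiManager.startScan() directly
    public void requestScan(WifiManager wifi, Purpose purpose, Callback cb) {
        this.pendingPurpose = purpose;
        this.callback = cb;
        wifi.startScan();
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        WifiManager wifi =
            (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        if (callback != null) {
            // hand back both the results and what they were requested for,
            // so the caller does not have to remember its own pending state
            callback.onScanResults(wifi.getScanResults(), pendingPurpose);
        }
    }
}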

    Read the article

  • Looking for Go equivalent of scanf.

    - by Stephen Hsu
I'm looking for the Go equivalent of scanf(). I tried the following code:

package main

import (
    "scanner"
    "os"
    "fmt"
)

func main() {
    var s scanner.Scanner
    s.Init(os.Stdin)
    s.Mode = scanner.ScanInts
    tok := s.Scan()
    for tok != scanner.EOF {
        fmt.Printf("%d ", tok)
        tok = s.Scan()
    }
    fmt.Println()
}

I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... Also, how do I scan a line composed of a string and some integers? Do I have to change the mode whenever I encounter a new data type? The package documentation says: "Package scanner: A general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use. Updated code:

func main() {
    n := scanf()
    fmt.Println(n)
    fmt.Println(len(n))
}

func scanf() []int {
    nums := new(vector.IntVector)
    reader := bufio.NewReader(os.Stdin)
    str, err := reader.ReadString('\n')
    for err != os.EOF {
        fields := strings.Fields(str)
        for _, f := range fields {
            i, _ := strconv.Atoi(f)
            nums.Push(i)
        }
        str, err = reader.ReadString('\n')
    }
    r := make([]int, nums.Len())
    for i := 0; i < nums.Len(); i++ {
        r[i] = nums.At(i)
    }
    return r
}
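For reference (an addition, not part of the original question): the closest built-in equivalent of scanf() is the fmt package's Scan family (fmt.Scan, fmt.Scanf, fmt.Scanln), which is a better fit than the scanner package for whitespace-separated input of mixed types. A minimal sketch:

package main

import "fmt"

func main() {
    var name string
    var x, y int
    // fmt.Scan reads whitespace-separated tokens into the given pointers;
    // fmt.Scanf would take an explicit format string like C's scanf
    n, err := fmt.Scan(&name, &x, &y)
    if err != nil {
        fmt.Println("scan failed:", err)
        return
    }
    fmt.Println("read", n, "values:", name, x, y)
}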

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
Hi, I have a table which holds files and their types, such as:

CREATE TABLE files (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    filetype VARCHAR(255),
    ...
);

and another table for holding file properties, such as:

CREATE TABLE properties (
    id SERIAL PRIMARY KEY,
    file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
    size INTEGER,
    ... -- other property fields
);

The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow; around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

SELECT f.filetype, avg(size), stddev(size)
FROM files AS f, properties AS pr
WHERE f.id = pr.file_id
GROUP BY f.filetype;

and the explain:

HashAggregate (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
  -> Hash Join (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
       Hash Cond: (f.id = pr.file_id)
       -> Seq Scan on files f (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
       -> Hash (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
            -> Seq Scan on properties pr (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
Total runtime: 74014.520 ms

Any ideas why it is so slow and how to make it faster?
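A diagnostic worth noting (an editor's suggestion, not from the original post): the planner estimates 1,140,941 rows for files but only 805,270 come back, and 62 seconds to sequentially read ~800k narrow rows hints at stale statistics and table bloat. A first thing to try:

VACUUM ANALYZE files;
VACUUM ANALYZE properties;

-- re-check the timing and the row estimates afterwards
EXPLAIN ANALYZE
SELECT f.filetype, avg(pr.size), stddev(pr.size)
FROM files f
JOIN properties pr ON pr.file_id = f.id
GROUP BY f.filetype;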

    Read the article

  • Is there a way in VS2008 (c#) to see all the possible exception types that can originate from a method call?

    - by Matt
Is there a way in the VS2008 IDE for C# to see all the possible exception types that can originate from a method call, or even from an entire try-catch block? I know that IntelliSense and the Object Browser tell me the exception types a method can throw, but is there another way than using the Object Browser every time? Something more accessible while coding? Furthermore, I don't think IntelliSense or the Object Browser do anything more than read the XML code comments. Shouldn't it be possible to scan a class's source and find all the exception types that can be thrown? (Forget pathing based on method input; just scan the code for exception types.) Am I wrong? Extending this idea, you should be able to hover over the 'try' or 'catch' keywords and be presented with a tooltip listing all the types of exceptions that can be thrown. My question boils down to: does a VS2008 add-on like this exist? Does VS2010 do this, perhaps? If not, could you implement it the way I've described, by scanning the class code for thrown exception types, and would people find it useful? Exceptions bubble up, so you would have to scan every bit of code in every method call, which I guess could be impractical, though I suppose you could build an index the first time and increase your speed that way. (It might be a cool little project....)
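Worth adding (not part of the original question): the asker's suspicion is right, as IntelliSense only surfaces exceptions declared in XML doc comments rather than analyzing method bodies. A sketch of the tags it reads (class and method names invented):

using System;
using System.IO;

public class Document
{
    /// <summary>Loads the file at the given path.</summary>
    /// <exception cref="FileNotFoundException">
    /// Thrown when no file exists at <paramref name="path"/>.
    /// </exception>
    /// <exception cref="UnauthorizedAccessException">
    /// Thrown when the caller lacks read permission.
    /// </exception>
    public string Load(string path)
    {
        // only the exceptions documented above appear in IntelliSense,
        // regardless of what this body can actually throw
        return File.ReadAllText(path);
    }
}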

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table: an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

SELECT DATE(timestamp) AS day, COUNT(*)
FROM actions
WHERE DATE(timestamp) >= '20100101'
  AND DATE(timestamp) <  '20110101'
GROUP BY day;

Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

GroupAggregate (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
  -> Sort (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
       Sort Key: (date("timestamp"))
       Sort Method: external merge Disk: 372496kB
       -> Seq Scan on actions (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
            Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
Total runtime: 32447.762 ms

Since I'm seeing a sequential scan, I tried indexing on the date aggregate:

CREATE INDEX ON actions (DATE(timestamp));

which cuts the runtime by about 50%:

HashAggregate (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
  -> Seq Scan on actions (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
       Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
Total runtime: 17038.663 ms

I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
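Two suggestions the asker could try (an editor's addition, not from the original post). First, comparing raw timestamps keeps the filter sargable, so a narrow date range can use an ordinary index; note though that the plans above show ~17M of 20M rows matching, and no index helps much when nearly the whole table qualifies. Second, for full-table ranges, a pre-aggregated rollup (refreshed by the application or a nightly job) is the usual fix. A sketch, with invented names:

-- sargable form: filter on the raw column, not on DATE(timestamp)
CREATE INDEX actions_timestamp_idx ON actions (timestamp);

SELECT DATE(timestamp) AS day, COUNT(*)
FROM actions
WHERE timestamp >= '2010-01-01'
  AND timestamp <  '2011-01-01'
GROUP BY day;

-- rollup for queries that span most of the table
CREATE TABLE actions_daily AS
SELECT DATE(timestamp) AS day, COUNT(*) AS n
FROM actions
GROUP BY day;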

    Read the article

  • multiple mysql_real_query() in while loop

    - by Steve
It seems that when I have one mysql_real_query() call in a continuous while loop, the query gets executed OK. However, if multiple mysql_real_query() calls are inside the while loop, one right after the other, then depending on the query, sometimes neither the first query nor the second query executes properly. This seems like a threading issue to me. I'm wondering if the MySQL C API has a way of dealing with this? Does anyone know how to deal with it? mysql_free_result() doesn't help, since I am not even storing the results.

// keep polling as long as stop character '-' is not read
while (szRxChar != '-')
{
    // Check if a read is outstanding
    if (HasOverlappedIoCompleted(&ovRead))
    {
        // Issue a serial port read
        if (!ReadFile(hSerial, &szRxChar, 1, &dwBytesRead, &ovRead))
        {
            DWORD dwErr = GetLastError();
            if (dwErr != ERROR_IO_PENDING)
                return dwErr;
        }
    }

    // Wait 5 seconds for serial input
    if (!(HasOverlappedIoCompleted(&ovRead)))
    {
        WaitForSingleObject(hReadEvent, RESET_TIME);
    }

    // Check if serial input has arrived
    if (GetOverlappedResult(hSerial, &ovRead, &dwBytesRead, FALSE))
    {
        // Wait for the write
        GetOverlappedResult(hSerial, &ovWrite, &dwBytesWritten, TRUE);

        // load tagBuffer with byte stream
        tagBuffer[i] = szRxChar;
        i++;
        tagBuffer[i] = 0; // char arrays are \0 terminated

        // run query with tagBuffer
        if (strlen(tagBuffer) == PACKET_LENGTH)
        {
            sprintf(query, "insert into scan (rfidnum) values ('");
            strcat(query, tagBuffer);
            strcat(query, "')");
            mysql_real_query(&mysql, query, (unsigned int)strlen(query));
            i = 0;
        }

        mysql_real_query(&mysql, "insert into scan (rfidnum) values ('2nd query')",
                         (unsigned int)strlen("insert into scan (rfid) values ('2nd query')"));
        mysql_free_result(res);
    }
}
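An observation on the posted code (an editor's note, not from the original question): the second call sends the 'rfidnum' statement but computes the length of the shorter 'rfid' string, so the statement arrives truncated and is invalid SQL; and since the return codes are never checked, that failure is silent. Also, mysql_free_result() is only needed after mysql_store_result()/mysql_use_result(). A sketch of checking each statement (hypothetical helper name):

#include <stdio.h>
#include <string.h>
#include <mysql.h>

// run one statement and report any error; assumes an already-open MYSQL handle
static int run_stmt(MYSQL *conn, const char *stmt)
{
    if (mysql_real_query(conn, stmt, (unsigned long)strlen(stmt)) != 0) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        return -1;
    }
    return 0;
}

/* usage inside the loop:
   run_stmt(&mysql, query);
   run_stmt(&mysql, "insert into scan (rfidnum) values ('2nd query')");
*/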

    Read the article

  • Pass NSURL from One Class To Another

    - by user717452
In my app delegate's didFinishLaunchingWithOptions, I have the following:

NSURL *url = [NSURL URLWithString:@"http://www.thejenkinsinstitute.com/Journal/"];
NSString *content = [NSString stringWithContentsOfURL:url];
NSString *aString = content;
NSMutableArray *substrings = [NSMutableArray new];
NSScanner *scanner = [NSScanner scannerWithString:aString];
[scanner scanUpToString:@"<p>To Download the PDF, " intoString:nil]; // Scan all characters before #
while (![scanner isAtEnd]) {
    NSString *substring = nil;
    [scanner scanString:@"<p>To Download the PDF, <a href=\"" intoString:nil]; // Scan the # character
    if ([scanner scanUpToString:@"\"" intoString:&substring]) {
        // If the space immediately followed the #, this will be skipped
        [substrings addObject:substring];
    }
    [scanner scanUpToString:@"" intoString:nil]; // Scan all characters before next #
}
// do something with substrings
NSString *URLstring = [substrings objectAtIndex:0];
self.theheurl = [NSURL URLWithString:URLstring];
NSLog(@"%@", theheurl);
[substrings release];

The console printout for theheurl gives me a valid URL ending in .pdf. In the class where I would like to load the URL, I have the following:

- (void)viewWillAppear:(BOOL)animated {
    _appdelegate.theheurl = currentURL;
    NSLog(@"%@", currentURL);
    NSLog(@"%@", _appdelegate.theheurl);
    [worship loadRequest:[NSURLRequest requestWithURL:currentURL
                                          cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                      timeoutInterval:60.0]];
    timer = [NSTimer scheduledTimerWithTimeInterval:(1.0/2.0)
                                             target:self
                                           selector:@selector(tick)
                                           userInfo:nil
                                            repeats:YES];
    [super viewWillAppear:YES];
}

However, both NSLogs in that class come back null. What am I doing wrong in getting the NSURL from the app delegate to the class that loads it?
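One thing that stands out (an observation, not from the original post): the first line of viewWillAppear assigns the view controller's currentURL, presumably still nil, into the delegate, overwriting theheurl, rather than reading from it. A sketch of reading the value instead, assuming a hypothetical MyAppDelegate class name:

- (void)viewWillAppear:(BOOL)animated {
    // read the URL from the app delegate instead of overwriting it
    MyAppDelegate *appDelegate =
        (MyAppDelegate *)[[UIApplication sharedApplication] delegate];
    NSURL *currentURL = appDelegate.theheurl;
    NSLog(@"%@", currentURL);
    [worship loadRequest:[NSURLRequest requestWithURL:currentURL
                                          cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                      timeoutInterval:60.0]];
    [super viewWillAppear:animated];
}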

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation.

Consuming the service

Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and then to maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

<system.serviceModel>
  <client>
    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />
  </client>
</system.serviceModel>

By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client and service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective and, theoretically, less code I have to change if and when Microsoft fixes this behavior.

User interface

The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.

$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier.
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both conserve bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

if (filterContext.HttpContext.Request.IsAjaxRequest())
{
    filterContext.Result = Json(new ErrorModel(...));
    filterContext.ExceptionHandled = true;
}

My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether the response is an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
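Pulling the AJAX conventions above together, a minimal sketch of the client-side call (illustrative names only; actionUrl would come from Url.Action as described, and showError/render are hypothetical helpers):

var actionUrl = '/Items/GetItem'; // in practice, emitted into the page via Url.Action

$.ajax({
    url: actionUrl,
    type: "POST",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    data: JSON.stringify({ id: 42 }),
    success: function (result) {
        // a 200 OK may still carry an ErrorModel; check before using the data
        if (!result || result.Error) {
            showError(result ? result.Error : "No data returned.");
            return;
        }
        render(result);
    },
    error: function () {
        // HTTP-level failures: 404, 500, timeouts, etc.
        showError("The request failed.");
    }
});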

    Read the article

  • Securing an ADF Application using OES11g: Part 2

    - by user12587121
To validate the integration with OES we need a sample ADF application that is rich enough to allow us to test securing the various ADF elements. To achieve this we can add some items, including bounded task flows, to the application developed in this tutorial. A sample JDeveloper 11.1.1.6 project is available here. It depends on the Fusion Order Demo (FOD) database schema, which is easily created using the FOD build scripts. In the deployment we have chosen to enable only ADF Authentication, as we will delegate Authorization, mostly, to OES.

The welcome page of the application with all the links exposed looks as follows:

The Welcome, Browse Products, Browse Stock and System Administration links go to pages, while the Supplier Registration and Update Stock links launch bounded task flows. The Login link goes to a basic login page, and once logged in, a link is presented that goes to a logout page. Only the Browse Products and Browse Stock pages are really connected to the database; the other pages and task flows do not really perform any operations on the database.

Required Security Policies

We make use of a set of test users and roles as described on the welcome page of the application. In order to exercise the different authorization possibilities we would like to enforce the following sample policies:

1. Anonymous users can see the Login, Welcome and Supplier Registration links. They can also see the Welcome page, the Login page, and follow the Supplier Registration task flow. They can see the icon adjacent to the Login link indicating whether they have logged in or not.
2. Authenticated users can see the Browse Products page.
3. Only staff granted the right can see the Browse Products page's cost price value returned from the database, and then only if the value is below a configurable limit.
4. Suppliers and staff can see the Browse Stock links and pages. Customers cannot.
5. Suppliers can see the Update Stock link, but only those with the update permission are allowed to follow the task flow that it launches. We could hide the link, but we leave it exposed here so we can easily demonstrate the method call activity protecting the task flow.
6. Only staff granted the right can see the System Administration link and the System Administration page it accesses.

Implementing the required policies

In order to secure the application we will make use of the following techniques:

EL expressions and Java backing beans: JSF has the notion of EL expressions to reference data from backing Java classes. We use these to control the presentation of links on the navigation page so that they respect the security constraints; a user will not see links that he is not allowed to click into. These Java backing beans can call on to OES for an authorization decision. Important note: naturally we would configure the WLS domain where our ADF application is running as an OES WLS SM, which would allow us to efficiently query OES over the PEP API. However, versioning conflicts between OES 11.1.1.5 and ADF 11.1.1.6 mean that this is not possible. Nevertheless, we can make use of the OES RESTful gateway technique from this posting in order to call into OES. You can easily create and manage backing beans in JDeveloper as follows:

Custom ADF phase listener: ADF extends the JSF page lifecycle flow and allows one to hook into the flow to intercept page rendering. We use this to put a check prior to rendering any protected pages, again calling on to OES via the backing bean. Phase listeners are configured in the adf-settings.xml file.
See the MyPageListener.java class in the project. Here, for example, is the code we use in the listener to check for allowed access to the sysadmin page, navigating back to the welcome page if authorization is not granted:

if (page != null && (page.equals("/system.jspx") || page.equals("/system"))) {
    System.out.println("MyPageListener: Checking Authorization for /system");
    if (getValue("#{oesBackingBean.UIAccessSysAdmin}").toString().equals("false")) {
        System.out.println("MyPageListener: Forcing navigation away from system to welcome");
        NavigationHandler nh = fc.getApplication().getNavigationHandler();
        nh.handleNavigation(fc, null, "welcome");
    } else {
        System.out.println("MyPageListener: access allowed");
    }
}

Method call activity: our app makes use of bounded task flows to implement the sequences of pages that update the stock or allow suppliers to self-register. ADF takes care of ensuring that a bounded task flow can be entered by only one page. So a way to protect all those pages is to make a call to OES in the first activity and then either exit the task flow or continue, depending on the authorization decision. The method call returns a String which contains the name of the transition to effect. This is where we configure the method call activity in JDeveloper:

We implement each of the policies using the above techniques as follows:

Policies 1 and 2: as these policies concern the coarse-grained notion of controlling access for anonymous and authenticated users, we can make use of the container's security constraints, which can be defined in the web.xml file. The allPages constraint is added automatically when we configure Authentication for the ADF application. We have added the "anonymousss" constraint to allow access to the required pages, task flows and icons:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>anonymousss</web-resource-name>
    <url-pattern>/faces/welcome</url-pattern>
    <url-pattern>/afr/*</url-pattern>
    <url-pattern>/adf/*</url-pattern>
    <url-pattern>/key.png</url-pattern>
    <url-pattern>/faces/supplier-reg-btf/*</url-pattern>
    <url-pattern>/faces/supplier_register_complete</url-pattern>
  </web-resource-collection>
</security-constraint>

Policy 3: we can place an EL expression on the element representing the cost price on the products.jspx page: #{oesBackingBean.dataAccessCostPrice}. This EL expression references a method in a Java backing bean that will call on to OES for an authorization decision. In OES we model the authorization requirement by requiring the view permission on the resource /MyADFApp/data/costprice and granting it only to the staff application role. We recover any obligations to determine the limit.

Policy 4: is implemented by putting an EL expression on the Browse Stock link, #{oesBackingBean.UIAccessBrowseStock}, which checks for the view permission on the /MyADFApp/ui/stock resource. The stock.jspx page is protected by checking for the same permission in a custom phase listener; if the required permission is not satisfied then we force navigation back to the welcome page.
Policy 5: the Update Stock link is protected with the same EL expression as the Browse Stock link: #{oesBackingBean.UIAccessBrowseStock}. However, the Update Stock link launches a bounded task flow, and to protect it the first activity in the flow is a method call activity which executes an EL expression, #{oesBackingBean.isUIAccessSupplierUpdateTransition}, to check for the update permission on the /MyADFApp/ui/stock resource, and then either transitions to the next step in the flow or terminates the flow with an authorization error.

Policy 6: the System Administration link is protected with an EL expression, #{oesBackingBean.UIAccessSysAdmin}, that checks for view access on the /MyADF/ui/sysadmin resource. The system page is protected in the same way as the stock page: the custom phase listener checks for the same permission that protects the link, and if it is not satisfied we navigate back to the welcome page.

Testing the Application

To test the application:

1. Deploy the OES11g Admin to a WLS domain.
2. Deploy the OES gateway in another domain configured to be a WLS SM. You must ensure that the jps-config.xml file therein is configured to allow access to the identity store, otherwise the gateway will not be able to resolve the principals for the requested users. To do this, ensure that the following elements appear in the jps-config.xml file:

<serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
  <description>LDAP-based IdentityStore Provider</description>
</serviceProvider>

<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
  <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
</serviceInstance>

<serviceInstanceRef ref="idstore.ldap"/>

3. Download the sample application, change the URL to the gateway in the MyADFApp OESBackingBean code to point to the OES Gateway, and deploy the application to an 11.1.1.6 WLS domain that has been extended with the ADF JRF files. You will need to configure the FOD database connection to point to your database containing the FOD schema.
4. Populate the OES Admin and OES Gateway WLS LDAP stores with the sample set of users and groups. If you have configured the WLS domains to point to the same LDAP, this only has to be done once. To help with this, there is a directory called ldap_scripts in the sample project with ldif files for the test users and groups.
5. Start the OES Admin console, configure the required OES authorization policies for the MyADFApp application, and push them to the WLS SM containing the OES Gateway.
6. Login to the MyADFApp as each of the users described on the login page to test that the security policy is correct. You will see informative logging from the OES Gateway and the ADF application to their respective WLS consoles.

Congratulations, you may now login to the OES Admin console and change policies that will control the behaviour of your ADF application: change the limit value in the obligation for the cost price, for example, or define Role Mapping policies to determine staff access to the system administration page based on user profile attributes.
ADF Development Notes

Some notes on ADF development, covering what are probably typical gotchas:

You may need this flag on WLS startup in order to allow overwriting credentials for the database; the signal here is an error when trying to access the database: -Djps.app.credential.overwrite.allowed=true

It is best to call bounded task flows via a CommandLink (as opposed to a go link), as you cannot seem to start them again from a go link, even having completed the task flow correctly with a return activity.

Once a bounded task flow (BTF) is initiated it must complete correctly via a return activity; attempting to click on any other link whilst in the context of a BTF has no effect. See here for example:

When using the ADF Authentication only security approach it seems to be awkward to allow anonymous access to the welcome and registration pages. We can achieve anonymous access using the web.xml security constraint shown above (where no auth-constraint is specified); however, it is not clear what needs to be listed in there. For example, the /afr/* and /adf/* entries are in there by trial and error, as sometimes the welcome page will not render if we omit them. I was not able to use the default allPages constraint with, for example, the anonymous-role or the everyone WLS group in order to allow anonymous access to pages.

The ADF security best practice advises placing all pages under the public_html/WEB-INF folder, as then ADF will not allow any direct access to the .jspx pages but will only allow access via a link of the form /faces/welcome rather than /faces/welcome.jspx. This seems like a very good practice to follow, as having multiple entry points to data is a source of confusion in a web application (particularly from a security point of view).

In Authentication+Authorization mode, only pages with a page definition file are protected. In order to add an empty one, right-click the page and choose Go to Page Definition. This will create an empty page definition, and now the page will require explicit permission to be seen.

It is advisable to give the application a unique context root via weblogic.xml, as otherwise it will clash with any other application with the same context root and it will not deploy.
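To make the EL wiring concrete, a sketch of how one of the backing-bean checks above might be attached to a page component (illustrative markup only, not taken from the sample project):

<af:commandLink text="System Administration"
                action="sysadmin"
                rendered="#{oesBackingBean.UIAccessSysAdmin}"/>

With rendered bound to the backing-bean check, users without the view permission simply never see the link, while the phase listener still protects the page itself against direct URL access.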

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >