Search Results

Search found 17378 results on 696 pages for 'remote x session'.


  • How can I find a file on a remote machine and copy it to my local machine?

    - by programmerist
    I am trying to write some code for the following scenario. I have two machines: computer A and computer B. My local machine is computer A, and it also has a SQL database; with a "select * from" query I can look up a PatientID, for example PatientID 123456. The patient's picture files, however, live on computer B, and each picture's file name is equal to the PatientID. Computer B's ports are open to me and I can connect to it on port 51124. How can I retrieve the files (which are pictures) from machine B over that port? This is a Windows application, and I don't know the files' path on B, so my program must find it automatically.
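
    A rough illustration of the general approach (not an answer from the original thread): if the service listening on computer B's port 51124 returns a file's bytes when it is sent the PatientID, the client side could look roughly like the sketch below. The host name, the request format and the idea that B streams the file back until it closes the connection are all assumptions; whatever actually runs on port 51124 would dictate the details.

      import java.io.FileOutputStream;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.Socket;
      import java.nio.charset.StandardCharsets;

      public class PatientImageClient {
          public static void main(String[] args) throws Exception {
              String patientId = "123456";                           // looked up from the SQL database
              try (Socket socket = new Socket("computer-b", 51124);  // hypothetical host name for machine B
                   OutputStream out = socket.getOutputStream();
                   InputStream in = socket.getInputStream();
                   FileOutputStream file = new FileOutputStream(patientId + ".jpg")) {
                  // Ask the service on B for the picture named after the PatientID
                  // (the request format is an assumption).
                  out.write((patientId + "\n").getBytes(StandardCharsets.UTF_8));
                  out.flush();
                  // Copy whatever B sends back straight into a local file.
                  byte[] buffer = new byte[8192];
                  int read;
                  while ((read = in.read(buffer)) != -1) {
                      file.write(buffer, 0, read);
                  }
              }
          }
      }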

    Read the article

  • Is it possible to access a remote SWF without needing to download it?

    - by Trap
    We have a very large SWF hosted on a website which is just a repository of hundreds of movies/symbols. We load this SWF at startup with the Loader class, but at that moment the client will only access a very few of them. Is it possible to tell the loader to download symbols only when they are needed (as with applicationDomain.getDefinition(...))? Thanks in advance.

    Read the article

  • PHP - login to a remote server, through my own server, with HTTPS, cookies and proxy, and downloading the HTML

    - by Yunga Mohani
    Hello, what I am trying to do is this: log in to the other server from a PHP script on my own server (either with my username and password, or with my cookies) and then get access to the page I want to display/download. In other words, I want to write a PHP script, located on my own server, that automatically logs in to another server which uses HTTPS and a web form for login; after the login I have access to the page I am trying to download. I don't know whether it is possible to log in and download the HTML using only the cookies I already have in my browser from a previous login, or whether I need to perform the login in my PHP script through some HTTPS login method. Can I do any of this with cURL or fsockopen, or what would be the best way to accomplish this? Thanks in advance!
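
    The question is about PHP's cURL, but the flow itself is easy to sketch; here it is in Java purely to illustrate the shape of it (log in once over HTTPS with a form POST, keep the cookies the server sets, then fetch the protected page with the same client). The URLs and form field names below are made up; in PHP the equivalent would be curl_init() with CURLOPT_COOKIEJAR/CURLOPT_COOKIEFILE and a POST of the same form fields.

      import java.net.CookieManager;
      import java.net.URI;
      import java.net.URLEncoder;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.nio.charset.StandardCharsets;

      public class FormLoginSketch {
          public static void main(String[] args) throws Exception {
              // One client with a cookie store, so the session cookie from the login
              // is sent automatically on the second request.
              HttpClient client = HttpClient.newBuilder()
                      .cookieHandler(new CookieManager())
                      .build();

              // Field names and URLs are placeholders -- use whatever the remote login form expects.
              String form = "username=" + URLEncoder.encode("me", StandardCharsets.UTF_8)
                          + "&password=" + URLEncoder.encode("secret", StandardCharsets.UTF_8);
              HttpRequest login = HttpRequest.newBuilder()
                      .uri(URI.create("https://remote.example.com/login"))
                      .header("Content-Type", "application/x-www-form-urlencoded")
                      .POST(HttpRequest.BodyPublishers.ofString(form))
                      .build();
              client.send(login, HttpResponse.BodyHandlers.discarding());

              // Now the protected page can be downloaded in the same session.
              HttpRequest page = HttpRequest.newBuilder()
                      .uri(URI.create("https://remote.example.com/protected.html"))
                      .build();
              HttpResponse<String> response = client.send(page, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.body());
          }
      }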

    Read the article

  • Know more about Cache Buffer Handle

    - by Liu Maclean
    ??????«latch free:cache buffer handles???SQL????»?????cache buffer handle latch?????,?????????: “?????pin?buffer header???????buffer handle,??buffer handle?????????cache buffer handles?,??????cache buffer handles??????,???????cache???buffer handles,?????(reserved set)?????????????_db_handles_cached(???5)???,?????????????????SQL??????????????????????,????pin??????,????????handle,?????????5?cached buffer handles???handle????????????????,Oracle?????????????????pin?”????“?buffer,????????????????handle???db_block_buffers/processes,????_cursor_db_buffers_pinned???????cache buffer handles?????,??????,????????????SQL,????cache?buffer handles?????????,??????????????,???????????/?????” ????T.ASKMACLEAN.COM????,??????cache Buffer handle?????: cache buffer handle ??: ------------------------------ | Buffer state object | ------------------------------ | Place to hang the buffer | ------------------------------ | Consistent Get? | ------------------------------ | Proc Owning SO | ------------------------------ | Flags(RIR) | ------------------------------ ???? cache buffer handle SO: 70000046fdfe530, type: 24, owner: 70000041b018630, flag: INIT/-/-/0×00(buffer) (CR) PR: 70000048e92d148 FLG: 0×500000lock rls: 0, class bit: 0kcbbfbp: [BH: 7000001c7f069b0, LINK: 70000046fdfe570]where: kdswh02: kdsgrp, why: 0BH (7000001c7f069b0) file#: 12 rdba: 0×03061612 (12/398866) class: 1 ba: 7000001c70ee000set: 75 blksize: 8192 bsi: 0 set-flg: 0 pwbcnt: 0dbwrid: 2 obj: 66209 objn: 48710 tsn: 6 afn: 12hash: [700000485f12138,700000485f12138] lru: [70000025af67790,700000132f69ee0]lru-flags: hot_bufferckptq: [NULL] fileq: [NULL] objq: [700000114f5dd10,70000028bf5d620]use: [70000046fdfe570,70000046fdfe570] wait: [NULL]st: SCURRENT md: SHR tch: 0flags: affinity_lockLRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [65535]where: kdswh02: kdsgrp, why: 0 # Example:#   (buffer) (CR) PR: 37290 FLG:    0#   kcbbfbp    : [BH: befd8, LINK: 7836c] (WAITING) Buffer handle (X$KCBBF) kernel cache, buffer buffer_handles Query x$kcbbf  – lists all the buffer handles ???? _db_handles             System-wide simultaneous buffer operations ,no of buffer handles_db_handles_cached      Buffer handles cached each process , no of processes  default 5_cursor_db_buffers_pinned  additional number of buffers a cursor can pin at once_session_kept_cursor_pins       Number of cursors pins to keep in a session When a buffer is pinned it is attached to buffer state object. ??? ???????? cache buffer handles latch ? buffer pin???: SESSION A : SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> create table test_cbc_handle(t1 int); Table created. SQL> insert into test_cbc_handle values(1); 1 row created. SQL> commit; Commit complete. 
SQL> select rowid from test_cbc_handle; ROWID ------------------ AAANO6AABAAAQZSAAA SQL> select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA';         T1 ----------          1 SQL> select addr,name from v$latch_parent where name='cache buffer handles'; ADDR             NAME ---------------- -------------------------------------------------- 00000000600140A8 cache buffer handles SQL> select to_number('00000000600140A8','xxxxxxxxxxxxxxxxxxxx') from dual; TO_NUMBER('00000000600140A8','XXXXXXXXXXXXXXXXXXXX') ----------------------------------------------------                                           1610694824 ??cache buffer handles????parent latch ??? child latch ???SESSION A hold ??????cache buffer handles parent latch ???? oradebug call kslgetl ??, kslgetl?oracle??get latch??? SQL> oradebug setmypid; Statement processed. SQL> oradebug call kslgetl 1610694824 1; Function returned 1 ?????SESSION B ???: SQL> select * from v$latchholder;        PID        SID LADDR            NAME                                                                   GETS ---------- ---------- ---------------- ---------------------------------------------------------------- ----------         15        141 00000000600140A8 cache buffer handles                                                    119 cache buffer handles latch ???session A hold??,????????acquire cache buffer handle latch SQL> select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA';         T1 ----------          1 ?????Server Process?????? read buffer, ????????"_db_handles_cached", ??process?cache 5? cache buffer handle ??"_db_handles_cached"=0,?process????5????cache buffer handle , ???? process ???pin buffer,???hold cache buffer handle latch??????cache buffer handle SQL> alter system set "_db_handles_cached"=0 scope=spfile; System altered. ????? shutdown immediate; startup; session A: SQL> oradebug setmypid; Statement processed. SQL> oradebug call kslgetl 1610694824 1; Function returned 1 session B: select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA'; session B hang!! WHY? SQL> oradebug setmypid; Statement processed. SQL> oradebug dump systemstate 266; Statement processed.   
SO: 0x11b30b7b0, type: 2, owner: (nil), flag: INIT/-/-/0x00   (process) Oracle pid=22, calls cur/top: (nil)/0x11b453c38, flag: (0) -             int error: 0, call error: 0, sess error: 0, txn error 0   (post info) last post received: 0 0 0               last post received-location: No post               last process to post me: none               last post sent: 0 0 0               last post sent-location: No post               last process posted by me: none     (latch info) wait_event=0 bits=8       holding    (efd=4) 600140a8 cache buffer handles level=3   SO: 0x11b305810, type: 2, owner: (nil), flag: INIT/-/-/0x00   (process) Oracle pid=10, calls cur/top: 0x11b455ac0/0x11b450a58, flag: (0) -             int error: 0, call error: 0, sess error: 0, txn error 0   (post info) last post received: 0 0 0               last post received-location: No post               last process to post me: none               last post sent: 0 0 0               last post sent-location: No post               last process posted by me: none     (latch info) wait_event=0 bits=2         Location from where call was made: kcbzgs:       waiting for 600140a8 cache buffer handles level=3 FBD93353:000019F0    10   162 10005   1 KSL WAIT BEG [latch: cache buffer handles] 1610694824/0x600140a8 125/0x7d 0/0x0 FF936584:00002761    10   144 10005   1 KSL WAIT BEG [latch: cache buffer handles] 1610694824/0x600140a8 125/0x7d 0/0x0 PID=22 holding ??cache buffer handles latch PID=10 ?? cache buffer handles latch, ????"_db_handles_cached"=0 ?? process??????cache buffer handles ??systemstate???? kcbbfbp cache buffer handle??, ?? "_db_handles_cached"=0 ? cache buffer handles latch?hold ?? ????cache buffer handles latch , ??? buffer?pin?????????? session A exit session B: SQL> select * from v$latchholder; no rows selected SQL> insert into test_cbc_handle values(2); 1 row created. SQL> commit; Commit complete. SQL> SQL> select t1,rowid from test_cbc_handle;         T1 ROWID ---------- ------------------          1 AAANPAAABAAAQZSAAA          2 AAANPAAABAAAQZSAAB SQL> select spid,pid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat)); SPID                PID ------------ ---------- 19251                10 ? GDB ? SPID=19215 ?debug , ?? kcbrls ????breakpoint ??? ????release buffer [oracle@vrh8 ~]$ gdb $ORACLE_HOME/bin/oracle 19251 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-37.el5) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /s01/oracle/product/10.2.0.5/db_1/bin/oracle...(no debugging symbols found)...done. Attaching to program: /s01/oracle/product/10.2.0.5/db_1/bin/oracle, process 19251 Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libskgxp10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libskgxp10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libhasgen10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libhasgen10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libskgxn2.so...(no debugging symbols found)...done. 
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libskgxn2.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocr10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocr10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocrb10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocrb10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocrutl10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocrutl10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libjox10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libjox10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libclsra10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libclsra10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libdbcfg10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libdbcfg10.so Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libnnz10.so...(no debugging symbols found)...done. Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libnnz10.so Reading symbols from /usr/lib64/libaio.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libaio.so.1 Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libm.so.6 Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libnsl.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libnsl.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libnss_files.so.2 0x00000035c000d940 in __read_nocancel () from /lib64/libpthread.so.0 (gdb) break kcbrls Breakpoint 1 at 0x10e5d24 session B: select * from test_cbc_handle where rowid='AAANPAAABAAAQZSAAA'; select hang !! GDB (gdb) c Continuing. Breakpoint 1, 0x00000000010e5d24 in kcbrls () (gdb) bt #0  0x00000000010e5d24 in kcbrls () #1  0x0000000002e87d25 in qertbFetchByUserRowID () #2  0x00000000030c62b8 in opifch2 () #3  0x00000000032327f0 in kpoal8 () #4  0x00000000013b7c10 in opiodr () #5  0x0000000003c3c9da in ttcpip () #6  0x00000000013b3144 in opitsk () #7  0x00000000013b60ec in opiino () #8  0x00000000013b7c10 in opiodr () #9  0x00000000013a92f8 in opidrv () #10 0x0000000001fa3936 in sou2o () #11 0x000000000072d40b in opimai_real () #12 0x000000000072d35c in main () SQL> oradebug setmypid; Statement processed. SQL> oradebug dump systemstate 266; Statement processed. ?????? kcbbfbp buffer cache handle ?  SO state object ? BH BUFFER HEADER  link???     
----------------------------------------     SO: 0x11b452348, type: 3, owner: 0x11b305810, flag: INIT/-/-/0x00     (call) sess: cur 11b41bd18, rec 0, usr 11b41bd18; depth: 0       ----------------------------------------       SO: 0x1182dc750, type: 24, owner: 0x11b452348, flag: INIT/-/-/0x00       (buffer) (CR) PR: 0x11b305810 FLG: 0x108000       class bit: (nil)       kcbbfbp: [BH: 0xf2fc69f8, LINK: 0x1182dc790]       where: kdswh05: kdsgrp, why: 0       BH (0xf2fc69f8) file#: 1 rdba: 0x00410652 (1/67154) class: 1 ba: 0xf297c000         set: 3 blksize: 8192 bsi: 0 set-flg: 2 pwbcnt: 272         dbwrid: 0 obj: 54208 objn: 54202 tsn: 0 afn: 1         hash: [f2fc47f8,1181f3038] lru: [f2fc6b88,f2fc6968]         obj-flags: object_ckpt_list         ckptq: [1182ecf38,1182ecf38] fileq: [1182ecf58,1182ecf58] objq: [108712a28,108712a28]         use: [1182dc790,1182dc790] wait: [NULL]         st: XCURRENT md: SHR tch: 12         flags: buffer_dirty gotten_in_current_mode block_written_once                 redo_since_read         LRBA: [0xc7.73b.0] HSCN: [0x0.1cbe52] HSUB: [1]         Using State Objects           ----------------------------------------           SO: 0x1182dc750, type: 24, owner: 0x11b452348, flag: INIT/-/-/0x00           (buffer) (CR) PR: 0x11b305810 FLG: 0x108000           class bit: (nil)           kcbbfbp: [BH: 0xf2fc69f8, LINK: 0x1182dc790]           where: kdswh05: kdsgrp, why: 0         buffer tsn: 0 rdba: 0x00410652 (1/67154)         scn: 0x0000.001cbe52 seq: 0x01 flg: 0x02 tail: 0xbe520601         frmt: 0x02 chkval: 0x0000 type: 0x06=trans data tab 0, row 0, @0x1f9a tl: 6 fb: --H-FL-- lb: 0x0  cc: 1 col  0: [ 2]  c1 02 tab 0, row 1, @0x1f94 tl: 6 fb: --H-FL-- lb: 0x2  cc: 1 col  0: [ 2]  c1 15 end_of_block_dump         (buffer) (CR) PR: 0x11b305810 FLG: 0x108000 st: XCURRENT md: SHR tch: 12 ? buffer header?status= XCURRENT mode=KCBMSHARE KCBMSHR     current share ?????  x$kcbbf ????? cache buffer handle SQL> select distinct KCBBPBH from  x$kcbbf ; KCBBPBH ---------------- 00 00000000F2FC69F8            ==>0xf2fc69f8 SQL> select * from x$kcbbf where kcbbpbh='00000000F2FC69F8'; ADDR                   INDX    INST_ID KCBBFSO_TYP KCBBFSO_FLG KCBBFSO_OWN ---------------- ---------- ---------- ----------- ----------- ----------------   KCBBFFLG    KCBBFCR    KCBBFCM KCBBFMBR         KCBBPBH ---------- ---------- ---------- ---------------- ---------------- KCBBPBF          X0KCBBPBH        X0KCBBPBF        X1KCBBPBH ---------------- ---------------- ---------------- ---------------- X1KCBBPBF        KCBBFBH            KCBBFWHR   KCBBFWHY ---------------- ---------------- ---------- ---------- 00000001182DC750        748          1          24           1 000000011B452348    1081344          1          0 00               00000000F2FC69F8 00000001182DC750 00               00000001182DC750 00 00000001182DC7F8 00                      583          0 SQL> desc x$kcbbf;  Name                                      Null?    
Type  ----------------------------------------- -------- ----------------------------  ADDR                                               RAW(8)  INDX                                               NUMBER  INST_ID                                            NUMBER  KCBBFSO_TYP                                        NUMBER  KCBBFSO_FLG                                        NUMBER  KCBBFSO_OWN                                        RAW(8)  KCBBFFLG                                           NUMBER  KCBBFCR                                            NUMBER  KCBBFCM                                            NUMBER  KCBBFMBR                                           RAW(8)  KCBBPBH                                            RAW(8)  KCBBPBF                                            RAW(8)  X0KCBBPBH                                          RAW(8)  X0KCBBPBF                                          RAW(8)  X1KCBBPBH                                          RAW(8)  X1KCBBPBF                                          RAW(8)  KCBBFBH                                            RAW(8)  KCBBFWHR                                           NUMBER  KCBBFWHY                                           NUMBER gdb ?? ?process??????kcbrls release buffer? ???cache buffer handle??? SQL> select distinct KCBBPBH from  x$kcbbf ; KCBBPBH ---------------- 00

    Read the article

  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL. The background: The Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate as the servers do not have public routable IP addresses or DNS names. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL. \Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed" which I can live with. I don't have enough control over the end-users' machines to ask them to install a new root certificate either. TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, Welcome is shown again and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP. What I have tried: I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process. Specifically, when the RDP client is saying "initialising connection" - this is now much slower. Quite apart from the fact that my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just remove the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another. 
I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also by-passed the round-robin DNS to see if it had any impact and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide I tried changing to using a dedicated redirector. Basically, rather than using a round-robin DNS, I pointed my DNS to the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.

    Read the article

  • How do you handle authentication across domains?

    - by William Ratcliff
    I'm trying to save users of our services from having to have multiple accounts/passwords. I'm in a large organization and there's one group that handles part of user authentication for users who are from outside the facility (primarily for administrative functions). They store a secure cookie to establish a session and communicate only via HTTPS via the browser. Sessions expire either through: 1) explicit logout of the user 2) Inactivity 3) Browser closes My team is trying to write a web application to help users analyze data that they've taken (or are currently taking) while at our facility. We need to determine if a user is 1) authenticated 2) Some identifier for that user so we can store state for them (what analysis they are working on, etc.) So, the problem is how do you authenticate across domains (the authentication server for the other application lives in a border region between public and private--we will live in the public region). We have come up with some scenarios and I'd like advice about what is best practice, or if there is one we haven't considered. Let's start with the case where the user is authenticated with the authentication server. 1) The authentication server leaves a public cookie in the browser with their primary key for a user. If this is deemed sensitive, they encrypt it on their server and we have the key to decrypt it on our server. When the user visits our site, we check for this public cookie. We extract the user_id and use a public api for the authentication server to request if the user is logged in. If they are, they send us a response with: response={ userid :we can then map this to our own user ids. If necessary, we can request additional information such as email-address/display name once (to notify them if long running jobs are done, or to share results with other people, like with google_docs). account_is_active:Make sure that the account is still valid session_is_active: Is their session still active? If we query this for a valid user, this will have a side effect that we will reset the last_time_session_activated value and thus prolong their session with the authentication server last_time_session_activated: let us know how much time they have left ip_address_session_started_from:make sure the person at our site is coming from the same ip as they started the session at } Given this response, we either accept them as authenticated and move on with our app, or redirect them to the login page for the authentication server (question: if we give an encrypted portion of the response (signed by us) with the page to redirect them to, do we open any gaping security holes in the authentication server)? The flaw that we've found with this is that if the user visits evilsite.com and they look at the session cookie and send a query to the public api of the authentication server, they can keep the session alive and if our original user leaves the machine without logging out, then the next user will be able to access their session (this was possible before, but having the session alive eternally makes this worse). 2) The authentication server redirects all requests made to our domain to us and we send responses back through them to the user. Essentially, they act as a proxy. The advantage of this is that we can handshake with the authentication server, so it's safe to be trusted with the email address/name of the user and they don't have to reenter it So, if the user tries to go to: authentication_site/mysite_page1 they are redirected to mysite. 
Which would you choose, or is there a better way? The goal is to minimize the "Yet Another Password/Yet another username" problem... Thanks!!!!
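
    For what it is worth, option 1 boils down to a server-to-server call that can be sketched in a few lines. Everything here (the endpoint, the parameter name, the JSON field names) is hypothetical and taken only from the description above, and a real implementation would use a proper JSON parser and also verify the account_is_active and ip_address_session_started_from fields before trusting the result.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      public class SessionCheckSketch {
          // userId comes from the (decrypted) public cookie set by the authentication server.
          static boolean isSessionActive(String userId) throws Exception {
              HttpClient client = HttpClient.newHttpClient();
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("https://auth.example.org/api/session?userid=" + userId))
                      .build();
              HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

              // Naive check for illustration only; real code would parse the JSON response.
              return response.statusCode() == 200
                      && response.body().contains("\"session_is_active\": true");
          }

          public static void main(String[] args) throws Exception {
              System.out.println(isSessionActive("12345") ? "authenticated" : "redirect to login");
          }
      }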

    Read the article

  • Has this server been compromised?

    - by Griffo
    A friend is running a VPS (CentOS) His business partner was the sysadmin but has left him high and dry to look after the system. So, I've been asked to help out in fixing an apparent spam problem. His IP address got blacklisted for unsolicited mail. I'm not sure where to look for a problem, but I started with netstat to see what open connections were running. It looks to me like he has remote hosts connected to his SMTP server. Here's the output: Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 78.153.208.195:imap 86-40-60-183-dynamic.:10029 ESTABLISHED tcp 0 0 78.153.208.195:imap 86-40-60-183-dynamic.:10010 ESTABLISHED tcp 0 1 78.153.208.195:35563 news.avanport.pt:smtp SYN_SENT tcp 0 0 78.153.208.195:35559 vip-us-br-mx.terra.com:smtp TIME_WAIT tcp 0 0 78.153.208.195:35560 vip-us-br-mx.terra.com:smtp TIME_WAIT tcp 1 1 78.153.208.195:imaps 86-40-60-183-dynamic.:11647 CLOSING tcp 1 1 78.153.208.195:imaps 86-40-60-183-dynamic.:11645 CLOSING tcp 0 0 78.153.208.195:35562 mx.a.locaweb.com.br:smtp TIME_WAIT tcp 0 0 78.153.208.195:35561 mx.a.locaweb.com.br:smtp TIME_WAIT tcp 0 0 78.153.208.195:imap 86-41-8-64-dynamic.b-:49446 ESTABLISHED Does this indicate that his server may be acting as an open relay? Mail should only be outgoing from localhost. Apologies for my lack of knowledge but I don't work on linux in my day job. EDIT: Here's some output from /var/log/maillog which looks like it may be the result of spam. If it appears to be the case to others, where should I look next to investigate a root cause? I put the server IP through www.checkor.com and it came back clean. Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.721674 status: local 0/10 remote 9/20 Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886182 delivery 74116: deferral: 200.147.36.15_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_200.147.36.15./ Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886255 status: local 0/10 remote 8/20 Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898266 delivery 74115: deferral: 187.31.0.11_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_187.31.0.11./ Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898327 status: local 0/10 remote 7/20 Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137833 delivery 74111: deferral: Sorry,_I_wasn't_able_to_establish_an_SMTP_connection._(#4.4.1)/ Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137914 status: local 0/10 remote 6/20 Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903536 delivery 74000: failure: 209.85.143.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[78.153.208.195_______1]_Our_system_has_detected_an_unusual_rate_of/550-5.7.1_unsolicited_mail_originating_from_your_IP_address._To_protect_our/550-5.7.1_users_from_spam,_mail_sent_from_your_IP_address_has_been_blocked./550-5.7.1_Please_visit_http://www.google.com/mail/help/bulk_mail.html_to_review/550_5.7.1_our_Bulk_Email_Senders_Guidelines._e25si1385223wes.137/ Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903606 status: local 0/10 remote 5/20 Jun 29 00:02:19 vps-1001108-595 qmail-queue-handlers[15501]: Handlers Filter before-queue for qmail started ... EDIT #2 Here's the output of netstat -p with the imap and imaps lines removed. 
I also removed my own ssh session Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 1 78.153.208.195:40076 any-in-2015.1e100.net:smtp SYN_SENT 24096/qmail-remote. tcp 0 1 78.153.208.195:40077 any-in-2015.1e100.net:smtp SYN_SENT 24097/qmail-remote. udp 0 0 78.153.208.195:48515 125.64.11.158:4225 ESTABLISHED 20435/httpd

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set a samba server that seems to work, all shares are seemingly exported as readonly, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectiveley. I noted that domain accounts did NOT include a domain and they are in german (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Adminstrator user with the password taken from AD. 
Now I run samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 #IP des Domain Controllers os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user, I want to see somebody WRITING to that directory before I start fiddling with the authentication settings. (Who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED . Accessing it from windows shows ACLs that look correct (The user may write) - but it does not work, I can only read files not write. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my what am I overlooking? What am I to blind to see?

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration. On the local machine: The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5. The site has been tested successfully, using both IIS and the VS Web Development Server. The IIS site config has all authentication methods disabled except Windows Authentication. The local machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account. All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote". I've added a display line to the web page which shows the currently-logged-in user and it shows exactly what I would expect (whichever local user I logged in with). On the remote machine: The server is running Windows Server 2008 R2, IIS 7.5. Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears. The IIS site config has all authentication methods disabled except Windows Authentication. The remote machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address. If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials. The Windows Authentication IIS role feature is installed. The WindowsAuthentication Module is added (at the Server level). The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication. The site does load if I turn off Windows Authentication and enable Anonymous (obviously). I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors and the results don't seem to tell me anything useful, all I see is the following warning: MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 2 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials). 
The trace does indicate that the WindowsAuthentication module is correctly loaded because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here? Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is: Localhost: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:42:34 GMT Content-Length: 6399 Proxy-Support: Session-Based-Authentication Remote: HTTP/1.1 401 Unauthorized Content-Type: text/html Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:43:13 GMT Content-Length: 1293 Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
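
    As an aside, the Fiddler comparison above can be reproduced from the command line with a few lines of code; this is just a convenience for checking both servers from the same machine (the URLs are placeholders), not a fix in itself.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      public class AuthHeaderCheck {
          public static void main(String[] args) throws Exception {
              HttpClient client = HttpClient.newHttpClient();
              for (String url : new String[] {"http://localhost/app/", "http://remote-server/app/"}) {
                  HttpResponse<Void> response = client.send(
                          HttpRequest.newBuilder().uri(URI.create(url)).build(),
                          HttpResponse.BodyHandlers.discarding());
                  // A server with Windows Authentication enabled should answer 401 and offer
                  // Negotiate/NTLM here; an empty list reproduces the problem described above.
                  System.out.println(url + " -> " + response.statusCode() + " "
                          + response.headers().allValues("WWW-Authenticate"));
              }
          }
      }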

    Read the article

  • Does boost::asio make excessive small heap allocations, or am I wrong?

    - by Poni
    #include <cstdlib> #include <iostream> #include <boost/bind.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; class session { public: session(boost::asio::io_service& io_service) : socket_(io_service) { } tcp::socket& socket() { return socket_; } void start() { socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void handle_read(const boost::system::error_code& error, size_t bytes_transferred) { if (!error) { data_[bytes_transferred] = '\0'; if(NULL != strstr(data_, "quit")) { this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both); this->socket().close(); // how to make this dispatch "handle_read()" with a "disconnected" flag? } else { boost::asio::async_write(socket_, boost::asio::buffer(data_, bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error)); socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } } else { delete this; } } void handle_write(const boost::system::error_code& error) { if (!error) { // } else { delete this; } } private: tcp::socket socket_; enum { max_length = 1024 }; char data_[max_length]; }; class server { public: server(boost::asio::io_service& io_service, short port) : io_service_(io_service), acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) { session* new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } void handle_accept(session* new_session, const boost::system::error_code& error) { if (!error) { new_session->start(); new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } else { delete new_session; } } private: boost::asio::io_service& io_service_; tcp::acceptor acceptor_; }; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: async_tcp_echo_server <port>\n"; return 1; } boost::asio::io_service io_service; using namespace std; // For atoi. server s(io_service, atoi(argv[1])); io_service.run(); } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } While experimenting with boost::asio I've noticed that within the calls to async_write()/async_read_some() there is a usage of the C++ "new" keyword. Also, when stressing this echo server with a client (1 connection) that sends for example 100,000 times some data, the memory usage of this program is getting higher and higher. What's going on? Will it allocate memory for every call? Or am I wrong? Asking because it doesn't seem right that a server app will allocate, anything. Can I handle it, say with a memory pool? Another side-question: See the "this-socket().close();" ? I want it, as the comment right to it says, to dispatch that same function one last time, with a disconnection error. Need that to do some clean-up. How do I do that? Thank you all gurus (:

    Read the article

  • Data Source Security Part 3

    - by Steve Felts
    In part one, I introduced the security features and talked about the default behavior. In part two, I defined the two major approaches to security credentials: directly using database credentials and mapping WLS user credentials to database credentials. Now it's time to get down to a couple of the security options (each of which can use database credentials or WLS credentials). Set Client Identifier on Connection When "Set Client Identifier" is enabled on the data source, a client property is associated with the connection. The underlying SQL user remains unchanged for the life of the connection but the client value can change. This information can be used for accounting, auditing, or debugging. The client property is based on either the WebLogic user mapped to a database user using the credential map, or the database user parameter passed directly to the getConnection() method, based on the "use database credentials" setting described earlier. To enable this feature, select "Set Client ID On Connection" in the Console. See "Enable Set Client ID On Connection for a JDBC data source" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableCredentialMapping.html in Oracle WebLogic Server Administration Console Help. The Set Client Identifier feature is only available for use with the Oracle thin driver and the IBM DB2 driver, based on the following interfaces. For pre-Oracle 12c, oracle.jdbc.OracleConnection.setClientIdentifier(client) is used. See http://docs.oracle.com/cd/B28359_01/network.111/b28531/authentication.htm#i1009003 for more information about how to use this for auditing and debugging. You can get the value using getClientIdentifier() from the driver. To get back the value from the database as part of a SQL query, use a statement like the following: "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL". Starting in Oracle 12c, java.sql.Connection.setClientInfo("OCSID.CLIENTID", client) is used. This is a JDBC standard API, although the property values are proprietary. A problem with setClientIdentifier usage is that there are pieces of the Oracle technology stack that set and depend on this value. If application code also sets this value, it can cause problems. This has been addressed with setClientInfo by making use of this method a privileged operation. A well-managed container can restrict the Java security policy grants to specific namespaces and code bases, and protect the container from out-of-control user code. 
When running with the Java security manager, permission must be granted in the Java security policy file for permission "oracle.jdbc.OracleSQLPermission" "clientInfo.OCSID.CLIENTID"; Using the name “OCSID.CLIENTID" allows for upward compatible use of “select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL” or use the JDBC standard API java.sql.getClientInfo(“OCSID.CLIENTID") to retrieve the value. This value in the Oracle USERENV context can be used to drive the Oracle Virtual Private Database (VPD) feature to create security policies to control database access at the row and column level. Essentially, Oracle Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle Virtual Private Database security policy was applied.  See Using Oracle Virtual Private Database to Control Data Access http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm for more information about VPD.  Using this data source feature means that no programming is needed on the WLS side to set this context; it is set and cleared by the WLS data source code. For the IBM DB2 driver, com.ibm.db2.jcc.DB2Connection.setDB2ClientUser(client) is used for older releases (prior to version 9.5).  This specifies the current client user name for the connection. Note that the current client user name can change during a connection (unlike the user).  This value is also available in the CURRENT CLIENT_USERID special register.  You can select it using a statement like “select CURRENT CLIENT_USERID from SYSIBM.SYSTABLES”. When running the IBM DB2 driver with JDBC 4.0 (starting with version 9.5), java.sql.Connection.setClientInfo(“ClientUser”, client) is used.  You can retrieve the value using java.sql.Connection.getClientInfo(“ClientUser”) instead of the DB2 proprietary API (even if set setDB2ClientUser()).  Oracle Proxy Session Oracle proxy authentication allows one JDBC connection to act as a proxy for multiple (serial) light-weight user connections to an Oracle database with the thin driver.  You can configure a WebLogic data source to allow a client to connect to a database through an application server as a proxy user. The client authenticates with the application server and the application server authenticates with the Oracle database. This allows the client's user name to be maintained on the connection with the database. Use the following steps to configure proxy authentication on a connection to an Oracle database. 1. If you have not yet done so, create the necessary database users. 2. On the Oracle database, provide CONNECT THROUGH privileges. For example: SQL> ALTER USER connectionuser GRANT CONNECT THROUGH dbuser; where “connectionuser” is the name of the application user to be authenticated and “dbuser” is an Oracle database user. 3. Create a generic or GridLink data source and set the user to the value of dbuser. 4a. To use WLS credentials, create an entry in the credential map that maps the value of wlsuser to the value of dbuser, as described earlier.   4b. To use database credentials, enable “Use Database Credentials”, as described earlier. 5. Enable Oracle Proxy Authentication, see "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. 6. Log on to a WebLogic Server instance using the value of wlsuser or dbuser. 6. Get a connection using getConnection(username, password).  
The credentials are based on either the WebLogic user that is mapped to a database user or the database user directly, based on the “use database credentials” setting.  You can see the current user and proxy user by executing: “select user, sys_context('USERENV','PROXY_USER') from DUAL". Note: getConnection fails if “Use Database Credentials” is not enabled and the value of the user/password is not valid for a WebLogic Server user.  Conversely, it fails if “Use Database Credentials” is enabled and the value of the user/password is not valid for a database user. A proxy session is opened on the connection based on the user each time a connection request is made on the pool. The proxy session is closed when the connection is returned to the pool.  Opening or closing a proxy session has the following impact on JDBC objects. - Closes any existing statements (including result sets) from the original connection. - Clears the WebLogic Server statement cache. - Clears the client identifier, if set. -The WebLogic Server test statement for a connection is recreated for every proxy session. These behaviors may impact applications that share a connection across instances and expect some state to be associated with the connection. Oracle proxy session is also implicitly enabled when use-database-credentials is enabled and getConnection(user, password) is called,starting in WLS Release 10.3.6.  Remember that this only works when using the Oracle thin driver. To summarize, the definition of oracle-proxy-session is as follows. - If proxy authentication is enabled and identity based pooling is also enabled, it is an error. - If a user is specified on getConnection() and identity-based-connection-pooling-enabled is false, then oracle-proxy-session is treated as true implicitly (it can also be explicitly true). - If a user is specified on getConnection() and identity-based-connection-pooling-enabled is true, then oracle-proxy-session is treated as false.
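
    To make the client-identifier mechanics above concrete, here is a minimal standalone JDBC sketch of the round trip described in this post. It is illustrative only: a WLS data source normally sets and clears the value itself, and the connection URL, credentials, the "wlsuser" tag and the Oracle thin driver being on the classpath are all assumptions.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class ClientIdDemo {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//dbhost:1521/orcl", "dbuser", "dbpassword")) {
                  // Tag the connection with the end-user identity (WLS derives this from the
                  // credential map or from the getConnection(user, password) arguments).
                  conn.setClientInfo("OCSID.CLIENTID", "wlsuser");

                  try (Statement stmt = conn.createStatement();
                       ResultSet rs = stmt.executeQuery(
                           "select sys_context('USERENV','CLIENT_IDENTIFIER'), " +
                           "       sys_context('USERENV','PROXY_USER') from DUAL")) {
                      if (rs.next()) {
                          System.out.println("client identifier: " + rs.getString(1));
                          // PROXY_USER is only populated when an Oracle proxy session is open.
                          System.out.println("proxy user       : " + rs.getString(2));
                      }
                  }
                  // The same value can be read back through the standard API.
                  System.out.println(conn.getClientInfo("OCSID.CLIENTID"));
              }
          }
      }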

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!And you could be there courtesy of UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must see seminar   You can also register for the seminar yourself at:www.regonline.co.uk/kimtrippsql More information about the seminar   Where: Radisson Edwardian Heathrow Hotel, London When: Thursday 17th June 2010 This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:·         Debunk many of the ingrained misconceptions around SQL Server's behaviour   ·         Show you disaster recovery techniques critical to preserving your company's life-blood - the data   ·         Explain how a common application design pattern can wreak havoc in the database ·         Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to changeSessions AbstractsKEYNOTE: Bridging the Gap Between Development and Production  Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.  SESSION ONE: SQL Server MythbustersIt's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. 
In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION 4: Essential Database MaintenanceIn this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.    Speaker Biographies     Paul S.Randal  Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.  

    Read the article

  • SQL Server Master class winner

    - by Testas
     The winner of the SQL Server MasterClass competition courtesy of the UK SQL Server User Group and SQL Server Magazine!    Steve Hindmarsh     There is still time to register for the seminar yourself at:  www.regonline.co.uk/kimtrippsql     More information about the seminar     Where: Radisson Edwardian Heathrow Hotel, London  When: Thursday 17th June 2010  This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: Debunk many of the ingrained misconceptions around SQL Server's behaviour    Show you disaster recovery techniques critical to preserving your company's life-blood - the data    Explain how a common application design pattern can wreak havoc in the database Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change  Sessions Abstracts  KEYNOTE: Bridging the Gap Between Development and Production    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.   SESSION ONE: SQL Server Mythbusters  It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.   SESSION TWO: Database Recovery Techniques Demo-Fest  Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!   
SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward   Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!   SESSION 4: Essential Database Maintenance  In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well. Speaker Biographies     Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.   Speaker Testimonials  "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process." "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional." "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. 
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • DevConnections Spring 2010 Speaker Evals and Tips

    As a conference speaker, I always look forward to hearing from attendees whether they felt my sessions were valuable and worth their time.  Its always gratifying  get a high score, but of course its the (preferably constructive) criticism thats key to continued improvement.  Im by no means the best technical presenter around, and Im always looking for ways to improve. Ive recently spoken at a few events, including TechEd and an Ohio event called Stir Trek.  DevConnections was actually back in April, but theyre just getting their final evals out to speakers.  TechEd, of course, does online evals so immediately after your talks you can see what people think.  Ill try and post my TechEd evals in the next week or so. I gave 3 talks at DevConnections Spring 2010 / VS2010 Launch which I discussed in this previous blog post.  In this follow-up, Im just going to share some eval info and my thoughts on it, albeit a couple of months later. Pragmatic ASP.NET Tips, Tricks, and Tools Evals Turned In: 27 Overall Eval: 3.74 Average Score: 3.47 89% found the technical level Just Right.  7.4% thought it was too basic (3.6% did not respond).  Since nobody thought the content was Too complex, I could perhaps have added some more complex material, but having about 90% say its Just Right is pretty good. 92% said at least 50% of the material was new to them.  36% said 75% or more was new.  Thats also pretty good, I think. 77.8% can use the information immediately; 15% can use it within 2-6 months (7.2 % no response). Overall 78% rated the session Excellent, 18% Good, 4% Fair. All comments (9): Steve did a great job Excellent session! It was good. Im now super excited to attend Steves other sessions later today.  Very useful. One of the best speakers here.  Bring him back to future conferences please. Continue to have this session with new and old stuff.  I always find something I did not know about. Excellent!  This was the best session Ive seen all week. Did not increase font on all pages could not see. For Steve to have had more sessions. Note to self make the fonts bigger across the board.  Otherwise, this is all good for my ego. :)  This is always a very popular session and one I really enjoy giving.  Tips and Tricks talks are pretty easy because you dont have to go in depth with any particular thing, and theyre almost always with existing technology so youre not dealing with betas, lack of documentation, and other issues.  Its an easy session to do well, in my experience, and one which I think attendees definitely appreciate.   Whats New in ASP.NET MVC 2 Evals Turned In: 23 Overall Eval: 3.77 Average Score: 3.47 (wow, I cant believe I scored better on this talk than the tips and tricks talk, which Ive given many times and was more excited about) 96% found the technical level Just Right.  90% found 50% or more of the material to be new.  43% can use the info immediately, and another 43% can use it within 2-6 months I guess that speaks to adoption rates of MVC 2 among my attendees Overall 74% said the session was Excellent, 22% Good.  4% No Response. All Comments (6): Great job, thank you. Great speaker! Really good, a little lost in the code at some points, but great information. Speaker needs to repeat questions from audience for everyone to hear. Exceeded my expectations. Great speaker, very informative. I really do try to religiously repeat questions from the audience for everyone to hear, but obviously I didnt do it 100% of the time.  Note to self remember to repeat questions.  
That and making fonts big are really basic speaker best practices, which just goes to prove that fundamentals are always something that can be perfected.   SOLIDify Your ASP.NET MVC 2 Application Evals Turned In: 8 (!) Overall Eval: 3.63 Average Score: 3.47 As I recall this was one of the last talks of the day / show, which might account for the low number of evals turned in.  I dont recall speaking to an empty room for this talk, although it certainly wasnt as crowded as the tips and tricks talk. 100% found the technical level Just Right.  100% found at least half the material new.  62.5% can use it at once and 37.5% within 2-6 months.  62.5% rated the session Excellent overall; 37.5% Good.  Im thinking there were 5 evals with all 4s checked and 3 with all 3s checked (4 = Excellent, 3 = Good) All Comments (3): This covered many topics Ive read about recently, and it helped reinforce them. It was a nice overview of the solid principle, but I thought there might be specifics for MVC2.  I am glad there is not. Move a little slower. Ok, so another fundamental dont go too fast.  Looks like I got one fundamental tip from the comments of each talk. My Take-Aways Remember the fundamentals.  Its worth going through a checklist prior to presenting to make sure these things are fresh in your mind.  Increase all font sizes.  Repeat all questions from audience members without microphones (this is also a great way to stall for time, btw).  Resist the urge to move too quickly especially if youre nervous or short of time.  Writing this up in a blog post also further reinforces these fundamentals for me, which is one of the main reasons why I do it I retain things better when I write them, and even moreso when I write them for public consumption since I have to really think about what Im saying.  And maybe a few of you find this interesting or helpful, which is a bonus. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

    C:> winrm quickconfig
    WinRM is not set up to receive requests on this machine.
    The following changes must be made:
    Set the WinRM service type to auto start.
    Start the WinRM service.
    Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

    $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, with the inclusion of information for both composer-based customizations and the generic design-time customizations done via JDeveloper. Introduction Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (In Oracle Cloud production environments languages are pre-installed already). As such many organizations offer their users the option of working with their local language, and logically that should also apply for any customizations as well. Composer-based UI Customizations Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and stores the customization in the MDS repository accordingly. As such the actual new or changed values used will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the “Select Text Resource” menu option when editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below the “What’s New” custom value I have already defined using the ‘Select Text Resource’ feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle. Figure 1 – Page Composer showing the override bundle being used. Business Objects Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating these additional values for these fields into other installed languages requires loading additional resource bundles, again as described below. Reports and Analytics Most customizations to Reports and BI Analytics are just essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the users session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as ‘Localizing’ the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined in template-per-locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel which shows the screen for both setting the template local and running an export for translation. 
Fusion Applications Menus Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below. More on Resource Bundles As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file with the extension .xliff, and store translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again.This needs to be done by an administrator, via either WLST commands or through Enterprise Manager as per the screenshot below. This is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist. Figure 2 – Enterprise Manager’s MDS export used getting resource bundles for manual translation and re-imported on the same screen. All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such each language bundle can be easily identified and updated. Similarly if you used JDeveloper to create your own applications as extensions to Fusion Applications you would use the native support for resource bundles, and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see chapters on “Using Automatic Resource Bundle Integration in JDeveloper” and “Manually Defining Resource Bundles and Locales” in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework. FND Messages and Look-ups FND Messages, as defined here, are not used for UI labels (they are known as ‘strings’), but are the responses back to users as a result of an action, such as from a page submit. Each ‘message’ is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs, however if you customize the messages via the “Manage Messages” task in Functional Setup Manager, or add new ones, then currently (in Release 7) you’ll need to repeat it for each language. Figure 3 – An FND Message defined in an English user session. Similarly Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the users session language and making the change  in the Setup and Maintenance task entitled “Manage [Standard|Common] Look-ups”. Online Help Yes, in fact all the seeded help is applied as part of each language pack install as part of the post-install provisioning process. If you are editing or adding custom online help then the Create Help screen provides a drop-down of which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you’ll need to it for each language in use. What is Coming for Translations? 
Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Application. This will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the Resource Bundles that contain it. In the case of multi-language environments, it will use the users session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved.  It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to over-write the global changes done via the UI Text Editor, allowing for special context-sensitive values to still be used. Further Reading and Resources The following short list provides the mains resources for digging into more detail on translation support for both Composer and JDeveloper customization projects. There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide. This has good examples and steps for many tasks, especially administering resource bundles. Using localization formatting (numbers, dates etc) for design-time changes is well documented in the Fusion Applications Developer Guide. For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide. The Oracle Architecture ‘A-Team’ provided a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration.

    Read the article

  • Get started with JPA 2.0, EJB 3.1, and JSF 2.0 now! Java EE 6 development with WebLogic Server 12c | WebLogic Channel | Oracle

    - by ???02
    2012?2???????????????WebLogic Server 12c?????????Java EE 6?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse 12c??WebLogic Server 12c(???)????Java EE 6??????3??????????????????????????????JPA 2.0??????????·?????????EJB 3.1???????·???????????????(???)???????O/R?????????????JPA 2.0 Java EE 6????????????????????Web?????????????3?????(3????)???????·????????????·????????????????????????????????JPA(Java Persistence API) 2.0???EJB(Enterprise JavaBeans) 3.1???JSF(JavaServer Faces) 2.0????3????????????????·???????????JPA??Java??????????????·?????????????O/R?????????????????????·???????????EJB?Session Bean??????????????????·??????????????????????JSF??????????????????????????????????????? ??????JPA????Oracle Database??EMPLOYEES?????Java??????????????????????Entity Bean??????XML?????????????????????????XML????????????????????????????????????????????????????·?????????????????????????????????????????????????????????????Java EE 6??????JPA 2.0??????????·???????????????????????????????????????????????????????????????????????????????????????? ?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse(OEPE)??????File????????New?-?Other??????? ??????New??????????????????????????Web?-?Dynamic Web Project???????Next????????????????Dynamic Web Project?????????????Project name????OOW???????????Target Runtime????New Runtime????????? ???New Server Runtime Environment???????????????Oracle?-?Oracle WebLogic Server 12c(12.1.1)???????Next???????????????????????????WebLogic home????C:\Oracle\Middleware\wlserver_12.1???????Finish?????????????WebLogic Home????????????????????????Java home?????????????????????Finish??????????????????????Dynamic Web Project????????????????Finish??????????????????JPA 2.0??????????·?????? ???????????????JPA 2.0???????????????·??????????????????Eclipse??Project Explorer?(??????·???)?????????OOW?????????????????????????????·???????????????Properties?????????????????·???·????????????????????????????Project Facets?????????????JPA??????(?????????????Details?????JPA 2.0?????????????????????)???????????????????Further configuration available????????? ???Modify Faceted Project??????????????????????????????????Connection????????????????????????????Add Connection????????? ??????New Connection Profile????????????????Connection Profile Type????Oracle Database Connection??????Next???????????? ???Specify a Driver and Connection Details???????Drivers????Oracle Database 10g Driver Default???????????Properties?????????????????????SIDxeHostlocalhostPort number1521User nameHRPasswordhr ???????????Test Connection??????????????????Ping Succeeded!?????????????????????????????Finish???????????Modify Faceted Project????????OK????????????????Properties for OOW????????OK?????????????????? ?????????Eclipse????????????????OOW?????????????????·???????????????JPA Tools?-?Generate Entities from Tables...??????? ????Generate Custom Entities???????????????????????????????Schema????HR??????Tables????EMPLOYEES???????????Next???????????? ???????????Next???????????Customize Default Entity Generation??????Package????model???????Finish?????????????JPQL?????????? 
?????????Oracle Database??EMPLOYEES??????????????????·????model.Employee.java?????????????????????????????????·?????OOW????Java Resources?-?src?-?model???????Employee.java????????????????????????????????·???Employee????(Employee.java)?package model; import java.io.Serializable; import java.math.BigDecimal; import java.util.Date; import java.util.Set; import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?// Apublic class Employee implements Serializable {        private static final long serialVersionUID = 1L;       @Id  // ?       @Column(name="EMPLOYEE_ID")        private long employeeId;        @Column(name="COMMISSION_PCT")        private BigDecimal commissionPct;        @Column(name="DEPARTMENT_ID")        private BigDecimal departmentId;        private String email;        @Column(name="FIRST_NAME")        private String firstName;       @Temporal( TemporalType.DATE)  //?       @Column(name="HIRE_DATE")        private Date hireDate;        @Column(name="JOB_ID")        private String jobId;        @Column(name="LAST_NAME")        private String lastName;        @Column(name="PHONE_NUMBER")        private String phoneNumber;        private BigDecimal salary;        //bi-directional many-to-one association to Employee<...?...>}  ???????????????·???????????????????????????????????????????@Table(name="")??????@Table??????????????????????????????????????? ?????????????????????????????????????·???????????????? ?????????????????????????????SQL?Data?????????? ???????????????A?????JPA?????????JPQL(Java Persistence Query Language)?????????????JPQL?????SQL???????????????????????????????????????????????????????????????????????????????????Employee.selectByNameEmployee??firstName????????????????????employeeId????????? ?????????????????????import java.util.Date;import java.util.Set;import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?@NamedQueries({       @NamedQuery(name="Employee.selectByName" , query="select e from Employee e where e.firstName like :name order by e.employeeId")})<...?...> ?????????·??????OOW?-?JPA Content?-?persistent.xml??????Connection???????????????Database????JTA data source:???jdbc/test????????????????????????Java EE 6??????JPA 2.0???????????????????????????????????·??????????????????????????????????????SQL????????????????????????·????????????·??????????????XML??????????????????1??????????????????????????????????????????????????????????????????EJB 3.1????????·???????????EJB 3.1????????·?????????????????EJB 3.1?Stateless Session Bean?????·????????????????·???????????????????·??????????????????? EJB3.1?????JPA 2.0???????????·???????????????????????XML???????????????????????????????EJB 3.1?????????·????EJB?????????????????????????????????????????????????????????????? ????????EJB 3.1?Session Bean?????·????????????????????????????????????????????????????public List<Employee> getEmp(String keyword)firstName????????????Employee?????? ????????????????????·???????????OOW????????????·???????????????New?-?Other???????????????????????????????????EJB?-?Session Bean(EJB 3.x)??????NEXT????????????????????Create EJB 3.x Session Bean?????????????Java Package????ejb???class name????EmpLogic???????????State Type????Stateless?????????No-interface???????????????????????Finish???????????? 
?????????Stateless Session Bean??????·?????EmpLogic.java????????????????????EmpLogic????·????????EJB?????????????Stateless Session Bean?????????@Stateless?????????????????????????????????????EmpLogic????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       public EmpLogic() {       }} ??????????????????????????????????????·???????????????????????import??????????????????EmpLogic??????????????????????????·???????????????????????import????????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ?import javax.persistence.PersistenceContext;  // ?<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {      @PersistenceContext(unitName = "OOW")  // ?      private EntityManager em;  // ?       public EmpLogic() {       }} ?????????·???????JPA???????????????????·????????????????????????????CRUD???????????????????·????????????EntityManager???????????????????????????1????????????????·???????????????????????@PersistenceContext?????unitName?????????????persistence.xml????persistence-unit???name?????????????? ???????EmpLogic?????·???????????????????????????????????????????????????????????????????????????????EmpLogic????????·???????(EmpLogic.java)?package ejb;import java.util.List;  // ? import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ? import javax.persistence.PersistenceContext;  // ? <...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       @PersistenceContext(unitName = "OOW")  // ?        private EntityManager em;  // ?        public EmpLogic() {       }      @SuppressWarnings("unchecked")  // ?      public List<Employee> getEmp(String keyword) {  // ?             StringBuilder param = new StringBuilder();  // ?             param.append("%");  // ?             param.append(keyword);  // ?             param.append("%");  // ?             return em.createNamedQuery("Employee.selectByName")  // ?                    .setParameter("name", param.toString()).getResultList();  // ?      }} ???EJB 3.1???Stateless Session Bean?????????? ???JSF 2.0???????????????????????????????????????????????????JAX-RS????RESTful?Web??????????????????????
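
    To preview how the JSF 2.0 layer would consume this bean, here is a minimal, hypothetical sketch of a request-scoped managed bean that injects EmpLogic with @EJB and runs the keyword search. The bean name, package, and properties are illustrative assumptions, not code from the article:

    package view;

    import java.util.List;
    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;

    import ejb.EmpLogic;
    import model.Employee;

    @ManagedBean
    @RequestScoped
    public class EmpBean {

        @EJB
        private EmpLogic empLogic;          // the stateless session bean built above

        private String keyword = "";        // bound to an input field on the JSF page
        private List<Employee> employees;   // bound to a data table on the JSF page

        public String search() {
            employees = empLogic.getEmp(keyword);   // delegate to the EJB / JPA layers
            return null;                            // stay on the same view
        }

        public String getKeyword() { return keyword; }
        public void setKeyword(String keyword) { this.keyword = keyword; }
        public List<Employee> getEmployees() { return employees; }
    }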

    Read the article

  • git push >> fatal: no configured push destination

    - by Marc
    I'm still going through some guides on RoR and I'm stuck here at "Deploying the demo app". I followed the instructions: "With the completion of microposts resources, now is a good time to push the repository up to GitHub:"

    $ git add .
    $ git commit -a -m "Finish demo app"
    $ git push

    What went wrong here was the push part; it output this:

    $ git push
    fatal: No configured push destination.
    Either specify the URL from the command-line or configure a remote repository using
        git remote add <name> <url>
    and then push using the remote name
        git push <name>

    So I tried following the instructions with this command:

    $ git remote add demo_app 'www.github.com/levelone/demo_app'
    fatal: remote demo_app already exists.

    So I push:

    $ git push demo_app
    fatal: 'www.github.com/levelone/demo_app' does not appear to be a git repository
    fatal: The remote end hung up unexpectedly

    What can I do here? Any help would be much appreciated. -Marc

    Read the article

  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF related sessions at a glance. Note the Meet and greet session added for Wednesday Octiber 3rd from 4.30 pm to 5:30. Oracle ADF and Fusion Development General Session Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud Marriott Marquis - Salon 8 12:15 PM - 1:15 PM General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology Moscone West - 3014 1:45 PM - 2:45 PM General Session: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies Moscone West - 3002/3004 4:45 PM - 5:45 PM General Session: Building Mobile Applications with Oracle Cloud Moscone West - 2002/2004 Conference Session Mon 1 Oct, 2012 Time Title Location 12:15 PM - 1:15 PM Understanding Oracle ADF and Its Role in Oracle Fusion Moscone South - 306 1:45 PM - 2:45 PM Building Performant Oracle ADF Business Components to Meet Tomorrow’s Needs Marriott Marquis - Golden Gate C3 3:15 PM - 4:15 PM End-to-End Oracle ADF Development in Eclipse Marriott Marquis - Golden Gate C3 4:45 PM - 5:45 PM Classic Mistakes with Oracle Application Development Framework Marriott Marquis - Salon 7 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM One Size Doesn’t Fit All: Oracle ADF Architecture Fundamentals Marriott Marquis - Golden Gate C2 10:15 AM - 11:15 AM Oracle Business Process Management/Oracle ADF Integration Best Practices Marriott Marquis - Golden Gate C3 11:45 AM - 12:45 PM Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF Moscone South - 306 11:45 AM - 12:45 PM Secrets of Successful Projects with Oracle Application Development Framework Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications Moscone West - 3003 1:15 PM - 2:15 PM The Future of Forms Is … Oracle Forms (and Friends) Moscone South - 306 5:00 PM - 6:00 PM Best Practices for Integrating SOAP and REST Service into Oracle ADF Marriott Marquis - Golden Gate C2 Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite Moscone West - 3001 10:15 AM - 11:15 AM Visualize This! 
Best Practices for Data Visualization in Desktop and Mobile Apps Marriott Marquis - Golden Gate C3 10:15 AM - 11:15 AM Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips Marriott Marquis - Golden Gate C2 11:45 AM - 12:45 PM How to Migrate an Oracle Forms Application to Oracle ADF Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Oracle ADF: Lessons Learned in Real-World Implementations Moscone South - 309 3:30 PM - 4:30 PM Oracle ADF Implementations Around the Globe: Best Practices Marriott Marquis - Golden Gate C2 3:30 PM - 4:30 PM Oracle Developer Cloud Services Marriott Marquis - Salon 7 4:30 PM - 5:30 PM Oracle JDeveloper and Oracle ADF: What’s New Hilton San Francisco - Continental Ballroom 5 5:00 PM - 6:00 PM Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight Moscone West - 2020 5:00 PM - 6:00 PM Extending Social into Enterprise Applications and Business Processes Marriott Marquis - Golden Gate C3 5:00 PM - 6:00 PM The Tie That Binds: An Introduction to Oracle ADF Bindings Marriott Marquis - Golden Gate C2 Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Using Oracle ADF with Oracle E-Business Suite: The Full Integration View Moscone West - 3003 11:15 AM - 12:15 PM Deep Dive into Oracle ADF: Advanced Techniques Marriott Marquis - Golden Gate C2 12:45 PM - 1:45 PM Monitor, Analyze, and Troubleshoot Your Oracle ADF Application Marriott Marquis - Golden Gate C2 2:15 PM - 3:15 PM Oracle WebCenter Portal: Creating and Using Content Presenter Templates Marriott Marquis - Golden Gate C2 HOL (Hands-on Lab) Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:45 PM - 2:45 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 4:45 PM - 5:45 PM Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab Marriott Marquis - Salon 3/4 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 11:45 AM - 12:45 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:15 PM - 2:15 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:30 PM - 4:30 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 11:15 AM - 12:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 12:45 PM - 1:45 PM Oracle ADF for Java EE Developers with 
Oracle Enterprise Pack for Eclipse Marriott Marquis - Salon 3/4 BOF (Birds-of-a-Feather) Mon 1 Oct, 2012 Time Title Location 6:15 PM - 7:00 PM How to Get Started with Oracle ADF Marriott Marquis - Club Room 7:15 PM - 8:00 PM Building Next-Generation Applications with Oracle ADF and Oracle BPM Marriott Marquis - Golden Gate C3 7:15 PM - 8:00 PM The Future of Oracle Forms: Upgrade, Modernize, or Migrate? Marriott Marquis - Golden Gate C2 7:15 PM - 8:00 PM Oracle ADF Faces: One Site for Many Devices Marriott Marquis - Golden Gate C1 - User Group Forum (Sunday Only) Sun 30 Sept, 2012 Time Title Location 9:00 AM - 10:00 AM Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World Moscone South - 305 10:15 AM - 11:15 AM Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications Moscone South - 305 11:30 AM - 12:30 PM ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications Moscone South - 305 12:45 PM - 3:45 PM ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo Moscone South - 305 3:15 PM - 4:15 PM Mobile Development with Oracle JDeveloper and Oracle ADF Moscone West - 2010 Demos Demo Location Developer Moscone North, Upper Lobby - N-002 Oracle ADF Mobile Development Moscone North, Upper Lobby - N-001 Oracle Eclipse Projects Hilton San Francisco, Grand Ballroom - HHJ-008 Oracle Enterprise Pack for Eclipse Moscone South, Right - S-208 Oracle JDeveloper and Oracle ADF Moscone South, Right - S-207 Exhibits 0 Exhibitor Location Accenture Moscone South - 1813 Moscone South - 2221 Infosys Moscone South - 1701 Moscone South - SMR-005 Innowave Technology Moscone South - 2309 ODTUG Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion Oracle ADF Developers Meet Up Wednesday, Oct 03 Time Activity Location 4:30 PM - 5:30 PM Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say ‘hi!’ and enjoy free beer. OTN Lounge

    Read the article

  • com.jcraft.jsch.JSchException: UnknownHostKey

    - by Alex
    I don't know how SSH works and I think this is a simple question. How do I fix this exception?

    com.jcraft.jsch.JSchException: UnknownHostKey: mywebsite.com. RSA key fingerprint is 22:fb:ee:fe:18:cd:aa:9a:9c:78:89:9f:b4:78:75:b4

    I know I should verify that key or something, but there is like zero documentation for JSch. Here is my code; it's really straightforward:

    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class ssh {
        public static void main(String[] arg) {
            try {
                JSch jsch = new JSch();

                // create SSH connection
                String host = "mywebsite.com";
                String user = "username";
                String password = "123456";

                Session session = jsch.getSession(user, host, 22);
                session.setPassword(password);
                session.connect();
            } catch (Exception e) {
                System.out.println(e);
            }
        }
    }
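
    For reference, a couple of commonly used ways to deal with the UnknownHostKey check, shown as a hedged sketch that reuses the host, user, and password values from the question (the known_hosts path is an assumption): either point JSch at a known_hosts file that already contains the server's host key, or, for testing only, relax strict host key checking.

    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class SshHostKeySketch {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();

            // Option 1: supply a known_hosts file that already contains the server's host key
            // (path is an assumption; on Linux/macOS it is typically ~/.ssh/known_hosts)
            jsch.setKnownHosts("/home/me/.ssh/known_hosts");

            Session session = jsch.getSession("username", "mywebsite.com", 22);
            session.setPassword("123456");

            // Option 2 (testing only): skip host key verification entirely.
            // This removes protection against man-in-the-middle attacks.
            // session.setConfig("StrictHostKeyChecking", "no");

            session.connect();
            session.disconnect();
        }
    }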

    Read the article

  • Compiling libcurl for mingw32 (Windows) on mac os x 10.6

    - by Daniel
    Hello. I'm compiling libcurl for mingw32 as follows:

    ./configure --prefix=/Users/daniel/mingw32 "CFLAGS= -ABI=32"
    make
    make install

    But when compiling a program using mingw32-gcc:

    i386-mingw32-gcc -lcurl -o bin/remote-win.exe remote.c

    I get:

    In file included from /Users/daniel/mingw32/usr/local/include/curl/curl.h:34,
                     from remote.c:6:
    /Users/daniel/mingw32/usr/local/include/curl/curlbuild.h:152:26: sys/socket.h: No such file or directory
    In file included from /Users/daniel/mingw32/usr/local/include/curl/curl.h:34,
                     from remote.c:6:
    /Users/daniel/mingw32/usr/local/include/curl/curlbuild.h:165: error: syntax error before "curl_socklen_t"
    In file included from /Users/daniel/mingw32/usr/local/include/curl/curl.h:35,
                     from remote.c:6:
    /Users/daniel/mingw32/usr/local/include/curl/curlrules.h:143: error: size of array `__curl_rule_01__' is negative
    /Users/daniel/mingw32/usr/local/include/curl/curlrules.h:153: error: size of array `__curl_rule_02__' is negative

    I'm pretty sure the error is because curl_socklen_t does not exist on Windows. I've tried --target=--mingw32 but still no success. Please help

    Read the article

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
    Here are some errors that I am getting:

    1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close.

    root@p:/# firestarter
    ** (firestarter:5890): WARNING **: The connection is closed
    (firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported.
    (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    [the same GConf-WARNING line is printed six more times]
    ^C

    2) Also, I cannot apt-get install sudo:

    root@p:/# apt-get install sudo
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    sudo is already the newest version.
    The following packages were automatically installed and are no longer required:
      gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
    9 not fully installed or removed.
    Need to get 0 B/76.3 MB of archives.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    /bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found
    (Reading database ... 495741 files and directories currently installed.)
    Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ...
    dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg - trying script from the new package instead ...
    dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory
    dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack):
     subprocess new pre-removal script returned error exit status 2
    dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 2
    Preparing to replace linux-image-3.2.0-25-generic 3.2.0-25.40 (using .../linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb) ...
    dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.prerm): No such file or directory
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg - trying script from the new package instead ...
    dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory
    dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb (--unpack):
     subprocess new pre-removal script returned error exit status 2
    dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.postinst): No such file or directory
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
     /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb
     /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
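    The log points at missing maintainer scripts and helper binaries rather than broken package contents. A hedged starting point (generic recovery commands, not a guaranteed fix for this particular corruption) is to reinstall dpkg, debconf, and the affected kernel packages so the missing scripts are restored:

    # Assumption: apt itself still runs; these only restore missing maintainer
    # scripts and helpers, they do not repair any other filesystem damage.
    apt-get install --reinstall dpkg debconf   # debconf provides /usr/sbin/dpkg-preconfigure
    apt-get -f install                         # let apt finish the 9 half-installed packages
    apt-get install --reinstall linux-image-3.2.0-24-generic linux-image-3.2.0-25-generic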

    Read the article

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and complaining about open_basedir errors like this:

    PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

    So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and the warning was caused by WordPress accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for the session.save_path config: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
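    Two commonly suggested ways to reconcile the settings, sketched below under the assumption of Apache with mod_php and per-vhost directives (the paths are placeholders): either add the shared session directory to open_basedir, or move session.save_path inside each vhost's allowed tree.

    # Per-vhost PHP settings; adjust paths to the real vhost layout.
    <VirtualHost *:80>
        DocumentRoot /path/to/vhost/web
        # Option 1: allow the shared session directory.
        php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"
        # Option 2: keep session files inside the vhost's own tree instead.
        # php_admin_value session.save_path "/path/to/vhost/tmp/sessions"
    </VirtualHost>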

    Read the article

  • OTN Architect Day Headed to Reston, VA - May 16

    - by Bob Rhubart
    In 2011 OTN Architect Day made stops in Chicago, Denver, Phoenix, Redwood Shores, and Toronto. The 2012 series begins with OTN Architect Day in Reston, VA on Wednesday May 16. Registration is now open for this free event, but don't get caught napping -- seating is limited, and the event is just 5 weeks away. The information below reflects the most recent updates to the event agenda, including the addition of Oracle ACE Director Kai Yu as the guest keynote speaker. Kai is Senior System Engineer / Architect at Dell, Inc., and has been very busy of late as a speaker at various industry and Oracle User Group events. I'm very happy Kai has agreed to make the trek from his hometown in Austin, TX to share his insight at the Architect Day event in Reston. If you're in the area, put this one on your calendar. You won't be sorry.

    Venue
    Sheraton Reston Hotel
    11810 Sunrise Valley Drive
    Reston, VA 20191

    Event Agenda
    8:30 am - 9:00 am | Registration and Continental Breakfast
    9:00 am - 9:15 am | Welcome and Opening Comments
    9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossman
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and a lower cost of ownership, on a 3- or 5-year basis, than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
    10:00 am - 10:30 am | High Availability Infrastructure for Cloud Computing | Kai Yu
    Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and the related best practices that system architects should consider in cloud infrastructure design to ensure high availability.
    10:30 am - 10:45 am | Break
    10:45 am - 11:30 am | Breakout Sessions (pick one)
    - Innovations in Grid Computing with Oracle Coherence | Bjorn Boe
      Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack.
    - Cloud Computing - Making IT Simple | Scott Mattoon
      The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
    11:30 am - 12:15 pm | Breakout Sessions (pick one)
    - Oracle Enterprise Manager | Joe Diemer
      Oracle Enterprise Manager (EM) provides complete lifecycle management for the cloud - from automated cloud setup to self-service delivery to cloud operations. In this session you'll learn how to take control of your cloud infrastructure with EM features including Consolidation Planning and Self-Service provisioning with Metering and Chargeback. Come hear how Oracle is expanding its management capabilities into the cloud!
    - Rationalization and Defense in Depth - Two Steps Closer to the Clouds | Dave Chappelle
      Security represents one of the biggest concerns about cloud computing. In this session we'll get past the FUD with a real-world look at some key issues. We'll discuss the infrastructure necessary to support rationalization and security services, explore architecture for defense-in-depth, and deal frankly with the good, the bad, and the ugly in Cloud security.
    12:15 pm - 1:15 pm | Lunch
    1:40 pm - 2:00 pm | Panel Discussion - Q&A
    2:00 pm - 2:45 pm | Breakout Sessions (pick one)
    - 21st Century SOA | Peter Belknap
      Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack.
    - Track B: Oracle Cloud Reference Architecture | Anbu Krishnaswamy
      Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation gives an overview of Oracle's Cloud Reference Architecture, which is part of the Cloud Enterprise Technology Strategy (ETS). Concepts covered include common management layer capabilities, service models, resource pools, and use cases.
    2:45 pm - 3:00 pm | Break
    3:00 pm - 4:00 pm | Roundtable Discussions
    4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtable
    4:15 pm - 5:00 pm | Cocktail Reception / Networking
    Session schedule and content subject to change.

    Read the article
