Search Results

Search found 1448 results on 58 pages for 'sap connector'.

Page 16/58 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • SBS 2008 R2: Did something change with anonymous relays?

    - by gravyface
    Have noticed that prior documentation on setting up anonymous relays in SBS 2008 no longer works without some additional configuration. You used to be able to follow this documentation, which is basically: set up a new Receive Connector, add the IP address(es) that will be permitted to relay, check off "Anonymous users" under Permission Groups, and then run the Exchange shell command to grant relay permissions. Now what seems to be happening is that if the permitted IP address happens to fall within the same address space as another, more restrictive Receive Connector (like the "Default SBS08" one), and possibly if it's ahead of the new Receive Connector alphabetically (haven't tested that yet), the relay attempt fails with a "Client Was Not Authenticated" error. To get it to work, I had to modify the scope of the "Default SBS08" Receive Connector to exclude the one LAN IP that I wanted to allow relaying for. I can't recall ever having to do this for Exchange 2007 Standard and/or any other SBS 2008 servers I've set up over the last couple of years, and the wiki entry I added at the office doesn't mention it either. So my question is: has anyone else experienced this? Has there been a change in R2, or perhaps in an Exchange service pack?
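
    For reference, the documented anonymous relay setup in the Exchange Management Shell usually looks roughly like this (a minimal sketch; the connector name and IP address are placeholders, not taken from the post):

      # Create a dedicated receive connector scoped to only the host allowed to relay
      New-ReceiveConnector -Name "Anonymous Relay" -Usage Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.1.50

      # Grant anonymous sessions the right to relay to any recipient through it
      Get-ReceiveConnector "Anonymous Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"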

    Read the article

  • Multiple Audio Issues

    - by Lerp
    I am having issues with my audio on Ubuntu 12.04. I will try to give as much detail as possible, so sorry if there's too much of it.

    The problem:

    Audio plays from both the speakers and the headphones regardless of which connector I choose and regardless of the profile I use. The microphone is constantly being played through the headphones and speakers. The headphone audio is extremely quiet but plays in both ears when I select "Headphones" as the connector in Sound Settings. The headphone audio plays in only one ear and is quiet (but not as quiet as above) when I select "Analogue Output" as the connector in Sound Settings. I can only select "Headphones" as the connector in Sound Settings if I set the profile to "Analogue Stereo Output/Duplex"; all other profiles only allow me to choose "Analogue Output" as the connector. Despite the headphone sound issues, the speaker sound is fine, apart from the fact that I am not able to select which output is used; they just both play. My headphones and microphone are plugged into the front, and my speakers are plugged into the back.

    What I have tried:

    I have set everything in alsamixer to 100, apart from "Front Mic Boost" which I have set to 0.

    Command output:

      aplay -l
      **** List of PLAYBACK Hardware Devices ****
      card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
        Subdevices: 0/1
        Subdevice #0: subdevice #0
      card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

      arecord -l
      **** List of CAPTURE Hardware Devices ****
      card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
        Subdevices: 2/3
        Subdevice #0: subdevice #0
        Subdevice #1: subdevice #1
        Subdevice #2: subdevice #2

      cat /proc/asound/cards
      0 [Intel ]: HDA-Intel - HDA Intel
        HDA Intel at 0xf7ff8000 irq 70

      cat /proc/asound/modules
      0 snd_hda_intel

      cat /proc/asound/card*/codec* | grep "Codec"
      Codec: Analog Devices AD1989B

      cat /etc/modprobe.d/alsa-base.conf
      # autoloader aliases
      install sound-slot-0 /sbin/modprobe snd-card-0
      install sound-slot-1 /sbin/modprobe snd-card-1
      install sound-slot-2 /sbin/modprobe snd-card-2
      install sound-slot-3 /sbin/modprobe snd-card-3
      install sound-slot-4 /sbin/modprobe snd-card-4
      install sound-slot-5 /sbin/modprobe snd-card-5
      install sound-slot-6 /sbin/modprobe snd-card-6
      install sound-slot-7 /sbin/modprobe snd-card-7
      # Cause optional modules to be loaded above generic modules
      install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
      #
      # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
      install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
      install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
      install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
      #
      install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }
      # Cause optional modules to be loaded above sound card driver modules
      install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
      install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }
      # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
      install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }
      # Prevent abnormal drivers from grabbing index 0
      options bt87x index=-2
      options cx88_alsa index=-2
      options saa7134-alsa index=-2
      options snd-atiixp-modem index=-2
      options snd-intel8x0m index=-2
      options snd-via82xx-modem index=-2
      options snd-usb-audio index=-2
      options snd-usb-caiaq index=-2
      options snd-usb-ua101 index=-2
      options snd-usb-us122l index=-2
      options snd-usb-usx2y index=-2
      # Ubuntu #62691, enable MPU for snd-cmipci
      options snd-cmipci mpu_port=0x330 fm_port=0x388
      # Keep snd-pcsp from being loaded as first soundcard
      options snd-pcsp index=-2
      # Keep snd-usb-audio from being loaded as first soundcard
      options snd-usb-audio index=-2

    Hopefully I have provided enough information; I will happily provide any more information needed. Thank you.

    Update: Reinstalling alsa-base and pulseaudio fixed the headphone issues I was having.
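
    For anyone in the same situation, that reinstall is usually done along these lines (a sketch, assuming stock Ubuntu packages; these are not the poster's exact commands):

      sudo apt-get install --reinstall alsa-base pulseaudio
      sudo alsa force-reload    # reload the ALSA modules without rebooting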

    Read the article

  • Oracle announces Q3 FY10 results

    - by Paulo Folgado
    Oracle Reports GAAP EPS of $0.23, Non-GAAP EPS of $0.38. New Software Licenses Up 13%, Applications New Licenses Up 21%.

    Oracle Corporation today announced fiscal 2010 Q3 GAAP total revenues were up 17% to $6.4 billion, while non-GAAP total revenues were up 18% to $6.5 billion. Excluding the impact of Sun Microsystems, Inc., which Oracle acquired on January 26, 2010, GAAP total revenue grew 7%. GAAP new software license revenues were up 13% to $1.7 billion, and up 10% to $1.7 billion excluding Sun. GAAP software license updates and product support revenues were up 13% to $3.3 billion, while non-GAAP software license updates and product support revenues were up 12% to $3.3 billion. GAAP operating income was down 5% to $1.8 billion, and GAAP operating margin was 29%. Non-GAAP operating income was up 13% to $2.9 billion, and non-GAAP operating margin was 45%. GAAP net income was down 10% to $1.2 billion, while non-GAAP net income was up 9% to $1.9 billion. GAAP earnings per share were $0.23, down 11% compared to last year, while non-GAAP earnings per share were up 9% to $0.38. GAAP operating cash flow on a trailing twelve-month basis was $8.2 billion.

    "Our solid top line growth, coupled with disciplined expense management, was key in generating $8.0 billion of free cash flow over the last twelve months," said Oracle CFO Jeff Epstein.

    "The Sun integration is going even better than we expected," said Oracle President Safra Catz. "We believe that Sun will make a significant contribution to our fourth quarter earnings per share as well as meet the profitability goals we set for next year."

    "Exadata is the fastest growing product in Oracle's history," said Oracle President Charles Phillips. "Introduced a little over a year ago, the Exadata pipeline is now approaching $400 million, with Q4 bookings forecast at nearly $100 million. This strengthens both sales growth and profitability in our Sun server and storage businesses."

    "Every quarter we grab huge chunks of market share from SAP," said Oracle CEO Larry Ellison. "SAP's most recent quarter was the best quarter of their year, only down 15%, while Oracle's application sales were up 21%. But SAP is well ahead of us in the number of CEOs for this year, announcing their third and fourth, while we only had one."

    In addition, Oracle's Board of Directors declared a cash dividend of $0.05 per share of outstanding common stock to be paid to stockholders of record as of the close of business on April 14, 2010, with a payment date of May 5, 2010. Future declarations of quarterly dividends and the establishment of future record and payment dates are subject to the final determination of Oracle's Board of Directors.

    Q3 Earnings Conference Call and Webcast

    Oracle will hold a conference call and web broadcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (800) 214-0694 or (719) 955-1425, Passcode: 567035. To access the live web broadcast of this event, please visit the Oracle Investor Relations web site at http://www.oracle.com/investor.

    Read the article

  • AutoVue Success at Siemens Energy!

    - by prasenjit.niyogi(at)oracle.com
    Siemens Improves Review and Collaboration with Visually Enabled Engineering Platform

    Siemens Energy Incorporated offers products, solutions, and services for the entire energy conversion chain, from power generation and transmission to distribution. The organization primarily serves energy utilities and industrial companies. Siemens faced challenges in the form of:

    - Long design review cycles and potential field service delays that stemmed from users' inability to digitally access, view, and collaborate on design documents for energy-related projects stored in SAP
    - High costs and IT administration complexity caused by multiple design visualization tools

    Learn how the customized integration of Oracle's AutoVue with SAP, thanks to Oracle partner Lifecycle Technology, significantly streamlined design review processes, improved productivity, and eliminated paper-based collaboration for the field service technicians and engineers. Read the complete snapshot here.

    Read the article

  • junior / professional / senior categorization

    - by oozoo
    Hey guys, is it just me or is the categorization of developer levels highly subjective? I get the feeling that every company tries to hire experienced developers as juniors because they don't know $technology. Take my own career as an example: I switched technologies a couple of times while sticking with Java as a programming language. I first worked for 3 years using Java SE technologies; the next company I worked for hired me as a junior because I didn't have Java EE experience, while still selling me to customers at the professional level (I work in consulting). The company after that hired me as a junior again because I didn't have SAP experience; they mostly work with SAP's Java technologies, which is definitely a niche. Still, they sell all their technology consultants at exactly the same rate while paying them significantly different wages. Now that I'm switching jobs again, I feel like this whole thing is going to start all over because I don't have Spring experience or Oracle knowledge. tl;dr: is my observation totally off base that companies are just using these categorizations as a means to keep wages down?

    Read the article

  • I am an Indian, is it possible for me to get a job in Europe?

    - by Yuva
    Hi, I have just started my career as a software engineer with a reputable company in India, working in SAP ABAP. The chances to grow within this company are good, but slow. I would like to work in European countries where SAP is popular and where the options for career growth and pay are better. Is it possible for me to get a job in Europe after three years of ABAP experience? If so, what should I do to land a satisfactory job?

    Read the article

  • Database Insider - November 2012 issue

    - by Javier Puerta
    The November issue of the Database Insider newsletter is now available. (Full newsletter here)

    Mark Hurd: Oracle Database Wrap-up from Oracle OpenWorld 2012. Oracle executives kicked off Oracle OpenWorld 2012, discussing the needs of customers, the brand-new Oracle Exadata Database Machine X3, and the latest Oracle Database innovations. (Read More)

    Webcast: Introduction to Oracle Exadata Database Machine X3. Oracle's next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.

    Webcast: SAP Applications Run Better on Oracle Exadata. Find out why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings.

    Read the article

  • HP D530 Startup Error: 512 - Chassis Fan Not Detected

    - by lyrikles
    I'm using the HP D530 motherboard/CPU, which I installed in a new case with a 600W PSU. There was a problem with the onboard chassis fan connector (3-wire) not supplying sufficient power to the chassis fan, indicated by the fan spinning very slowly, but I never experienced the "512 Error" at boot. Also, the same fan works perfectly when connected directly to the PSU. I disconnected it, since I already have plenty of fans connected directly to the PSU. Since then, on startup, I get the error "512 - Chassis Fan Not Detected" and am asked to "Press F1 to continue". This gets quite annoying since I use this machine remotely (with FreeNAS). What could be causing the onboard fan connector to not supply enough power? And if this can't be corrected, how can I make the BIOS think there's a chassis fan plugged in without actually plugging a fan into the onboard connector? Would it be possible to jumper the pins without damaging the motherboard or PSU? Thanks, Erik

    Read the article

  • MySQL performance problem & failed DIMM

    - by murdoch
    Hi, I have a dedicated MySQL database server which has been having some performance problems recently. Under normal load the server runs fine; then suddenly, out of the blue, performance falls off a cliff. The server isn't using the swap file, and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting company's support, they discovered a failed 2GB DIMM in the server and have scheduled to replace it tomorrow morning. My question is: could a failed DIMM cause the performance problems I am seeing, or is this just coincidence? My worry is that they will replace the RAM tomorrow but the problems will persist and I will still be at a loss for an explanation, so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required, and simply missing 2GB shouldn't be a problem; so if this failed DIMM is causing these performance problems, then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation?

    This is what Dell's omreport program says about the RAM; notice that one DIMM is "Critical":

      Memory Information
      Health : Critical

      Memory Operating Mode
      Fail Over State : Inactive
      Memory Operating Mode Configuration : Optimizer

      Attributes of Memory Array(s)
      Attributes : Location             Memory Array 1 : System Board or Motherboard
      Attributes : Use                  Memory Array 1 : System Memory
      Attributes : Installed Capacity   Memory Array 1 : 12288 MB
      Attributes : Maximum Capacity     Memory Array 1 : 196608 MB
      Attributes : Slots Available      Memory Array 1 : 18
      Attributes : Slots Used           Memory Array 1 : 6
      Attributes : ECC Type             Memory Array 1 : Multibit ECC

      Total of Memory Array(s)
      Attributes : Total Installed Capacity                      Value : 12288 MB
      Attributes : Total Installed Capacity Available to the OS  Value : 12004 MB
      Attributes : Total Maximum Capacity                        Value : 196608 MB

      Details of Memory Array 1
      Index : 0   Status : Ok        Connector Name : DIMM_A1   Type : DDR3-Registered   Size : 2048 MB
      Index : 1   Status : Ok        Connector Name : DIMM_A2   Type : DDR3-Registered   Size : 2048 MB
      Index : 2   Status : Ok        Connector Name : DIMM_A3   Type : DDR3-Registered   Size : 2048 MB
      Index : 3   Status : Critical  Connector Name : DIMM_B1   Type : DDR3-Registered   Size : 2048 MB
      Index : 4   Status : Ok        Connector Name : DIMM_B2   Type : DDR3-Registered   Size : 2048 MB
      Index : 5   Status : Ok        Connector Name : DIMM_B3   Type : DDR3-Registered   Size : 2048 MB

    The command free -m shows the following; the server seems to be using more than 10GB of RAM, which would suggest it is trying to use the failed DIMM:

                         total       used       free     shared    buffers     cached
      Mem:               12004      10766       1238          0        384       4809
      -/+ buffers/cache:            5572       6432
      Swap:               2047          0       2047

    iostat output while the problem is occurring:

      avg-cpu:  %user %nice %system %iowait %steal %idle
                52.82  0.00   11.01    0.00   0.00 36.17
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      47.00  0.00        576.00      0         576
      sda1     0.00   0.00        0.00        0         0
      sda2     1.00   0.00        32.00       0         32
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     46.00  0.00        544.00      0         544

      avg-cpu:  %user %nice %system %iowait %steal %idle
                53.12  0.00    7.81    0.00   0.00 39.06
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      49.00  0.00        592.00      0         592
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     49.00  0.00        592.00      0         592

      avg-cpu:  %user %nice %system %iowait %steal %idle
                56.09  0.00    7.43    0.37   0.00 36.10
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      232.00 0.00        64520.00    0         64520
      sda1     0.00   0.00        0.00        0         0
      sda2     159.00 0.00        63728.00    0         63728
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     73.00  0.00        792.00      0         792

      avg-cpu:  %user %nice %system %iowait %steal %idle
                52.18  0.00    9.24    0.06   0.00 38.51
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      49.00  0.00        600.00      0         600
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     49.00  0.00        600.00      0         600

      avg-cpu:  %user %nice %system %iowait %steal %idle
                54.82  0.00    8.64    0.00   0.00 36.55
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      100.00 0.00        2168.00     0         2168
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     100.00 0.00        2168.00     0         2168

      avg-cpu:  %user %nice %system %iowait %steal %idle
                54.78  0.00    6.75    0.00   0.00 38.48
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      84.00  0.00        896.00      0         896
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     84.00  0.00        896.00      0         896

      avg-cpu:  %user %nice %system %iowait %steal %idle
                54.34  0.00    7.31    0.00   0.00 38.35
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      81.00  0.00        840.00      0         840
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     81.00  0.00        840.00      0         840

      avg-cpu:  %user %nice %system %iowait %steal %idle
                55.18  0.00    5.81    0.44   0.00 38.58
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      317.00 0.00        105632.00   0         105632
      sda1     0.00   0.00        0.00        0         0
      sda2     224.00 0.00        104672.00   0         104672
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     93.00  0.00        960.00      0         960

      avg-cpu:  %user %nice %system %iowait %steal %idle
                55.38  0.00    7.63    0.00   0.00 36.98
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      74.00  0.00        800.00      0         800
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     74.00  0.00        800.00      0         800

      avg-cpu:  %user %nice %system %iowait %steal %idle
                56.43  0.00    7.80    0.00   0.00 35.77
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      72.00  0.00        784.00      0         784
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     72.00  0.00        784.00      0         784

      avg-cpu:  %user %nice %system %iowait %steal %idle
                54.87  0.00    6.49    0.00   0.00 38.64
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      80.20  0.00        855.45      0         864
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     80.20  0.00        855.45      0         864

      avg-cpu:  %user %nice %system %iowait %steal %idle
                57.22  0.00    5.69    0.00   0.00 37.09
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      33.00  0.00        432.00      0         432
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     33.00  0.00        432.00      0         432

      avg-cpu:  %user %nice %system %iowait %steal %idle
                56.03  0.00    7.93    0.00   0.00 36.04
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      41.00  0.00        560.00      0         560
      sda1     0.00   0.00        0.00        0         0
      sda2     2.00   0.00        88.00       0         88
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     39.00  0.00        472.00      0         472

      avg-cpu:  %user %nice %system %iowait %steal %idle
                55.78  0.00    5.13    0.00   0.00 39.09
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      29.00  0.00        392.00      0         392
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     29.00  0.00        392.00      0         392

      avg-cpu:  %user %nice %system %iowait %steal %idle
                53.68  0.00    8.30    0.06   0.00 37.95
      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      78.00  0.00        4280.00     0         4280
      sda1     0.00   0.00        0.00        0         0
      sda2     0.00   0.00        0.00        0         0
      sda3     0.00   0.00        0.00        0         0
      sda4     0.00   0.00        0.00        0         0
      sda5     78.00  0.00        4280.00     0         4280
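
    For reference, ECC errors that the OS actually sees can be checked with the EDAC/mcelog tooling, if it is installed on the distribution (a sketch; these commands are not from the original post):

      edac-util -v                                 # per-DIMM corrected/uncorrected ECC error counts
      grep -i "hardware error" /var/log/mcelog     # machine-check events logged by mcelogd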

    Read the article

  • Credentials can not be delegated - Alfresco Share

    - by leftcase
    I've hit a brick wall configuring Alfresco 4.0.d on Red Hat 6. I'm using Kerberos authentication; it seems to be working normally, and single sign-on is working on the main Alfresco app itself. I've been through the configuration steps to get the Share app working, but try as I may, I keep getting this error in catalina.out each time a browser accesses http://server:8080/share, along with a 'Windows Security' password box:

      WARN [site.servlet.KerberosSessionSetupPrivilegedAction] credentials can not be delegated!

    Here's what I've done so far:

    Using AD Users and Computers, selected the alfrescohttp account and selected 'Trust this user for delegation to any service (Kerberos only)'.

    Copied /opt/alfresco-4.0.d/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml.sample to share-config-custom.xml and edited it like this:

      <config evaluator="string-compare" condition="Kerberos" replace="true">
        <kerberos>
          <password>*****</password>
          <realm>MYDOMAIN.CO.UK</realm>
          <endpoint-spn>HTTP/[email protected]</endpoint-spn>
          <config-entry>ShareHTTP</config-entry>
        </kerberos>
      </config>
      <config evaluator="string-compare" condition="Remote">
        <remote>
          <keystore>
            <path>alfresco/web-extension/alfresco-system.p12</path>
            <type>pkcs12</type>
            <password>alfresco-system</password>
          </keystore>
          <connector>
            <id>alfrescoCookie</id>
            <name>Alfresco Connector</name>
            <description>Connects to an Alfresco instance using cookie-based authentication</description>
            <class>org.springframework.extensions.webscripts.connector.AlfrescoConnector</class>
          </connector>
          <endpoint>
            <id>alfresco</id>
            <name>Alfresco - user access</name>
            <description>Access to Alfresco Repository WebScripts that require user authentication</description>
            <connector-id>alfrescoCookie</connector-id>
            <endpoint-url>http://localhost:8080/alfresco/wcs</endpoint-url>
            <identity>user</identity>
            <external-auth>true</external-auth>
          </endpoint>
        </remote>
      </config>

    Set up the /etc/krb5.conf file like this:

      [logging]
      default = FILE:/var/log/krb5libs.log
      kdc = FILE:/var/log/krb5kdc.log
      admin_server = FILE:/var/log/kadmind.log

      [libdefaults]
      default_realm = MYDOMAIN.CO.UK
      default_tkt_enctypes = rc4-hmac
      default_tgs_enctypes = rc4-hmac
      forwardable = true
      proxiable = true

      [realms]
      MYDOMAIN.CO.UK = {
        kdc = mydc.mydomain.co.uk
        admin_server = mydc.mydomain.co.uk
      }

      [domain_realm]
      .mydc.mydomain.co.uk = MYDOMAIN.CO.UK
      mydc.mydomain.co.uk = MYDOMAIN.CO.UK

    /opt/alfresco-4.0.d/java/jre/lib/security/java.login.config is configured like this:

      Alfresco {
        com.sun.security.auth.module.Krb5LoginModule sufficient;
      };
      AlfrescoCIFS {
        com.sun.security.auth.module.Krb5LoginModule required
        storeKey=true
        useKeyTab=true
        keyTab="/etc/alfrescocifs.keytab"
        principal="cifs/server.mydomain.co.uk";
      };
      AlfrescoHTTP {
        com.sun.security.auth.module.Krb5LoginModule required
        storeKey=true
        useKeyTab=true
        keyTab="/etc/alfrescohttp.keytab"
        principal="HTTP/server.mydomain.co.uk";
      };
      com.sun.net.ssl.client {
        com.sun.security.auth.module.Krb5LoginModule sufficient;
      };
      other {
        com.sun.security.auth.module.Krb5LoginModule sufficient;
      };
      ShareHTTP {
        com.sun.security.auth.module.Krb5LoginModule required
        storeKey=true
        useKeyTab=true
        keyTab="/etc/alfrescohttp.keytab"
        principal="HTTP/server.mydomain.co.uk";
      };

    And finally, the following settings in alfresco-global.properties:

      authentication.chain=kerberos1:kerberos,alfrescoNtlm1:alfrescoNtlm
      kerberos.authentication.realm=MYDOMAIN.CO.UK
      kerberos.authentication.user.configEntryName=Alfresco
      kerberos.authentication.cifs.configEntryName=AlfrescoCIFS
      kerberos.authentication.http.configEntryName=AlfrescoHTTP
      kerberos.authentication.cifs.password=******
      kerberos.authentication.http.password=*****
      kerberos.authentication.defaultAdministratorUserNames=administrator
      ntlm.authentication.sso.enabled=true

    As I say, I've hit a brick wall with this and I'd really appreciate any help you can give me! This question is also posted on the Alfresco forum, but I wondered if any folk here on Server Fault have come across similar implementation challenges?
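
    One sanity check that often helps with this class of error, offered here as a suggestion rather than part of the original post: confirm that each keytab actually authenticates before wiring it into Alfresco.

      kinit -k -t /etc/alfrescohttp.keytab HTTP/server.mydomain.co.uk
      klist    # should show a TGT for the HTTP/ principal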

    Read the article

  • Win 7: move SSD from SATA 1 to SATA 0, drive letter from G: to C:

    - by GaryH
    I got a new SSD, plugged it into the available SATA 1 connector on my notebook, and installed Win7 (Ultimate) on it as drive G:. It is working great. Now I would like to move the SSD to the SATA 0 connector and change the drive letter to C:. The existing 500GB HD, which has another copy of Win7 (Home) on it, I will format and connect to the SATA 1 connector as the G: drive or some other letter. Is this possible? Is there software that will go through the registry, "correct" all of the entries for "G:" for everything installed, and fix it all up? Or am I better off biting the bullet, setting the hardware up the way I want it, and doing a fresh install of everything? Thanx, G

    Read the article

  • Dell R910 with Integrated PERC H700 Adapter

    - by Alex
    I am in the process of designing an architecture based around a single Dell R910 server running Windows Server 2008 Enterprise. I would like the server to have 8 RAID1 pairs of spinning disks, so I intend to implement:

    - Dell R910 server
    - Integrated PERC H700 adapter with 1 SAS expander on each SAS connector (so 8 expanders in total)
    - 7 RAID1 pairs of 143GB 15K HDDs, each pair on one connector using an expander
    - 1 RAID1 pair of 600GB 10K HDDs, paired on the remaining connector using an expander

    My main concern is not to introduce bottlenecks in this architecture, and I have the following questions:

    1. Will the PERC H700 adapter act as a bottleneck for disk access?
    2. Will using SAS expanders for each RAID1 pair cause a bottleneck, or would this be as fast as pairing disks directly attached to the SAS connectors?
    3. Can I mix the disks, as long as the disks in each RAID1 pair are the same? I assume so.
    4. Can anyone recommend any single-to-double SAS expanders that are known to function well with the H700?

    Cheers, Alex

    Read the article

  • Repair snapped USB flash drive

    - by Richard Slater
    I have a USB flash drive that has had the USB connector snapped away from the circuit board. In the past I have had great success with soldering the connector back to the circuit board with 4 solid-core wires. Unfortunately, this particular device shows up as "Unknown Device" in Device Manager and displays 0 mA power usage. Taking a closer look at the circuit board, it appears that the Data+ contact has come away from the PCB, which would explain why it is not recognised. Is there any practicable way of lifting the data from the device?

    Read the article

  • JDBC CLASSPATH Not Working

    - by AeroDroid
    I'm setting up a simple JDBC connection to my working MySQL database on my server. I'm using the Connector/J provided by MySQL. According to their documentation, I'm supposed to create the CLASSPATH variable to point to the directory where mysql-connector-java-5.0.8-bin.jar is located. I used export set CLASSPATH=/path/mysql-connector-java-5.0.8-bin.jar:$CLASSPATH. When I type echo $CLASSPATH to see if it exists, everything seems fine. But then when I open a new terminal and type echo $CLASSPATH, it's no longer there. I think this is the main reason why my Java server won't connect over JDBC: it isn't saving the CLASSPATH variable I set. Does anyone have suggestions or fixes for how to set up JDBC in the first place?
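
    For context, a variable set with export only lives for that shell session. A minimal sketch of making it permanent, assuming bash and the jar path from the question:

      # Append to ~/.bashrc so every new terminal picks it up
      echo 'export CLASSPATH=/path/mysql-connector-java-5.0.8-bin.jar:$CLASSPATH' >> ~/.bashrc
      source ~/.bashrc

    Alternatively, the jar can be passed per invocation with java -cp /path/mysql-connector-java-5.0.8-bin.jar:. MyServer (MyServer is a placeholder class name), which avoids relying on the environment at all.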

    Read the article

  • Is it possible to replace the Logitech G500 wire without rebuying the mouse?

    - by leladax
    It would ideally have to be replaced in whole, from the point where it starts inside the mouse (with a white 4-5 wire connector) to the end (the USB connector that plugs into the computer), or at least for a considerable length, because there is fatigue very near the mouse, and the more I fix it there with soldering, the closer it gets to being unfixable, or to the damage reaching 'inside' the mouse, where fixing it will be hard or impossible. So I wonder if there is a way to get a replacement for the whole cable, or at least for a certain length from the internal connector. I also wonder whether other mouse models use an identical internal connector.

    Read the article

  • Bay Speaker Panel Installation

    - by JordanD
    I purchased a bay speaker panel that has a Molex connector and a sound connector. Do I need to run the sound connector through my tower and somehow out the back into the I/O panel? Or is there supposed to be a place on my motherboard for it to connect to? This is a replacement for normal desktop speakers, to save desk space. Edit: Is there an adapter from the sound cable to the motherboard? If so, what is it called? Thanks

    Read the article

  • Grails SSL on Tomcat

    - by user974459
    I'm implementing a Grails app with SSL, deployed to Tomcat 7.0. I have used the Spring Security plugin for SSL. In Tomcat, I added:

      <Connector port="80" protocol="HTTP/1.1"
                 connectionTimeout="200000000"
                 redirectPort="443" />
      <Connector port="8443" protocol="HTTP/1.1"
                 connectionTimeout="200000000"
                 redirectPort="443" />
      <Connector port="443" maxThreads="200"
                 scheme="https" secure="true" SSLEnabled="true"
                 keystoreFile="${user.home}/.keystore" keystorePass="123456"
                 clientAuth="false" sslProtocol="TLS"/>

    If I type https://localhost everything is OK, but my app doesn't work.
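
    One thing worth checking in this situation, offered as an assumption rather than a confirmed fix: the Spring Security plugin's port mappings have to match a non-default connector layout, roughly along these lines in Config.groovy (the exact config prefix depends on the plugin version):

      // Config.groovy - tell the plugin which ports HTTP/HTTPS actually live on
      grails.plugins.springsecurity.portMapper.httpPort = 80
      grails.plugins.springsecurity.portMapper.httpsPort = 443
      // force all requests onto the secure channel
      grails.plugins.springsecurity.secureChannel.definition = [
          '/**': 'REQUIRES_SECURE_CHANNEL'
      ]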

    Read the article

  • HOWTO: disable JMX in an ActiveMQ network of brokers (Spring, XBean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine:

      Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]

    Another problem is that this exception can still occur even if you don't cause a race condition, even when starting one broker after another while waiting for them to initialize properly in between. If one process is run by root as the first instance and the other by a normal user, the user process somehow tries to register its own JMX connector even though there already is one. Or another exception, which happens when the broker that successfully registered the JMX connector goes down:

      Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused]

    Those exceptions cause the network of brokers to stop working, or to not work at all. The trick was that JMX also had to be disabled in the connection factory. The documentation at http://activemq.apache.org/jmx.html does not say explicitly that this is needed, so I had to struggle for 2 days until I found the solution:

      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:amq="http://activemq.apache.org/schema/core"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                 http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd">

        <!-- Spring JMS template -->
        <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
          <constructor-arg ref="connectionFactory" />
        </bean>

        <!-- Caching, so that the JMS template is actually usable performance-wise -->
        <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
          <constructor-arg ref="amqConnectionFactory" />
          <property name="exceptionListener" ref="jmsExceptionListener" />
          <property name="sessionCacheSize" value="1" />
        </bean>

        <!-- Each client connects to its own broker; the brokers are networked with one another.
             Only if JMX is disabled here as well does it actually stay disabled... -->
        <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" />

        <!-- The brokers pick their own ports and are connected to each other, forming a grid.
             This is somewhat slower, but more fault-tolerant.
             See http://activemq.apache.org/networks-of-brokers.html -->
        <amq:broker useJmx="false" persistent="false">
          <!-- Needed to disable JMX for good -->
          <amq:managementContext>
            <amq:managementContext connectorHost="localhost" createConnector="false" />
          </amq:managementContext>
          <!-- Now the usual network-of-brokers configuration -->
          <amq:networkConnectors>
            <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" />
          </amq:networkConnectors>
          <amq:persistenceAdapter>
            <amq:memoryPersistenceAdapter />
          </amq:persistenceAdapter>
          <amq:transportConnectors>
            <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" />
          </amq:transportConnectors>
        </amq:broker>

      </beans>

    With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM. That flag somehow also didn't work for me, because the connection factory started the JMX connector anyway.

    Read the article

  • Visual Studio 2010 Hosting :: Connect to MySQL Database from Visual Studio 2010

    - by mbridge
    So, in order to connect to a MySQL database from VS2010 you need to:

    1. Download the latest version of the MySQL Connector/NET from http://www.mysql.com/downloads/connector/net/
    2. Install the connector (if you have an older version, you need to remove it first from Control Panel -> Add/Remove Programs)
    3. Open Visual Studio 2010
    4. Open the Server Explorer window (View -> Server Explorer)
    5. Use the Connect to Database button
    6. In the Choose Data Source window, select MySQL Database and press Continue
    7. In the Add Connection window:
       - set the server name: 127.0.0.1 or localhost for a MySQL server running on the local machine, or an IP address for a remote server
       - set the username and password
       - if the above data is correct and the connection can be made, you can select the database

    If you want to connect to a MySQL database from a C# application (Windows or Web), you can use the following sequence:

      // define the connection and initialize it
      MySql.Data.MySqlClient.MySqlConnection msqlConnection =
          new MySql.Data.MySqlClient.MySqlConnection(
              "server=localhost;user id=UserName;Password=UserPassword;database=DatabaseName;persist security info=False");

      // define the command object
      MySql.Data.MySqlClient.MySqlCommand msqlCommand = new MySql.Data.MySqlClient.MySqlCommand();

      // set the connection used by the command object
      msqlCommand.Connection = msqlConnection;

      // define the command text
      msqlCommand.CommandText = "SELECT * FROM TestTable;";

      try
      {
          // open the connection
          msqlConnection.Open();

          // use a DataReader to process each record
          MySql.Data.MySqlClient.MySqlDataReader msqlReader = msqlCommand.ExecuteReader();
          while (msqlReader.Read())
          {
              // do something with each record
          }
      }
      catch (Exception er)
      {
          // do something with the exception
      }
      finally
      {
          // always close the connection
          msqlConnection.Close();
      }

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on a Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data.

    Data, data, data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on has a predominant role in our lives. More and more technologies are used to analyze/track our behavior and try to detect patterns, to propose "the best/right user experience" to us, from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users.

    Some of these "data feeds" are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar.

    Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

    1. Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
    2. Get the Apache Hadoop distribution from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server with the command:

         java -version
         java version "1.7.0_40"
         Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
         Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    6. Modify your .bash_profile (also see my sample .bash_profile):

         export HADOOP_HOME=/u01/hadoop-1.2.1
         export PATH=$PATH:$HADOOP_HOME/bin
         export HIVE_HOME=/u01/hive-0.11.0
         export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    7. Set up ssh trust for the Hadoop process; this is a mandatory step. In our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect and test the ssh setup to your localhost, as shown below.

    We will run a "pseudo Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
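
    The ssh-trust step was shown as screenshots in the original; the usual commands for a single-node setup are roughly these (given as an assumption, not taken from the article):

      ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa        # generate a passphrase-less key pair
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
      ssh localhost                                   # should now log in without a password
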
    We need to "fine-tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the Hadoop binaries are referenced correctly from the command line by executing:

      hadoop -version

    As Hadoop manages our "clustered" HDFS file system, we have to create "the mount point" and format it. The mount point is declared in core-site.xml; the layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce = /mapred/...), while HDFS uses the /dfs/... layout structure. Format the HDFS file system and start the Java components for HDFS, as sketched below. As an additional check, you can use the Hadoop browser GUIs to verify the content of your HDFS configuration.
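
    The original configuration listings were screenshots; a minimal core-site.xml plus the standard format/start commands for Hadoop 1.x would look roughly like this (the NameNode port and data path are inferred from URIs that appear later in the article):

      <!-- conf/core-site.xml (sketch) -->
      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://localhost:19000</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/u01/hadoop-1.2.1/data</value>
        </property>
      </configuration>

      # format the HDFS file system, then start the HDFS daemons
      hadoop namenode -format
      start-dfs.sh
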
    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You can create/use a Hive DB, but in our case we will make a simple integration of "raw data" through the creation of an external table in a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle DB).

    Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :).

    Download the Big Data Connector from the OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:

      [oracle@dg1 bin]$ ./hdfs_stream
      Usage: hdfs_stream locationFile
      [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

      hadoop fs -mkdir bgtest_data
      hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt

      [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
      Found 1 items
      -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

      [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
      Found 1 items
      -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    You can also check the content of HDFS with the browser UI. Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours accordingly):

      #!/bin/bash
      export ORAENV_ASK=NO
      export ORACLE_SID=dg1
      . oraenv
      sqlplus /nolog <<EOF
      CONNECT / AS sysdba;
      CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
      CREATE USER BGUSER IDENTIFIED BY oracle;
      GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
      GRANT EXECUTE ON sys.utl_file TO BGUSER;
      GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
      CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
      GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
      CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
      GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
      EOF

    Put the following in a file named t3.sh and make it executable:

      hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    Then test the creation of the external table with it:

      [oracle@dg1 samples]$ ./t3.sh
      ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
      Oracle SQL Connector for HDFS Release 2.2.0 - Production
      Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
      Enter Database Password:
      The create table command was not executed.
      The following table would be created.
      CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
      (
        "C1" VARCHAR2(4000),
        "C2" VARCHAR2(4000),
        "C3" VARCHAR2(4000),
        "C4" VARCHAR2(4000),
        "C5" VARCHAR2(4000),
        "C6" VARCHAR2(4000),
        "C7" VARCHAR2(4000)
      )
      ORGANIZATION EXTERNAL
      (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY "BGT_DATA_DIR"
        ACCESS PARAMETERS
        (
          RECORDS DELIMITED BY 0X'0A'
          CHARACTERSET AL32UTF8
          STRING SIZES ARE IN CHARACTERS
          PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
          FIELDS TERMINATED BY 0X'2C'
          MISSING FIELD VALUES ARE NULL
          (
            "C1" CHAR(4000),
            "C2" CHAR(4000),
            "C3" CHAR(4000),
            "C4" CHAR(4000),
            "C5" CHAR(4000),
            "C6" CHAR(4000),
            "C7" CHAR(4000)
          )
        )
        LOCATION
        (
          'osch-20131022081035-74-1'
        )
      ) PARALLEL REJECT LIMIT UNLIMITED;
      The following location files would be created.
      osch-20131022081035-74-1 contains 1 URI, 54103 bytes
         54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

      The create table command succeeded.
      CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
      (
        "C1" VARCHAR2(4000),
        "C2" VARCHAR2(4000),
        "C3" VARCHAR2(4000),
        "C4" VARCHAR2(4000),
        "C5" VARCHAR2(4000),
        "C6" VARCHAR2(4000),
        "C7" VARCHAR2(4000)
      )
      ORGANIZATION EXTERNAL
      (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY "BGT_DATA_DIR"
        ACCESS PARAMETERS
        (
          RECORDS DELIMITED BY 0X'0A'
          CHARACTERSET AL32UTF8
          STRING SIZES ARE IN CHARACTERS
          PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
          FIELDS TERMINATED BY 0X'2C'
          MISSING FIELD VALUES ARE NULL
          (
            "C1" CHAR(4000),
            "C2" CHAR(4000),
            "C3" CHAR(4000),
            "C4" CHAR(4000),
            "C5" CHAR(4000),
            "C6" CHAR(4000),
            "C7" CHAR(4000)
          )
        )
        LOCATION
        (
          'osch-20131022081719-3239-1'
        )
      ) PARALLEL REJECT LIMIT UNLIMITED;
      The following location files were created.
      osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
         54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

      SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

        COUNT(*)
      ----------
            1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data, and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

    Oracle University presents a complete curriculum on all the Oracle related technologies:

    NoSQL:
    - Introduction to Oracle NoSQL Database
    - Using Oracle NoSQL Database

    Big Data:
    - Introduction to Big Data
    - Oracle Big Data Essentials
    - Oracle Big Data Overview

    Oracle Data Integrator:
    - Oracle Data Integrator 12c: New Features
    - Oracle Data Integrator 11g: Integration and Administration
    - Oracle Data Integrator: Administration and Development
    - Oracle Data Integrator 11g: Advanced Integration and Development

    Oracle Coherence 12c:
    - Oracle Coherence 12c: New Features
    - Oracle Coherence 12c: Share and Manage Data in Clusters

    Oracle GoldenGate 11g:
    - Oracle GoldenGate 11g: Fundamentals for Oracle
    - Oracle GoldenGate 11g: Fundamentals for SQL Server
    - Oracle GoldenGate 11g: Fundamentals for DB2
    - Oracle GoldenGate 11g: Fundamentals for Teradata
    - Oracle GoldenGate 11g: Fundamentals for HP NonStop
    - Oracle GoldenGate 11g Management Pack: Overview
    - Oracle GoldenGate 11g: Troubleshooting and Tuning
    - Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other resources:

    - Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies.
    - "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He worked in the banking sector, AT&T, and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualization/Unified Communications Suite throughout the EMEA region.

    Read the article

  • How to dual boot Ubuntu 12.10 and Windows XP SP3 on a Dell Dimension 8250 desktop using 2 hard drives

    - by user106055
    I'd like instructions for dual-booting Ubuntu 12.10 and Windows XP (SP3) on my Dell Dimension 8250 desktop (this is old and has 1.5 GB of RAM, which is the maximum). I will be using 2 hard drives: Windows XP is already on a 120 GB drive, and Ubuntu 12.10 will go on a separate 80 GB hard drive. Both drives are IDE, using an 80-conductor cable where the 40-pin blue connector attaches to the motherboard. The middle connector is gray and is "normally" used for the slave (device 1), and the black connector at the very end of the cable is meant for the master drive (device 0), or for a single drive if only one is used. First, I do not wish the XP drive to have its boot modified by Ubuntu in any way. It should remain untouched... virgin. Let me know where the XP drive and the Ubuntu drive should be connected based on the cable I've mentioned above, as well as the jumper settings for both during the whole process. I'm just guessing, but should I remove the XP drive and put the empty Ubuntu drive in its place to install Ubuntu? By the way, I have already made the DVD ISO disk. For your information, the BIOS for this machine is version A03. When I tap F12 to get to the boot menu, I have the following choices:

    1. Normal (this will take me to a black screen with white type giving me the choice to boot to XP or to my external USB backup recovery drive)
    2. Diskette Drive
    3. Hard-Disk Drive C:
    4. IDE CD-ROM Drive (note that if the CD drive is empty, it will then go to the DVD drive)
    5. System Setup
    6. IDE Drive Diagnostics
    7. Boot to Utility Partition (this is Dell's various testing utilities)

    Thank you in advance for your help. Guy

    Read the article

  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex, byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just like Python, it uses whitespace (or tabs) for indentation. Not only that, but I want to implement multi-level breaks: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only out of the inner loop but out of the outer loop as well (hence the number 2 after break), and so on. Example:

      while 1
          while 1
              break 2
          end
      end
      # after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (in C, for example, I would just use the '}' character), I was wondering if this method would be the best: I would define a global variable, say "int tabIndex", in my yacc file, which I would access in my lex file using extern. Then every time I find a tab character in my lex file, I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the amount typed after it from the tabIndex variable, and if I reach EOF after compiling with tabIndex != 0, I would output a compilation error. Now the problem is: what's the best way to see that the indentation got reduced? Should I read \b (backspace) characters in my lex file and then decrement the tabIndex variable (when the user doesn't use break)? Is there another method to achieve this? Also, just another small question: I want every executable to have its starting point in a function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python, it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
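
    For comparison (not from the original question): the usual way to detect reduced indentation in a Python-style lexer is to keep a stack of indent widths, measure the leading whitespace of each new line, and emit synthetic INDENT/DEDENT tokens instead of counting tabs in a global. A minimal C sketch of that idea:

      /* Indent stack: indent_stack[0] = 0 represents the top level
         (static arrays are zero-initialized in C). */
      static int indent_stack[64];
      static int top = 0;

      /* Call at the start of each line with that line's measured indent width.
         Returns +1 for one INDENT, 0 for no change, -n for n DEDENT tokens. */
      int classify_indent(int width)
      {
          if (width > indent_stack[top]) {
              indent_stack[++top] = width;   /* deeper: emit one INDENT */
              return 1;
          }
          int dedents = 0;
          while (top > 0 && width < indent_stack[top]) {
              top--;                         /* shallower: one DEDENT per level popped */
              dedents++;
          }
          return -dedents;
      }

    With DEDENT tokens in the grammar, "break n" becomes an ordinary statement that the parser can check against the current loop-nesting depth, rather than something tracked through a global tab counter.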

    Read the article

  • Eclipse on Windows doesn't start

    - by sap
    I usually do all my Java development on Linux; using the Fedora package manager, setting up a development environment is easy and fast. Now I have to start using Windows, but I have never used it for Java development and I'm having a few difficulties getting it set up. I downloaded and installed the Java 6 JDK (just the Standard Edition, not EE). Next I downloaded the Eclipse Classic package, which doesn't have an installer; you just unzip it and run it. I had to add the Java bin directory to the PATH variable, which I did. But when I start eclipse.exe I get this: http://img02.imagefra.me/img/img02/1/12/12/f_12c33ivd2m_c79c09f.jpg I already made a new environment variable called CLASSPATH and added the "d:/java sdk/lib" directory to it, but it's the same thing. Am I missing something? Thanks.

    Update: So I wrote the path to java.exe in the eclipse.ini file (linking to jvm.dll didn't work), and now it just opens a console window for a few seconds and then closes (it doesn't output anything). Also, launching it like this: java -jar plugins/org.eclipse.equinox.launcher_1.0.0.v20070208a.jar makes the VM work for about 1-2 seconds and then it returns, with no output.

    Update 2: I didn't know it was writing a log file; I found it and read it, and it said I was using 32-bit SWT libraries on a 64-bit VM, so I just downloaded a 64-bit Eclipse build and it worked. I still had to use the .ini trick to say where the JVM is installed. Thanks a lot for the help.
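
    For anyone hitting the same thing, the eclipse.ini "trick" mentioned above is the -vm entry, which must appear before -vmargs with the path on its own line (the JDK path here is an example, not from the post):

      -vm
      C:\Program Files\Java\jdk1.6.0\bin\javaw.exe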

    Read the article

  • Need simple advice for a graph problem

    - by sap
    Hi there, a colleague of mine proposed an exercise to me from an online judge website, which is basically a graph problem about an evacuation plan for a small town. I don't need the answer (nor do I want it); I just need advice on the best approach to solving it, since I'm kind of new to this kind of problem. The problem consists of town buildings with workers, and fallout shelters in case of a nuclear attack. I have to build an algorithm that will assign the workers of each building to one or more fallout shelters, but in a way that some shelters won't become too overcrowded while others remain almost empty (otherwise I would just make the workers go to the nearest one). The problem is here: http://acm.timus.ru/problem.aspx?space=1&num=1237 In case it's offline, here's the Google-cached version of it: http://webcache.googleusercontent.com/search?q=cache:t2EPCzezs7AJ:acm.timus.ru/problem.aspx%3Fspace%3D1%26num%3D1237+vladimir+kotov+evacuation+problem&cd=1&hl=pt-PT&ct=clnk&gl=pt What I've done so far is, for each building, get the nearest shelter and move from that building a number of workers equal to the shelter's capacity, then move on to the next building. But sometimes the number of workers is greater than the shelter capacity; in that case, after I iterate through every building, I just iterate again, applying the same algorithm until every building has 0 workers in it. The problem is that this is hardly the best way to solve it. Any tip is welcome; please don't feel like I'm asking for the answer, I just want a nudge in the right direction. Thanks in advance.

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >