Search Results

Search found 57672 results on 2307 pages for 'caching application block'.


  • Easy solution to monitoring & blocking connections to non-malicious services, IPs, and tracking companies

    - by binarybunny
    Our family lives in the middle of nowhere, so the only high-speed internet available is Verizon's 3G mobile broadband. We have the highest package available, yet we continually go over the 10GB limit and get charged $10 for every 1GB of overage. We run a business from home, so stopping when we hit the limit is not an option. I've found that the majority of connections are to Google, Microsoft, Akamai, Facebook, and other web service companies (mainly Google). I know these are harmless connections, but when it costs money for them to monitor our web activity, it becomes a serious problem. Here are some things I've done, but I'm sure there's something else that could help before I resort to blocking a huge set of IP ranges:

    - stopped using Windows (on my machine)
    - use the MVPS hosts file on all computers
    - use Firefox on all computers (with the "don't track me" option)
    - ad block plugin in all browsers
    - blocking Google updates
    - blocking Windows updates
    - blocking images in browsers (when possible)
    - use Comodo (paranoia-level style of blocking)
    - virus-free computers with ESET NOD32
    - bought a router and installed DD-WRT in an attempt to block connections more diligently (and throttle bandwidth if it comes to that)

    Anything I'm missing? I know Google Analytics is on almost all websites, as well as FB Like buttons, but I would like to be able to stop these connections without blocking the use of Google services like Gmail, etc. Any ideas?
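
    Since DD-WRT is already on the router and uses dnsmasq as its resolver, one low-effort approach is to null-route the tracking domains at the DNS layer so every device on the LAN is covered at once. A minimal sketch, assuming the stock dnsmasq on DD-WRT (the domain list is illustrative, not exhaustive):

        # Additional dnsmasq options (Services -> DNSMasq on DD-WRT).
        # Resolve tracking hosts to 0.0.0.0 so no connection is ever attempted;
        # gmail.com etc. are untouched because only these domains (and their
        # subdomains) match.
        address=/google-analytics.com/0.0.0.0
        address=/doubleclick.net/0.0.0.0
        address=/connect.facebook.net/0.0.0.0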

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas about what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should make sure is set correctly? What tools should I use to diagnose the hardware failure? Would it be possible to diagnose the SSD failure without overwriting data?
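
    A non-destructive starting point on the hardware side is the drive's own SMART data plus a read-only filesystem check; a sketch, assuming the smartmontools package is installed:

        sudo smartctl -a /dev/sda          # SMART attributes and the drive's error log
        sudo smartctl -t long /dev/sda     # kick off a non-destructive offline self-test
        # from a live CD, with the filesystem unmounted, a no-changes fsck:
        sudo e2fsck -n /dev/sda2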

    Read the article

  • Datacenter IP Addressing and DNS Management

    - by user65248
    Hello everyone. We are setting up a small datacenter: about 300 amps of power and a maximum of 50 racks. I mention these numbers so you can imagine the size and requirements. I have studied networking, mostly Microsoft and Windows-based systems, but I can't work out how IP addressing and DNS management/configuration work in a datacenter. Unfortunately I have to set up everything by myself, though we will have some staff to do parts of the job. Now my questions.

    Datacenter IP addressing:

    1. Suppose we have got a block of 200 IP addresses from our ISP. How can I manage this block of IP addresses? Is there any software out there to simplify this?
    2. I heard that using a DHCP server in a datacenter is not recommended. If that's the case, what would you say about the MS DHCP server, considering of course that we need backup servers in case of failure?
    3. How can I assign a block of IPs to a specific rack? I know it differs between software and management tools, but I'm asking how it is normally done.
    4. IP addresses are exposed to the whole network. What if a customer tries to use an IP address that is not assigned to their server or rack? How can I prevent this, or how can I track IP usage?

    DNS management:

    I'm going to set up at least two servers as our DNS servers. I know nothing about datacenter DNS systems, but I have configured DNS servers in normal networks and also for webservers. Now I want to know:

    1. What exactly needs to be done for DNS in a datacenter that is not done for normal networks?
    2. How can I configure PTR records? Why can't I configure PTR records on my webserver-side DNS server, so that it has to be done on the datacenter DNS server? I mean, what is different about DC DNS servers that allows us to do so? I know the question is very simple, but I'm confused.
    3. Is there any software out there that handles the whole thing, i.e. automatically adds records to the DNS and also manages IP addresses?

    Thanks in advance
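
    On the PTR question: reverse lookups work because the ISP (or regional registry) delegates the matching in-addr.arpa zone to whoever holds the address block, which is why the records have to live on the datacenter's DNS servers rather than on an arbitrary webserver-side DNS. A minimal sketch of such a reverse zone, using the documentation prefix 203.0.113.0/24 and hypothetical names:

        ; zone file for 113.0.203.in-addr.arpa (delegated to you by the ISP)
        $TTL 3600
        @    IN  SOA  ns1.example-dc.net. hostmaster.example-dc.net. (
                      2012010101 3600 900 1209600 3600 )
             IN  NS   ns1.example-dc.net.
             IN  NS   ns2.example-dc.net.
        10   IN  PTR  customer10.example-dc.net.
        11   IN  PTR  customer11.example-dc.net.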

    Read the article

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using Dante for testing purposes. However, I can't even get it to work with a web browser, after looking at several tutorials on how to do that. I've tried in both IE and Firefox; in both cases, using "Manual proxy configuration", I leave everything blank except for SOCKS host, and then put in the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page" and don't see anything in danted, even running in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in the danted debug output, so I know that much is working. Here's my config:

        logoutput: stdout /var/log/danted/danted.log

        internal: eth0 port = 1080
        external: eth0

        method: username none #rfc931

        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody

        connecttimeout: 30   # on a lan, this should be enough if method is "none".

        client pass {
            from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }
        client pass {
            from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }
        client block {
            from: 0.0.0.0/0 to: 0.0.0.0/0
            log: connect error
        }
        block {
            from: 0.0.0.0/0 to: 127.0.0.0/8
            log: connect error
        }
        pass {
            from: 10.0.0.0/8 to: 0.0.0.0/0
            protocol: tcp udp
        }
        pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
            protocol: tcp udp
        }
        block {
            from: 0.0.0.0/0 to: 0.0.0.0/0
            log: connect error
        }

    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late 90's.
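
    Before retrying the browsers, it can help to test the SOCKS layer with a client that reports errors verbosely; a sketch using curl from another LAN machine:

        curl -v --socks5 10.0.0.40:1080 http://example.com/
        # if plain --socks5 works but the browser still fails, try letting the
        # proxy do the DNS lookups (browsers usually resolve remotely too):
        curl -v --socks5-hostname 10.0.0.40:1080 http://example.com/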

    Read the article

  • OpenVPN IPV6 Tunnel Radvd

    - by Arenstar
    Hello. I have an interesting question regarding IPv6 + OpenVPN. My version is OpenVPN 2.1.1. I have been given a native /64 IPv6 network (for this example, 2001:acb:132:acb::/64). The plan is to route this block through OpenVPN and into an office (for testing purposes). So, to explain: I have a CentOS box as the first Linux "router" in a datacenter, and an Ubuntu box as the second Linux "router" in the office. I have created a simple point-to-point tunnel using tun (based off an IPv4 address to start the tunnel).

    I have assigned to CentOS:

        /sbin/ip addr add fed1::1/128 dev eth0
        /sbin/ip addr add fed2::2/128 dev tun0
        /sbin/ip route add 2001:acb:132:acb::/64 dev tun0   ## ipv6 block down the tunnel
        /sbin/ip route add ::/0 dev eth0                    ## default out to gateway

    I have assigned to Ubuntu:

        /sbin/ip addr add fed1::3/128 dev tun0
        /sbin/ip addr add fed1::4/128 dev eth0
        /sbin/ip route add 2001:acb:132:acb::/64 dev eth0   ## ipv6 block down to eth0
        /sbin/ip route add ::/0 dev tun0                    ## default up the tunnel

    I have also included on both servers:

        sysctl -w net.inet6.ip6.forwarding=1

    Looks good... right??? Wrong. :( I am not able to ping fed1::1 from fed1::4 (Ubuntu), though I can ping :4, :3, :2. However, I can ping fed1::1 and fed1::2 from :3?????? (very strange). I am able to access the internet from any IPv6 interface on the CentOS box, but clearly not from the Ubuntu box. Further, I will eventually run radvd on the Ubuntu box's eth0 and autoconf the network with IPv6 addresses. Anyone with some advice / tips to help me out? Cheers
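
    One thing worth double-checking in the setup above: net.inet6.ip6.forwarding is the BSD spelling of that sysctl and does not exist on Linux, so on both the CentOS and Ubuntu boxes the setting that actually has to be enabled would be:

        sysctl -w net.ipv6.conf.all.forwarding=1
        # confirm it stuck:
        sysctl net.ipv6.conf.all.forwarding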

    Read the article

  • Windows Firewall Software to Filter Transit Traffic

    - by soonts
    I need to test my networking code for the Nintendo Wii under conditions where some specific Internet server is not available. The Wii is connected to my PC with a crossover ethernet cable. The PC has 2 NICs. The PC is connected to a hardware router with an ethernet cable. The hardware router serves as NAT and has an internet connection on its uplink. I set the Wii to be in the same LAN as the PC by using a Windows XP network bridge. I can observe the Wii network traffic using e.g. the Wireshark sniffer.

    Is there a software firewall that can selectively filter out transit traffic? (e.g. block outgoing TCP connections to 123.45.67.89 on port 443) I tried Outpost Pro 2009 and Comodo. The Outpost firewall blocks all transit traffic with its implicit "block transit packet" rule. If the transit traffic is explicitly allowed by creating the system-wide low-level rule, then it's allowed completely and no other filter can selectively block it. The Comodo firewall only processes rules when the packet has localhost's IP as either source or destination, allowing the rest of the traffic. Any ideas? Thanks in advance!

    P.S. The platform is Windows XP 32-bit; no other OSes are allowed. Windows ICS (Internet Connection Sharing) doesn't work since the Wii is unable to connect; besides, I don't like the idea of adding one more level of NAT.

    Read the article

  • Bypassing Squid on FreeBSD with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka goodguys) to bypass the proxy, so that torrents are not logged. Also, I am not sure about "transparent": it means that I don't have to configure proxy settings on the client side, right? I have tried a redirect exception:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck. :( Here is a quick look at my conf files.

    /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"

        table <goodguys> const { "192.168.1.219", "192.168.1.233" }

        set block-policy drop
        set skip on lo0

        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440

        block in on $ext_if
        pass out on $ext_if keep state

        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
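
    One thing the pf.conf above appears to be missing is the rdr rule itself; without it there is nothing for "no rdr" to make an exception to. A minimal sketch of the usual pair, noting that in FreeBSD 9's pf syntax translation rules must come before the filter rules and the first matching translation rule wins:

        # exception first, then the redirect into squid (which, per squid.conf
        # above, listens on 192.168.1.1:8080):
        no rdr on $int_if inet proto tcp from <goodguys> to any port www
        rdr on $int_if inet proto tcp from $localnet to any port www -> 192.168.1.1 port 8080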

    Read the article

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites, e.g. blocking social networking on the LAN. It also uses iptables. I am able to do a lot of things with squid & iptables, but a few things seem difficult to achieve.

    1. If I block facebook through their http url, people can still access https://www.facebook.com because squid doesn't handle https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing: using iptables, drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site (a sketch follows below). However, is there a better solution for this?

    2. As the number of blocked urls increases in squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list which is free?

    3. Blocking Yahoo Messenger, GTalk, etc. There are so many ports on which these instant messenger programs work; you need to drop lots of outgoing ports in iptables. However, new ports get added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of GTalk, etc.

    4. Blocking P2P. I haven't been able to figure out how to do this so far.
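
    For point 1, the forced-proxy setup is usually a single rule on the gateway; a sketch, assuming eth1 is the LAN-facing interface:

        # drop HTTPS that tries to go around the proxy; this only matches
        # forwarded traffic, while squid's own outbound 443 connections (made
        # on behalf of proxy-configured browsers) go through OUTPUT instead:
        iptables -A FORWARD -i eth1 -p tcp --dport 443 -j DROP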

    Read the article

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is, every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld?

    I've been able to resolve this, but I still don't understand why. In my original configs, I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
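
    When debugging this kind of mismatch, it helps to dump Apache's parsed vhost map, which shows exactly which block each name resolves to and which block is the default for each address:port:

        apache2ctl -S      # on Debian/Ubuntu; plain "apachectl -S" elsewhere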

    Read the article

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:

        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth    sufficient                      pam_fprint.so
        auth    [success=1 default=ignore]      pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth    requisite                       pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                        pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                        pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth sufficient pam_fprint.so
        #auth required pam_unix.so nullok_secure
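
    Since sudo and the permission dialogues work but the GDM greeter does not, one cheap check is whether the gdm PAM stack actually pulls in this file; a sketch:

        grep -n "common-auth" /etc/pam.d/gdm
        # Ubuntu's gdm stack normally contains the line:  @include common-auth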

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing?

    Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."):

        Consider an example with the following specifications provided for a disk:
        - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
        - Disk rotation speed of 15,000 revolutions per minute or 250 revolutions per
          second, from which rotational latency (L) can be determined, which is one-half
          of the time taken for a full rotation, or L = (0.5/250 rps expressed in ms).
        - 40 MB/s internal data transfer rate, from which the internal transfer time (X)
          is derived based on the block size of the I/O; for example, an I/O with a block
          size of 32 KB; therefore X = 32 KB/40 MB.
        Consequently, the time taken by the I/O controller to serve an I/O of block size
        32 KB is (TS) = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum
        number of I/Os serviced per second or IOPS is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
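
    The book's formula is easy to replay with different numbers; a throwaway sketch (values from the excerpt, times in ms):

        awk 'BEGIN {
            T  = 5                # avg seek time, ms
            L  = 0.5/250 * 1000   # half a rotation at 250 rps, in ms  -> 2 ms
            X  = 32/40960 * 1000  # 32 KB at 40 MB/s (40960 KB/s), ms  -> ~0.78 ms
            TS = T + L + X
            printf "TS = %.1f ms, IOPS = %d\n", TS, 1000/TS
        }'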

    Read the article

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense. I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs and not in a contiguous block. Due to vastly improved connectivity (fibre) at home, I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do, though, is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to make use of the colo ones for now:

    - Ease of transition: lots of domains, DNS, hard-coded IPs in programs, etc.
    - Connectivity fallback: if my primary line goes down and switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs that are only presented on the primary WAN interface.

    What I'd like to achieve is that my home virtualisation server (Proxmox/Debian-based) "dials in" to the colo server (also Proxmox/Debian) via VPN or similar, and gets to make use of the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established on the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty, I just don't really know where to start?
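
    In outline this is exactly what the iproute2 tooling is for; a minimal sketch, assuming an OpenVPN tun link with tunnel endpoints 10.8.0.1 (colo) and 10.8.0.2 (home), and 198.51.100.7 standing in for one of the colo-routed public IPs:

        # on the colo box (with net.ipv4.ip_forward=1): push the /32 down the tunnel
        ip route add 198.51.100.7/32 via 10.8.0.2 dev tun0

        # on the home server: own the address and answer for it
        ip addr add 198.51.100.7/32 dev lo

        # on the home server: make replies from that address go back through the
        # tunnel regardless of which WAN uplink is active (policy routing)
        ip rule add from 198.51.100.7 table 100
        ip route add default via 10.8.0.1 dev tun0 table 100

    Repeated per address, this works even though the IPs are non-contiguous, since each one is routed as its own /32.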

    Read the article

  • IIS Request Filtering Rule for User Agent

    - by alexp
    I'm trying to block requests from a certain bot. I've added a request filtering rule, but I know it is still hitting the site because it shows up in Google Analytics. Here is the filtering rule I added:

        <security>
          <requestFiltering>
            <filteringRules>
              <filteringRule name="Block GomezAgent" scanUrl="false" scanQueryString="false">
                <scanHeaders>
                  <add requestHeader="User-Agent" />
                </scanHeaders>
                <denyStrings>
                  <add string="GomezAgent+3.0" />
                </denyStrings>
              </filteringRule>
            </filteringRules>
          </requestFiltering>
        </security>

    This is an example of the user agent I'm trying to block:

        Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:13.0;+GomezAgent+3.0)+Gecko/20100101+Firefox/13.0.1

    In some ways it seems to work. If I use Chrome to spoof my user agent, I get a 404, as expected. But the bot traffic is still showing up in my analytics. What am I missing?
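
    One detail worth checking: the '+' characters above are how IIS W3C logs encode spaces, so the header on the wire is likely "GomezAgent 3.0" with a real space, which the deny string "GomezAgent+3.0" would never match (spoofing with the literal '+' in Chrome would match, which fits the observed 404). A sketch covering both forms:

        <denyStrings>
          <add string="GomezAgent 3.0" />  <!-- raw header form (space) -->
          <add string="GomezAgent+3.0" />  <!-- logged/encoded form -->
        </denyStrings>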

    Read the article

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives - md - dm - lvm), how do the schedulers, readahead settings, and other disk settings interact?

    Imagine you have several disks (/dev/sda - /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers with their own settings.

    For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read?

    The same kind of question holds for schedulers. Do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers?

    In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out:

    - Linux Disk Scheduler Benchmarking from Google
    - blktrace - generate traces of the i/o traffic on block devices
    - Relevant Linux kernel mailing list thread
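
    For poking at the current values per layer, the blockdev and sysfs views line up like this (a sketch; device names from the question):

        blockdev --getra /dev/sda              # readahead, in 512-byte sectors
        blockdev --getra /dev/md0
        cat /sys/block/sda/queue/scheduler     # e.g. "noop [cfq] deadline"
        cat /sys/block/md0/queue/scheduler     # md typically exposes none: requests
                                               # are remapped, not queued at this layer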

    Read the article

  • Visual Studio App.config XML Transformation

    - by João Angelo
    Visual Studio 2010 introduced a much-anticipated feature, web configuration transformations. This feature allows you to configure a web application project to transform the web.config file during deployment based on the current build configuration (Debug, Release, etc). If you haven't already tried it, there is a nice step-by-step introduction post to XML transformations on the Visual Web Developer Team Blog, and for a quick reference on the supported syntax you have this MSDN entry.

    Unfortunately there is some bad news: this new feature is specific to web application projects, since it resides in the Web Publishing Pipeline (WPP), and therefore is not officially supported in other project types such as Windows applications. The keyword here is officially, because Vishal Joshi has a nice blog post on how to extend its support to app.config transformations. However, the proposed workaround requires that the build action for the app.config file be changed to Content instead of the default None. Also, from the comments to the said post, it seems that the workaround will not work for a ClickOnce deployment.

    Working around this, I tried to remove the build-action change requirement and at the same time add ClickOnce support. This effort resulted in a single MSBuild project file (AppConfig.Transformation.targets), available for download from GitHub. It integrates itself into the build process, so in order to add app.config transformation support to an existing Windows application project you just need to import this targets file after all the other import directives that already exist in the *.csproj file.

    Before - without app.config transformation support:

        ...
        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Target Name="BeforeBuild">
        </Target>
        <Target Name="AfterBuild">
        </Target>
        </Project>

    After - with app.config transformation support:

        ...
        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="C:\MyExtensions\AppConfig.Transformation.targets" />
        <Target Name="BeforeBuild">
        </Target>
        <Target Name="AfterBuild">
        </Target>
        </Project>

    As a final disclaimer, testing time was limited, so let me know about any problems you find. The MSBuild project invokes the mage tool, so the Framework SDK must be installed.

    Update: I finally had some spare time to check the problem reported by Geoff Smith, and I believe it is solved. The Publish command inside Visual Studio triggers a build workflow different from the MSBuild command line, and this was causing problems. I posted a new version on GitHub that should now support ClickOnce deployment with app.config transformation from within Visual Studio and the MSBuild command line. Also, here is a link to the sample application used to test the new version using the Publish command, with the install location set to be from a CD-ROM or DVD-ROM and configured so that the application will not check for updates. Thanks to Geoff for spotting the problem.
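
    For anyone who has only used the web flavour: the transform files themselves look the same for app.config. A minimal sketch of an App.Release.config (the key name is hypothetical), using the standard XDT syntax:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <appSettings>
            <!-- overwrite the attribute values of the entry whose key matches -->
            <add key="Environment" value="Production"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
          </appSettings>
        </configuration>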

    Read the article

  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with Unicode software, using Oracle Database 11g and Oracle Solaris 10.

    - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 server running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition.
    - The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%.
    - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat the Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%.
    - The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape

    SAP-SD two-tier performance results (in decreasing performance order), SAP ERP 6.0 Enhancement Package 4 (Unicode), benchmark versions from January 2009 to April 2012. All systems: 2x Intel Xeon E5-2690 @2.90GHz, 128 GB memory; all results on SAP ERP 2009 / 6.0 EP4 (Unicode).

        System                     OS                         Database         Users   SAPS    SAPS/Proc  Date
        Sun Fire X4270 M3          Oracle Solaris 10          Oracle 11g       8,320   45,570  22,785     10-Apr-12
        IBM Flex System x240       Windows Server 2008 R2 EE  DB2 9.7          7,960   43,520  21,760     11-Apr-12
        HP ProLiant BL460c Gen8    Windows Server 2008 R2 EE  SQL Server 2008  7,865   42,920  21,460     29-Mar-12
        IBM System x3650 M4        Windows Server 2008 R2 EE  DB2 9.7          7,855   42,880  21,440     06-Mar-12
        Cisco UCS C240 M3          Windows Server 2008 R2 DE  SQL Server 2008  7,635   41,800  20,900     06-Mar-12
        Fujitsu PRIMERGY RX300 S7  Windows Server 2008 R2 EE  SQL Server 2008  7,570   41,320  20,660     06-Mar-12

    Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

    Configuration and Results Summary

    Hardware configuration:
    - Sun Fire X4270 M3
    - 2 x 2.90 GHz Intel Xeon E5-2690 processors
    - 128 GB memory
    - Sun StorageTek 6540 with 4 * 16 * 300GB 15Krpm 4Gb FC-AL

    Software configuration:
    - Oracle Solaris 10
    - Oracle Database 11g
    - SAP enhancement package 4 for SAP ERP 6.0 (Unicode)

    Certified results (published by SAP):
    - Number of benchmark users: 8,320
    - Average dialog response time: 0.95 seconds
    - Throughput: fully processed order lines: 911,330; dialog steps/hour: 2,734,000; SAPS: 45,570
    - SAP certification: 2012014

    Benchmark Description

    The SAP Standard Application SD (Sales and Distribution) benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier worldwide ERP application providers and maintains a suite of benchmark tests to demonstrate the performance of competitive systems running the various SAP products.

    See Also
    - SAP Benchmark Website
    - Sun Fire X4270 M3 Server (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

    Disclosure Statement

    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12:
    - Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014.
    - IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016.
    - IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010.
    - Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011.
    - Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008.
    - HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012.

    SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark

    Read the article

  • Smart Help with UPK

    - by [email protected]
    A short lesson on how awesome Smart Help is. In Oracle UPK speak, there are targeted and non-targeted applications. Targeted applications are Oracle EBS, PeopleSoft, Siebel, JD Edwards, SAP, and a few others. Non-targeted applications are either custom-built or other third-party off-the-shelf applications. For most targeted applications you'll see better object recognition (during recording) and also help integration for that application. Help integration means that someone technical modifies the help link in your application to call up the UPK content that has been created. If you have seen this presented before, this is usually where the term "context sensitive help" is mentioned and the Do It mode shows off. The fact that UPK builds context-sensitive help for its targeted applications automatically is awesome enough, but there is a whole new world out there, and it's called "custom and/or third-party apps."

    For the purposes of Smart Help and this discussion, I'm talking about browser-based applications. How does UPK support these apps? It used to be that you had to have your vendor try to modify the help link to point to UPK, or, if your company had control over the application's configuration menus, you would get someone on your team to modify this for you. But as you start to use UPK for more than one, two, or three applications, the administration starts to become daunting: multiple administrators, multiple player packages, multiple call points, multiple break points, and help doesn't always work the same way for every application (picture the black-and-white infomercial with an IT person trying to configure a bunch of wires, or something funny like that).

    Introducing Smart Help! (In color of course; new IT person, probably wearing a blue shirt and smiling.) Smart Help eliminates the need to configure multiple browser help integration points, and adds an icon to the user's browser itself. You're using your browser to read this now, correct? Look up at the icons on your browser: you have the home link icon, the print icon, maybe an RSS feed icon. Smart Help is an icon that gets added to the user's browser just like the others. When you click it, it first recognizes which application you're in and then finds the UPK-created material for you and returns the best possible match, for (hold on to your seat now) both targeted and non-targeted browser-based applications. But wait, there's more: it does this automatically! You don't have to do anything! All you have to do is record content; UPK and Smart Help do the rest!

    This technology is not new. There are customers out there today that use this for as many as six applications! The real hero here is Smart Match. Smart Match is the technology used to determine which application you're in and where you are when you click on Smart Help. We'll save that for a one-on-one conversation. Like most other awesome features of UPK, it ships with the product; all you have to do is turn it on. To learn more about Smart Help, Smart Match, and targeted and non-targeted applications, contact your UPK Sales Consultant or me directly at [email protected]

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web

    If you're running Office Communications Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office. Being an ActiveX control, this obviously means that you're limited to Internet Explorer. Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.

    HTML Presence Controls for Microsoft Communications Server 14

    We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans. The demo was for a fictitious airline, Fabrikam Airlines, that wanted to show the presence of customer service and reservations agents on its website. Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application.

    We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application. Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them: http://htmlpresencecontrols.codeplex.com/

    We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available. Check out the project home page on CodePlex for some more in-depth details on how the controls are implemented.

    ASP.NET Server Control Implementation

    We're providing an ASP.NET server control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control). The control has properties that allow you to control its appearance; e.g., you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the SIP URI in the data source.

    Disclaimer

    Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product. There are some tradeoffs you have to be aware of when using these controls; here's the high level.

    Privacy

    The backend UCMA 3.0 application that subscribes to the presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization. There's currently no good way in UCMA to apply privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to. Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed.

    We're exploring options for a design pattern that would allow you to inject some privacy controls. Keep in mind, though, that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that.

    Polling the WCF REST Service

    The controls poll the backend WCF service to retrieve the presence of contacts; you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service is always communicating with a presence cache; it never communicates directly with Communications Server.

    For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription; Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache. There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses. Unfortunately, we didn't have time to explore those options.

    Community Contribution

    Take a look at the project Issue Tracker; there are a couple of things we could use some help with. Shoot me a note if you're interested in contributing to the project.

    Read the article

  • 'Publish…' Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one... which isn't good, considering I have so little of it left! I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and used the upgraded 'Publish...' dialog to successfully publish a website to my hard disk for hosting on an internal development server. However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory. The only difference: the first project was an Azure project, the second was a standard ASP.NET Web Application. If you installed the Windows Azure 1.7 SDK, you may want to read this.

    The Problem

    At first it appears that there is no problem: the publish reports success. However, you may remember that when publishing a web application, the output window will generally iterate through each of the directories as it copies the files from that directory over. Sure enough, when looking at the output directory: there are no files, no bin directory, no nothing...

    Troubleshooting

    Since one site published and the other did not, I believed that the failure might be due to a failed SQL Server 2012 installation that happened between the two publishes. I rolled back the installation; however, that did not work either. I also checked the Configuration Manager dialog and ensured that the projects were selected to actually build (just checking, even though the output said it built them). I checked the properties of the solution, the projects, and a selection of files in the project to make sure that they were selected for content... Nothing seemed to work.

    I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit. When I opened the Windows 7 'Uninstall a Program' dialog, I noticed that the Azure SDK came with 2 extra packages that just so happen to be in a Release Candidate state from Microsoft: 'Microsoft Web Deploy 3.0' and 'Microsoft Web Publish - Visual Studio 2010'. It dawned on me that the publish dialog must not be just for Azure, since it also appeared when I tried to deploy the regular web application. Therefore, it must have been an upgrade to the publish mechanism in Visual Studio. I uninstalled both of the programs and got my old publish dialog back, and was able to successfully publish the solution above as I had done before.

    After celebrating solving the problem, I tried reinstalling the Azure package to see if it would repair the publishing process. Even though it brought back the updated dialogs, it did not publish any files. But instead of uninstalling and retreating, I now knew what the cause was, and that these were packages not just for Azure. I now knew a product name to search for.

    The Solution

    Sure enough, with the correct search term in Google, 'microsoft web publish no files', and the timeline set to 1 week, I found what I needed: Microsoft Connect - Publish Web Application FAILS! (by Andrew Rits).

    I am surprised that I missed something that ended up being so simple. In the Configuration Manager, I had the solution set to build everything as 'Any CPU'; this is how I had always been building and debugging the solution. However, apparently the new web publishing package does things a little differently in its configuration for publishing: its configuration was set to 'x86' instead of 'Any CPU'. You see the difference? Sure enough, as soon as I switched the configuration to 'Release - Any CPU', the deployment built and published all of my files as I expected.

    Conclusion

    It was a small change, but apparently the new 'Publish Web Application' defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory. I spent forever trying things, but this small drop-down eluded me until I realized that the dialog was actually working; apparently, I just didn't have the correct configuration selected. I hope that this saves you the hours of frustration and hastened hair loss that it caused me. I also hope that before Microsoft brings this publishing package out of RC status, they change the behavior of that menu to default to the settings of the old publish menu on first use. Happy coding!
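
    The same mismatch is visible outside Visual Studio, which makes for a quick sanity check; a sketch, assuming a solution whose projects only build under 'Release|Any CPU':

        rem builds the projects as expected:
        msbuild MySolution.sln /p:Configuration=Release /p:Platform="Any CPU"

        rem if the solution also defines an x86 configuration with no projects
        rem ticked to build under it, this completes "successfully" with no
        rem output, mirroring the empty publish directory:
        msbuild MySolution.sln /p:Configuration=Release /p:Platform=x86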

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address it.

    Scenario

    Cognos is a reporting system used by one of my clients. A while back we developed a web service facade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running or pre-generated reports which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface to allow applications to get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service worked well for a period of time.

    The Problem

    The problem we encountered was that reports were now also required by batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:

    - Users may be affected, and the latency of on-demand reports was significantly slower.
    - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load.
    - From a cost perspective, it just isn't feasible to scale the Cognos infrastructure to handle a load that only occurs in a couple-of-hours window each night.

    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load; we could, however, make some changes to the way it accessed the reports.

    The Approach

    My idea was to introduce a throttling mechanism between the web service facade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been retrieved. In terms of technology we had some limitations, because we were not able to use WCF or IIS7, where MSMQ-activated WCF services could have helped; but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here.

    The flow of how this would work was as follows:

    1. The batch applications send a request for a report to the web service.
    2. The web service uses NServiceBus to send the message to a queue.
    3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages.
    4. The message handler gets the message and retrieves the report from Cognos.
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL.
    6. The report arrives at the batch application and is processed as normal.

    This approach looks something like the diagram in the original post (not reproduced here). The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:

    - Implement our callback contract
    - Make a call to the service, providing a callback URL
    - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution?

    This scenario is not the typical messaging-service-bus type of solution people implement with NServiceBus, but it did offer the following:

    - Simplified interaction with MSMQ
    - The ability to configure the number of workers processing the queue, so we could find a balance between load on Cognos and the applications' end-to-end processing time (see the configuration sketch below)
    - Retries and a way to manage failed messages
    - A high-availability setup

    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion

    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
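
    For the throttling knob mentioned above, the NServiceBus 2.x-era configuration lives in the generic host's app.config; a minimal sketch with hypothetical queue names (the MsmqTransportConfig section must also be declared in configSections):

        <MsmqTransportConfig
            InputQueue="cognosreports"
            ErrorQueue="error"
            NumberOfWorkerThreads="4"
            MaxRetries="5" />

    Raising NumberOfWorkerThreads increases the pull rate on the queue (and the concurrent load on Cognos); MaxRetries controls how many times a failing message is retried before it lands in the error queue.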

    Read the article

  • Start/Stop Windows Service from ASP.NET page

    - by kaushalparik27
    Last week I needed to complete a task that I am going to blog about in this entry. The task: "Create a control-panel-like webpage to control (start/stop) Windows services which are part of my solution, installed on the computer where the main application is hosted". Here are the important points to accomplish this:

    [1] You need to add a System.ServiceProcess reference to your application. This namespace holds the ServiceController class used to access a Windows service.

    [2] You need to check the status of the Windows service before you explicitly start or stop it.

    [3] By default, an IIS application runs under the ASP.NET account, which doesn't have the access rights to control Windows services; if you try to access a Windows service under it, you will get an "access denied" error. So a very important part of the solution is impersonation: you need to impersonate the relevant code with the credentials of a user who has the proper rights and permissions to access the Windows service. The alternatives are:

    - You can impersonate the whole application by adding an Identity tag in web.config, under the System.Web section:

          <identity impersonate="true" userName="" password=""/>

      The "userName" and "password" are the credentials of the user who has rights to access the Windows service. But this would not be a wise solution: you should not impersonate the whole website just to access a Windows service (which is going to be a small part of the code).

    - The second alternative is to impersonate only the part of the code where you need to access the Windows service to start or stop it. I opted for this one. To be fair, I was unfamiliar with the impersonation code itself, so I googled it and placed it in my solution in a separate class file named "Impersonate", with the required static methods. In the Impersonate class, impersonateValidUser() is the method that impersonates a block of code, and undoImpersonation() is the method that undoes the impersonation. You need to provide the domain name (which is "." if you are working on your home computer), username, and password of an appropriate user to impersonate.

    [4] It is very important to note that you need to store the access credentials (username and password) used for impersonation in a secure, encrypted format. I have used machineKey encryption to store the encrypted value inside the database.

    [5] So now, the real part is to start or stop a Windows service. You are almost done, because the ServiceController class has simple Start() and Stop() methods, and a parameterized constructor that takes the name of the service. (The original post showed the start and stop calls as screenshots; a sketch follows below.)

    Isn't that easy? ServiceController made it easy. :) I have attached a working example with this post to start/stop the "SQLBrowser" service, where you need to provide the credentials of a user with permission to access the Windows service. Hope it helps.
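
    A minimal sketch of those start/stop calls, reconstructed from the description above since the screenshots are gone; the service name matches the sample:

        using System.ServiceProcess;

        // start the service if it is stopped
        var controller = new ServiceController("SQLBrowser");
        if (controller.Status == ServiceControllerStatus.Stopped)
        {
            controller.Start();
            controller.WaitForStatus(ServiceControllerStatus.Running);
        }

        // stop the service if it is running (and it supports stopping)
        controller.Refresh();
        if (controller.Status == ServiceControllerStatus.Running && controller.CanStop)
        {
            controller.Stop();
            controller.WaitForStatus(ServiceControllerStatus.Stopped);
        }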

    Read the article

  • Auto-cancel reason not found (6, 13906)

    - by Rajesh Sharma
    There are many errors in the application which are never invoked, because of appropriate application configuration done at implementation time by the solution architects. So typically, as an application end user, you would never stumble upon such errors. But what if the application administrator inadvertently changes the configuration/setup in the development, test, QA, or production environment? This is when you, as an end user, are introduced to a brand-new error which you may not have a clue about, nor the access/privilege to rectify.

    In this post we'll focus on one such error: '6, 13906 - Auto-cancel reason not found'.

    You get this error if you have not defined a Bill (Segment) Cancel Reason code (Admin Menu, B, Bill Cancel Reason) with a System Default value of "Turn off auto-cancel".

    Consider the scenario where you are about to final-bill an account for which the bill period's cut-off date falls on or after the service agreement's (SA) end/stop date (basically, the SA is stopped with a date earlier than it was previously billed to). And for the same account, either:

    - bill segments exist that end after the SA's end date, OR
    - non-closing bill segments exist that end on the SA's end date (or closing bill segments that do not end on the SA's end date, or do not exist at all; remember, a closing/final bill segment is generated if the SA is in Stopped status).

    CC&B detects such a scenario and attempts to cancel all such violating bill segments automatically, but NOT if you are generating the bill online. If online, the system assumes that you know what you are doing, and prompts you with error '2, 13716 - Bill segments that violate the SA (%1) End Date (%2) exist' so you can take the necessary action. If in batch, the system automatically cancels these kinds of bill segments.

    Since this happens in the background, you have to define within the application which System Default bill (segment) cancellation reason code, identified as "Turn off auto-cancel", should be used by the process when it attempts to cancel any such violating bill segments. (You already know that you cannot cancel a bill segment without giving a reason for cancellation.)

    So what exactly happens during batch billing?

    The bill segment generation routine first determines the billing eligibility of the service agreement being billed. One of the billing eligibility criteria is to check the SA's previous bill segments which have end dates greater than the current cut-off/end date. Technically, the routine retrieves a count of such violating bill segments:

        SELECT COUNT(*)
          FROM CI_BSEG
         WHERE SA_ID = :SA-ID
           AND BSEG_STAT_FLG = '50'        -- Frozen
           AND END_DT IS NOT NULL
           AND (END_DT > '03-JUN-2010'     -- bill segment ends after SA's end date, OR
                OR (END_DT = '03-JUN-2010' AND CLOSING_BSEG_SW = 'N'))  -- non-closing segment ending on SA's end date

    If the count is greater than zero, the bill segment generation routine executes another program to auto-cancel such bill segments. The auto-cancel program retrieves the Bill Cancel Reason code identified as "Turn off auto-cancel", and the retrieved cancel reason code is then placed on the bill segments being cancelled automatically.

    If, during this process, the routine fails to find a bill cancel reason code having the System Default "Turn off auto-cancel" because none has been configured, you get bill exception '6, 13906 - Auto-cancel reason not found'.

    Also note that duplicate or multiple System Default codes identified as "Turn off auto-cancel" are not allowed; CC&B would complain with error 2, 54201. The same duplicate validation/check is performed within the auto-cancel routine, in case, say, for test purposes you executed a DML statement updating CI_BILL_CAN_RSN.BSCAN_SYS_DFLT_FLG with the value 'T'.

    Read the article

  • Open Your Windows - 4/Maio/10

    - by Claudia Costa
    This FREE technical briefing is designed to show ISVs/SIs how to leverage Oracle 11g technology, especially in the small-to-medium business. The briefing focuses on Oracle's 11g platform on Windows & Linux and gives a very comprehensive technical competitive overview of the products offered by Microsoft. The technical part covers integration and migration aspects of various Microsoft products such as SQL Server, .NET, and Active Directory. Register today!

    With Oracle 11g, Oracle introduced various products (Application Express, Oracle Express Edition, ADF, BPEL) and licenses (Oracle Database Standard Edition One, Application Server Java Edition) specifically targeting the small-to-medium business market, and to show that Oracle Database and Application Server are as easy to use and cost less than Microsoft products in terms of purchase price and ongoing support & maintenance, and even much, much less when considering the Linux platform.

    For those ISVs that have already adopted the Microsoft .NET framework and use SQL Server as their database layer, we will demonstrate that Oracle 11g Database is as easy as SQL Server to install, configure, and manage. In addition, their .NET application development platform does not require dramatic changes to enable it to run on the Oracle database. Besides the standard functionality, Oracle has enhanced some of the advanced features, such as interMedia, security, REF CURSORs, etc., tightly integrated with the .NET framework, so that .NET developers can take full advantage of Oracle technology without worrying about or programming the complex components.

    Objectives

    - Understand Oracle's strategy and commitment on Windows & Linux
    - Learn how to migrate from SQL Server to Oracle on Windows AND Linux
    - Understand that Oracle 11g is easy to manage and to install on Windows & Linux
    - Learn how to integrate Windows products with the Oracle 11g platform
    - Learn how Oracle products interoperate & integrate with Microsoft .NET
    - Learn how an Oracle database on Windows can easily be ported to a lower-cost Linux database platform and interoperate with a .NET application

    Prerequisites

    General operating system expertise, including MS Windows and Linux.

    Agenda

    - Welcome and intro
    - Oracle at a glance
    - Strategy: small-to-medium business, Microsoft, and Linux
    - Oracle 11g architecture on Linux & Windows
    - Managing Oracle 11g on Linux & Windows
    - Application development
    - Migration
    - Value propositions for ISVs & wrap-up

    For more information/registration, contact: [email protected].

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on Windows

    - by John Abraham
    As a follow-up to our original certification announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows operating systems:

Release 12.1 (12.1.1 and higher)
· Microsoft Windows Server (32-bit) (2003, 2008)
· Microsoft Windows x64 (64-bit) (2003 [1], 2008 [1], 2008 R2 [2])

Release 12.0 (12.0.4 and higher)
· Microsoft Windows Server (32-bit) (2003)
· Microsoft Windows x64 (64-bit) (2003, 2008) [1]

Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher)
· Microsoft Windows Server (32-bit) (2003, 2008 [1])
· Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2) [1]

Notes
[1]: This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform.
[2]: This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1.

Pending Certification
E-Business Suite 12.0 with 11.2.0.3 Split Tier Certification on Microsoft Windows x64 (64-bit) (2008 R2) is in progress and will be announced separately.

This announcement for Oracle E-Business Suite 11i and R12 includes:
· Real Application Clusters (RAC)
· Oracle Database Vault
· Transparent Data Encryption (Column Encryption)
· TDE Tablespace Encryption
· Advanced Security Option (ASO)/Advanced Networking Option (ANO)
· Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
· Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12

References
· MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
· MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
· MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
· MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
· MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
· MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
· MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
· MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
· MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
· MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
· MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
· MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
· MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
· MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
· MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
· MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
· MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
· MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
· MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
· MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2

Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

Related Articles
· Database 11.2.0.2 Certified with EBS R12 on IBM: Linux on System z
· EBS R12 Certified with Database 11gR2 on SLES 11
· 11gR2 11.2.0.3 Database Certified with E-Business Suite
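Before working through the interoperability notes above, it can help to confirm the exact version of the database tier. This is a generic sanity check against Oracle's standard V$INSTANCE dynamic performance view, not anything specific to this announcement:

  -- Confirm the database tier reports 11.2.0.3 before applying the interop notes
  SELECT instance_name, version
    FROM v$instance;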

    Read the article

  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day who asked how to deal with Internet connection settings for running HTTP requests. In this case it's an old FoxPro app using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of the platform or tools used to make HTTP connections, you can usually configure custom connection and proxy settings manually in your application. However, this is a repetitive process for each application and requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog you see when you pop up Internet Explorer's Options dialog.

This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

Loading the Settings Dialog Programmatically

The settings dialog is a Control Panel applet named inetcpl.cpl, and you can use any shell execution mechanism (the Run dialog, the ShellExecute API, Process.Start() in .NET, etc.) to invoke it. Changes made there are immediately reflected in any application that uses the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled:

  Process.Start("inetcpl.cpl", ",4");

In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

  lcCmd = "inetcpl.cpl ,4"
  RUN &lcCmd

Using the Default Connection/Proxy Settings

When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

  hInetConnection=;
     InternetOpen(THIS.cUserAgent,0,;
        THIS.chttpproxyname,THIS.chttpproxybypass,0)

The second parameter of 0 specifies that the default system proxy settings should be used, i.e. the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 (direct: no proxies, ignore system settings) and 3 (explicit proxy specification). In most situations a connection mode setting of 0 should work.

In .NET, HTTP connections are direct connections by default, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

  <configuration>
    <system.net>
      <defaultProxy>
        <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
      </defaultProxy>
    </system.net>
  </configuration>

You can do the same sort of thing in code, specifying the proxy explicitly with System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with:

  StoreService.Proxy = WebProxy.GetDefaultProxy();

All of this is pretty easy to deal with and, in my opinion, a way better choice than managing connection settings in your own application.
Plus, if you use the default settings, it's highly likely that the connection settings are already properly configured, making further configuration rare.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in Windows, HTTP, .NET, FoxPro
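To tie the pieces together, here is a small self-contained C# sketch along the lines of the snippets above. The URL and class name are illustrative placeholders, and WebRequest.DefaultWebProxy is used here as the non-deprecated equivalent of WebProxy.GetDefaultProxy():

  using System;
  using System.Diagnostics;
  using System.IO;
  using System.Net;

  class DefaultProxyDemo
  {
      static void Main()
      {
          // Optionally pop up the Internet Settings applet on the Connections tab
          // (the ",4" argument selects the tab, as shown above).
          Process.Start("inetcpl.cpl", ",4");

          // Issue a request that inherits the system default proxy settings.
          // Setting Proxy explicitly mirrors the <defaultProxy> config approach.
          var request = (HttpWebRequest)WebRequest.Create("http://example.com/"); // placeholder URL
          request.Proxy = WebRequest.DefaultWebProxy;

          using (var response = (HttpWebResponse)request.GetResponse())
          using (var reader = new StreamReader(response.GetResponseStream()))
          {
              Console.WriteLine("Received {0} characters", reader.ReadToEnd().Length);
          }
      }
  }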

    Read the article

< Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >