Search Results


  • iftop Shows Lots of Mysterious Connections - Not Showing in netstat

    - by HOLOGRAPHICpizza
    I've just stopped pretty much all services except sshd on my server (Ubuntu Server 10.04), and when I run iftop I get output that looks like this:

        12.5Kb            25.0Kb            37.5Kb            50.0Kb            62.5Kb
        flash.gateway.2wire.net:ssh   <=> 172.16.1.151:60405      1.75Kb  1.54Kb  2.22Kb
        flash.gateway.2wire.net:21095 <=> 69.127.29.20:32582        536b    107b    27b
        flash.gateway.2wire.net:21095 <=> 190.164.122.134:13557       0b    105b    26b
        flash.gateway.2wire.net:21095 <=> 79.165.212.195:45138        0b    105b    26b
        flash.gateway.2wire.net:21095 <=> 151.42.15.151:9031          0b     72b    18b
        flash.gateway.2wire.net:21095 <=> 88.185.120.179:51413        0b      0b    49b
        flash.gateway.2wire.net:21095 <=> 178.120.152.97:25924        0b      0b    29b
        flash.gateway.2wire.net:21095 <=> 109.110.217.77:27868        0b      0b    26b
        flash.gateway.2wire.net:21095 <=> 84.13.201.90:16509          0b      0b    26b
        flash.gateway.2wire.net:21095 <=> 171.7.125.224:11777         0b      0b    26b
        flash.gateway.2wire.net:21095 <=> 115.177.164.170:21360       0b      0b    26b
        flash.gateway.2wire.net:21095 <=> 50.88.126.18:25540          0b      0b    25b
        flash.gateway.2wire.net:21095 <=> 223.206.230.163:13431       0b      0b    25b
        flash.gateway.2wire.net:21095 <=> 78.144.187.26:24515         0b      0b    25b
        flash.gateway.2wire.net:21095 <=> 83.20.61.211:27572          0b      0b    25b
        flash.gateway.2wire.net:21095 <=> 82.134.151.42:18448         0b      0b    18b
        flash.gateway.2wire.net:21095 <=> 126.117.95.247:25316        0b      0b    18b
        flash.gateway.2wire.net:21095 <=> 116.202.65.230:9044         0b      0b    18b
        flash.gateway.2wire.net:21095 <=> 88.120.63.205:51413         0b      0b    17b
        TX:     cumm:  61.6KB   peak:  8.00Kb   rates:  1.59Kb  1.38Kb  2.04Kb
        RX:            18.4KB          1.64Kb            696b    549b    640b
        TOTAL:         80.0KB          9.64Kb           2.27Kb  1.92Kb  2.66Kb

    This is the first part (not the unix socket part) of the output of netstat -a:

        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address            Foreign Address      State
        tcp        0      0 *:ssh                    *:*                  LISTEN
        tcp        0      0 *:55677                  *:*                  LISTEN
        tcp        0      0 flash.gateway.2wire:ssh  172.16.1.151:60405   ESTABLISHED
        tcp        0     48 flash.gateway.2wire:ssh  172.16.1.151:60661   ESTABLISHED
        tcp6       0      0 [::]:ssh                 [::]:*               LISTEN
        udp        0      0 *:37790                  *:*

    What could all those strange connections on port 21095 be? And why would they not show up in netstat? Any advice would be greatly appreciated.
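
    A hedged diagnostic sketch (not part of the original post): the commands below can show whether any local process actually owns port 21095, or whether the traffic is unsolicited inbound packets, which iftop counts as flows even though no local socket exists. They assume a stock Ubuntu 10.04 install with root access.

        # Does any local process own port 21095?
        sudo lsof -nPi :21095
        # All sockets, TCP and UDP, with owning PIDs
        sudo netstat -anp | grep 21095
        # Capture a few packets on that port to see what the traffic actually is
        sudo tcpdump -nn -c 20 port 21095

    If lsof and netstat show nothing, the flows may simply be stray peer-to-peer traffic aimed at a previous holder of the IP address; iftop displays it, but netstat, which lists sockets rather than packets, never will.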

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using Newrelic to measure our Python/Django application performance. Newrelic is reporting that across our system "Memcached" is taking an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests) I can see that some Memcached gets take up to 30ms; I can't find a single Memcached get that returns in less than 10ms.

    More details on the system architecture: currently we have four application servers, each of which has a memcached member. All four memcached members participate in a memcache cluster. We're running on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses come back in ~0.5ms.

    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/ 0.4GB    33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    Here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
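
    A hedged sanity check that takes Newrelic and the client library out of the measurement (the key and host are placeholders; assumes netcat is installed on the application servers):

        # Time one raw get over the memcached text protocol
        time printf 'get some_key\r\nquit\r\n' | nc cache1 11211

    If the raw round trip comes back in about a millisecond, the ~10ms average is being added above the wire - client library, serialization, or instrumentation - rather than by memcached itself.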

  • Turn off email notification from abrt (Automatic Bug Reporting Tool)

    - by Banjer
    I'm configuring CentOS 6.2 and have seen a few "[abrt] full crash report" emails. I understand that abrt is useful for creating crash dumps and whatnot, so I don't want to disable the service; I just would like to stop getting the crash report emails. I probably have to add something to the config file in /etc/abrt/abrt.conf. I can't seem to find anything in my searches. Any idea? Thanks.

    Edit: Here is my abrt.conf, which is rather simple:

        [root@myhost~]# cat /etc/abrt/abrt.conf
        # Enable this if you want abrtd to auto-unpack crashdump tarballs which appear
        # in this directory (for example, uploaded via ftp, scp etc).
        # Note: you must ensure that whatever directory you specify here exists
        # and is writable for abrtd. abrtd will not create it automatically.
        #
        #WatchCrashdumpArchiveDir = /var/spool/abrt-upload

        # Max size for crash storage [MiB] or 0 for unlimited
        #
        MaxCrashReportsSize = 1000

        # Specify where you want to store coredumps and all files which are needed for
        # reporting. (default:/var/spool/abrt)
        #
        #DumpLocation = /var/spool/abrt

    And a listing of /etc/abrt:

        [root@myhost~]# ls -la /etc/abrt
        total 32
        drwxr-xr-x.  3 root root  4096 Apr 13 06:14 .
        drwxr-xr-x. 97 root root 12288 Apr 13 03:50 ..
        -rw-r--r--.  1 root root   527 Dec 13 22:50 abrt-action-save-package-data.conf
        -rw-r--r--.  1 root root   572 Dec 13 22:50 abrt.conf
        -rw-r--r--.  1 root root   175 Dec 13 22:50 gpg_keys
        drwxr-xr-x.  2 root root  4096 Apr 13 06:13 plugins
        [root@myhost~]# ls -la /etc/abrt/plugins/
        total 12
        drwxr-xr-x. 2 root root 4096 Apr 13 06:13 .
        drwxr-xr-x. 3 root root 4096 Apr 13 06:14 ..
        -rw-r--r--. 1 root root  278 Dec 13 22:50 CCpp.conf

    Actually, all of those conf files above are only a few lines and do not mention anything about mail, email, or notifications.
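
    A hedged sketch of where to look (the paths are an assumption: on the abrt 2.x shipped with CentOS 6, the mail step is typically wired up through libreport event files rather than abrt.conf itself):

        # Find which config actually invokes the mail reporter
        grep -rl mailx /etc/abrt /etc/libreport 2>/dev/null
        # If /etc/libreport/events.d/mailx_event.conf turns up, comment out its
        # reporter-mailx line, then restart the daemon so the change takes effect
        service abrtd restart

    This keeps abrtd collecting crash dumps while only the email notification step is disabled.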

  • File/printer sharing issues on network with multiple OSes

    - by DanZ
    My workplace consists of computers running a variety of different operating systems, and I have been running into problems getting some of them to connect to a shared drive and printer over the network. Here is a brief description of the computers involved and the issues I have encountered:

    1: Dell desktop, Windows Vista Business -- This is the computer I want the others to connect to. It has a USB printer and eSATA hard drive enclosure that I have set up for sharing, with different accounts for the various users.

    2: Fujitsu laptop, Windows XP Tablet edition -- No problems. Can connect to both the shared printer and hard drive.

    3: Lenovo laptop, Windows Vista Business 64 bit -- No problems. Can connect to both the shared printer and drive.

    4: Apple MacBook, OS 10.4 -- Can connect to the shared drive, but not to the shared printer. I am aware that the printer issue is due to a known incompatibility between Vista and OS 10.4 and earlier with regards to Samba. It is not a big problem, however, as this computer can access a network printer.

    5: Sony laptop, Windows Vista Home Premium -- Can connect to the shared printer, but not the shared drive. It can see computer 1 and its shared drive on the network, and appears to successfully log in to user accounts. However, if you try to access the shared drive, it says you do not have permission. I have tried both standard and administrator accounts, and none can access the drive from this computer.

    6: MacBook Pro, OS 10.5 (there are two of these) -- Can connect to the shared printer, but not the shared drive. They can't see computer 1 on the network. For that matter, they also can't see each other or the older Mac, but can see and access shared folders on the XP machine (computer 2) and can see other PCs in the building. I was able to add the shared printer manually by typing in its network location, but was unable to manually add the shared drive in the same way.

    So, what I am looking for is suggestions on how to get computers 5 and 6 to connect to the shared drive. Since they can already connect to the shared printer (which is on the same computer as the shared drive), it seems reasonable that they should be able to access the drive as well.
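
    For computers 6, a hedged workaround while browsing is broken: mount the share by address from Terminal, the same way the printer was added manually (the IP, share name, and username below are placeholders for computer 1's actual values):

        # OS X 10.5: mount the Vista share directly by IP
        mkdir -p ~/shared
        mount_smbfs //someuser@192.168.1.10/SharedDrive ~/shared

    If a direct mount works, the problem is name browsing (NetBIOS/Bonjour discovery) rather than the share's permissions; if it fails with an authentication error, it is closer to computer 5's symptom.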

  • Cisco IPSec, nat, and port forwarding don't play well together

    - by Alan
    I have two Cisco ADSL modems configured conventionally to NAT the inside traffic to the ISP. That works. I have two port forwards on one of them for SMTP and IMAP from the outside to the inside; this provides external access to the mail server. This works. The modem doing the port forwarding also terminates PPTP VPN traffic. There are two DNS servers: one inside the office, which resolves mail to the local address, and one outside the office, which resolves mail for the rest of the world to the external interface. That all works.

    I recently added an IPSec VPN between the two modems, and that works for everything EXCEPT connections over the IPSec VPN to the mail server on port 25 or 143 from workstations on the remote LAN. It would seem that the modem with the port forwards is confusing traffic from the mail server destined for a machine on the other side of the IPSec VPN with traffic that should go back to a port forward connection. PPTP VPN traffic to the mail server is fine. Is this a scenario anybody is familiar with, and are there any suggestions on how to work around it? Many thanks, Alan.

    But wait, there is more... These are the strategic parts of the NAT config. A route map is used to exclude the LANs that are reachable via IPSec tunnels from being NATed:

        int ethernet0
         ip nat inside
        int dialer1
         ip nat outside
        ip nat inside source route-map nonat interface Dialer1 overload
        route-map nonat permit 10
         match ip address 105
        access-list 105 remark *** Traffic to NAT
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.9.0 0.0.0.255
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.48.0 0.0.0.255
        access-list 105 permit ip 192.168.1.0 0.0.0.255 any
        ip nat inside source static tcp 192.168.1.241 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.241 143 interface Dialer1 143

    At the risk of answering my own question, I resolved this outside the Cisco realm. I bound a secondary IP address to the mail server, 192.168.1.244, and changed the port forwards to use it while leaving all the local and IPSec traffic on 192.168.1.241, and the problem was solved. New port forwards:

        ip nat inside source static tcp 192.168.1.244 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.244 143 interface Dialer1 143

    Obviously this is a messy solution, and being able to fix this in the Cisco would be preferable.

  • How to bind an old user's SID to a new user to retain NTFS file ownership and permissions after a fresh reinstall of Windows?

    - by LiuYan
    Each time we reinstall Windows, it creates a new SID for the user even if the username is the same as before:

        // example (not real SID format, just to show the problem)
        user      SID
        --------------------
        liuyan    S-old-501    // old SID before reinstall
        liuyan    S-new-501    // new SID after reinstall

    The annoying problem after a reinstall is that NTFS file ownership and permissions on the hard drive are still associated with the old user's SID. I want to keep the ownership and permission settings of the NTFS files, and then let the new user take the old user's SID, so that I can access files as before without permission problems.

    The cacls command line tool can't be used in this situation, because the files do not belong to the new user, so it fails with an Access is denied error, and it can't change ownership. Even if I change the ownership via the SubInACL tool, cacls can't remove the old user's permissions, because the old user does not exist on the new installation, and it can't copy the old user's permissions to the new user.

    So, can we simply bind the old user's SID to the new user on a freshly installed Windows?

    Sample test batch:

        @echo off
        REM Additional tools used in this script
        REM   PsGetSid http://technet.microsoft.com/en-us/sysinternals/bb897417
        REM   SubInACL http://www.microsoft.com/en-us/download/details.aspx?id=23510
        REM
        REM make sure these tools are added into PATH

        set account=MyUserAccount
        set password=long-password
        set dir=test
        set file=test.txt

        echo Creating user [%account%] with password [%password%]...
        pause
        net user %account% %password% /add
        psgetsid %account%
        echo Done !

        echo Making directory [%dir%] ...
        pause
        mkdir %dir%
        dir %dir%* /q
        echo Done !

        echo Changing permissions of directory [%dir%]: only [%account%] and [%UserDomain%\%UserName%] has full access permission...
        pause
        cacls %dir% /G %account%:F
        cacls %dir% /E /G %UserDomain%\%UserName%:F
        dir %dir%* /q
        cacls %dir%
        echo Done !

        echo Changing ownership of directory [%dir%] to [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        echo Done !

        echo RunAs [%account%] user to write a file [%file%] in directory [%dir%]...
        pause
        runas /noprofile /env /user:%account% "cmd /k echo some text %DATE% %TIME% > %dir%\%file%"
        dir %dir% /q
        echo Done !

        echo Deleting and Recreating user [%account%] (reinstall simulation) ...
        pause
        net user %account% /delete
        net user %account% %password% /add
        psgetsid %account%
        echo Done ! %account% is recreated, it has a new SID now

        echo Now, use this "same" account [%account%] to access [%dir%], it will fail with "Access is denied"
        pause
        runas /noprofile /env /user:%account% "cmd /k cacls %dir%"
        REM runas /noprofile /env /user:%account% "cmd /k type %dir%\%file%"
        echo Done !

        echo Changing ownership of directory [%dir%] to NEW [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        cacls %dir%
        echo Done ! As you can see, "Account Domain not found" is actually the OLD [%account%] user

        echo Deleting user [%account%] ...
        pause
        net user %account% /delete
        echo Done !

        echo Deleting directory [%dir%]...
        pause
        rmdir %dir% /s /q
        echo Done !

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows, and Mac OS X -- but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes, and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM over IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that.

        root [/var/lib/mysql]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  2.1G  7.3G  23% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1              99M   77M   18M  82% /boot
        /dev/sda8             884G  463G  376G  56% /home
        /dev/sda3             9.9G  8.0G  1.5G  85% /usr
        /dev/sda5             9.9G  9.1G  308M  97% /var
        /usr/tmpDSK           2.0G   38M  1.8G   3% /tmp

    As you can see, /var and /usr are quite close to being full, and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is add 6-10 GB each to /usr and /var, presumably taking the space from /home. As I think about how the disk is arranged, the best thought I've come up with is to reduce /home by 16 GB, say, move /var to the spot freed up, and then allocate /var's old space to /usr. However, that would put /var at the far end of the disk, which seems less than ideal, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would take a very arduous (and perhaps risky) process of moving all of the data in /usr around. I seem to recall having such a process fail for me on a computer in the past. The other option might be to merge / and /usr since / is underutilized, though I'm not sure if that's a good idea. Do you have any suggestions both on the best reallocation plan and the commands to use to accomplish it?

    UPDATE: I should add -- here's the partition table. There's one unused partition, which, if memory serves, was the original tmp location before I created a tmp image:

        Name    Flags   Part Type   FS Type                 [Label]    Size (MB)
        ------------------------------------------------------------------------------
                        Unusable                                            1.05*
        sda1    Boot    Primary     Linux ext2                            106.96*
        sda2            Primary     Linux ext3                          10737.42*
        sda3            Primary     Linux ext3                          10737.42*
        sda5    NC      Logical     Linux ext3                          10738.47*
        sda6    NC      Logical     Linux swap / Solaris                 2148.54*
        sda7    NC      Logical     Linux ext3                           1074.80*
        sda8    NC      Logical     Linux ext3                         964098.53*
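
    A hedged sketch of the filesystem half of the shrink, run from the rescue environment (sizes are illustrative, ext3 is assumed, and a verified backup is a prerequisite; the partition-table edits in fdisk/parted are the genuinely risky part and depend on the final layout chosen):

        # /home must be unmounted; check it, then shrink the filesystem
        # BEFORE shrinking the partition that holds it
        umount /home
        e2fsck -f /dev/sda8
        resize2fs /dev/sda8 860G
        # ...shrink sda8 in fdisk/parted (same start sector, smaller end),
        # enlarge the target partitions, then grow each filesystem to fill:
        resize2fs /dev/sda5
        resize2fs /dev/sda3

    The ordering is the whole trick: shrink the filesystem before the partition, and grow the partition before the filesystem; getting it backwards truncates live data.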

  • Planning trunk capacity for multiple GbE switches

    - by wuckachucka
    Without measuring throughput (it's at the top of the list; this is just theoretical), I want to know the most standard method for trunking VLANs on multiple Gigabit (GbE) switches to a core Layer 3 GbE switch. Say you have three VLANs:

    VLAN10 (10.0.0.0/24) Servers: your typical Windows DC/file server, Exchange, and an Accounting/SQL server.
    VLAN20 (10.0.1.0/24) Sales: needs access to everything on VLAN10; doesn't need access to VLAN30, and vice-versa.
    VLAN30 Support: needs access to everything on VLAN10; doesn't need access to VLAN20, and vice-versa.

    Here's how I think this should work in my head:

    Switch #1: Ports 2-20 are assigned to VLAN20; all the Sales workstations and printers are connected here. Optional 10GbE combo port #1 is trunked to the L3 switch's 10GbE combo port #1.
    Switch #2: Ports 2-20 are assigned to VLAN30; all the Support workstations and printers are connected here. Optional 10GbE combo port #1 is trunked to the L3 switch's 10GbE combo port #2.
    Core L3 switch: Ports 2-10 are assigned to VLAN10; all three servers are connected here.

    A standard 10/100 x 24 switch usually comes with one or two 1GbE uplink ports; carrying that logic over to a 10/100/1000 x 24 switch, the "optional" 10GbE combo ports that most higher-end switches offer shouldn't really be optional. Keep in mind I haven't tested anything yet; I'm primarily moving in this direction for growth (I don't want to buy 10/100 switches and have to replace them within a couple of years) and security (being able to control access between VLANs with L3 routing/packet-filtering ACLs).

    Does this sound right? Do I really need the 10GbE ports? It seems very non-standard and expensive, but it "feels" right when you think about 40 or 50 workstations trunking up to the L3 switch over standard 1GbE ports. If, say, 20 workstations want to download a 10 GB image from the servers concurrently, wouldn't the trunk be the bottleneck? At least if the trunk were 10GbE, you'd have 10 x 1GbE nodes able to reach their theoretical max.

    What about switch stacking? Some of the D-Links I've been looking at have HDMI interfaces for stacking. As far as I know, stacking two switches creates one logical switch, but is this just for management I/O, or do the switches use the (assuming it's HDMI 1.3) 10.2 Gbps for carrying data back and forth?

  • Email postfix marked as spam by google

    - by Rodrigo Ferrari
    Hello friends, I searched about this question and found a few answers, but I have no idea how to fix it; I'm really new to all this! I configured Postfix and did everything the install how-to said. It sends the email, but it gets marked as spam! The header is this one:

        Delivered-To: [email protected]
        Received: by 10.223.86.203 with SMTP id t11cs837410fal;
                Wed, 12 Jan 2011 04:02:21 -0800 (PST)
        X-pstn-nxpr: disp=neutral, [email protected]
        X-pstn-nxp: bodyHash=9c6d0c64fa3a4d663c9968e9545c47d77ae0242e, headerHash=1ab8726bd17a23218309165bd20fe6e0911627cd, keyName=4, rcptHash=178929be6ed8451d98a4df01a463784e6c59b3b4, sourceip=174.121.4.154, version=1
        Received: by 10.100.190.13 with SMTP id n13mr537609anf.76.1294833740396;
                Wed, 12 Jan 2011 04:02:20 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from psmtp.com ([74.125.245.168]) by mx.google.com with SMTP id w2si1297960anw.132.2011.01.12.04.02.19;
                Wed, 12 Jan 2011 04:02:20 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 174.121.4.154 as permitted sender) client-ip=174.121.4.154;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.121.4.154 as permitted sender) [email protected]
        Received: from source ([174.121.4.154]) by na3sys010amx168.postini.com ([74.125.244.10]) with SMTP;
                Wed, 12 Jan 2011 12:02:19 GMT
        Received: from localhost (server [127.0.0.1])
                (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
                (No client certificate requested)
                by brasilyacht.com.br (Postfix) with ESMTP id 87C121290142;
                Wed, 12 Jan 2011 09:50:29 -0200 (BRST)
        From: YachtBrasil <[email protected]>
        Reply-To: Vendas <[email protected]>
        Cc: YachtBrasil <[email protected]>
        To: [email protected]
        Subject: teste
        Date: Wed, 12 Jan 2011 09:50:29 -0200
        Content-Type: text/html; charset=UTF-8
        Content-Transfer-Encoding: quoted-printable
        Content-Disposition: inline
        MIME-Version: 1.0
        Message-Id: <[email protected]>
        X-pstn-2strike: clear
        X-pstn-neptune: 0/0/0.00/0
        X-pstn-levels: (S: 1.96218/99.81787 CV:99.9000 FC:95.5390 LC:95.5390 R:95.9108 P:95.9108 M:97.0282 C:98.6951 )
        X-pstn-settings: 3 (1.0000:1.0000) s cv gt3 gt2 gt1 r p m c
        X-pstn-addresses: from <[email protected]> [db-null]

    I'm out of ideas on how to fix this. I think it's a DNS issue, but I do have the SPF record set up inside my tinydns. Is there anything I can check to find out why this email is marked as spam? Any help will be appreciated! Thanks.
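
    A hedged sketch of two quick checks from a shell (the domain is taken from the headers above; message.eml is a placeholder for a saved copy of the message, and spamassassin only helps if it happens to be installed):

        # What the rest of the world resolves for the sending domain's TXT/SPF record
        dig +short TXT brasilyacht.com.br
        # Score the message locally and list which rules fire
        spamassassin -t < message.eml

    Since the headers already show spf=pass, the usual remaining suspects are the HTML-only body with no plain-text part, the absence of a DKIM signature, and the sending IP's reputation - none of which is visible from Postfix's side.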

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.

        Source Error:
        Line 105:      Set explicit="true" to force declaration of all variables.
        Line 106: -->
        Line 107: <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108:   <assemblies>
        Line 109:     <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to Manage the Basic Settings on the Site and click the "Test Settings" button, I see that I have a problem under "authorization":

        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    1) Do I need to grant rights to IIS to the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.

    2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS6? I am used to IIS5, or whatever came with XP Pro.

    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant. Thanks.

  • Second network card configuration not working

    - by Sebas
    I have 4 servers running CentOS 5. All of them have two Ethernet network cards. I have configured 192.168.1.x IP addresses on their eth0 cards. They are all connected to the same switch using their eth0 cards and they are all working. I have configured 10.72.11.x IP addresses on their eth1 cards. They are all connected to the same switch - a different one from the switch used with the eth0 cards - using their eth1 cards, and they are NOT all working. Their configuration file looks like this:

        DEVICE=eth1
        BOOTPROTO=static
        IPADDR=10.72.11.236
        BROADCAST=10.72.11.191
        NETMASK=255.255.255.192
        NETWORK=10.72.11.128
        HWADDR=84:2B:2B:55:4B:98
        IPV6INIT=yes
        IPV6_AUTOCONF=yes
        ONBOOT=yes

    The interface is starting and configured as I need:

        [root@sql1 network-scripts]# ifconfig
        eth0      Link encap:Ethernet  HWaddr 84:2B:2B:55:4B:97
                  inet addr:192.168.1.105  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::862b:2bff:fe55:4b97/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2981 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:319 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:386809 (377.7 KiB)  TX bytes:66134 (64.5 KiB)
                  Interrupt:36 Memory:da000000-da012800

        eth1      Link encap:Ethernet  HWaddr 84:2B:2B:55:4B:98
                  inet addr:10.72.11.236  Bcast:10.72.11.191  Mask:255.255.255.192
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                  Interrupt:48 Memory:dc000000-dc012800

    I also added a route-eth1 file that looks like:

        10.0.0.0/8 via 10.72.11.254

    Routing looks fine to me:

        [root@sql1 network-scripts]# netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
        10.72.11.192    0.0.0.0         255.255.255.192  U         0 0          0 eth1
        192.168.1.0     0.0.0.0         255.255.255.0    U         0 0          0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U         0 0          0 eth1
        10.0.0.0        10.72.11.254    255.0.0.0        UG        0 0          0 eth1
        0.0.0.0         192.168.1.1     0.0.0.0          UG        0 0          0 eth0

    But I cannot ping one server from the other:

        [root@sql1 network-scripts]# ping 10.72.11.235
        PING 10.72.11.235 (10.72.11.235) 56(84) bytes of data.
        From 10.72.11.236 icmp_seq=1 Destination Host Unreachable
        From 10.72.11.236 icmp_seq=2 Destination Host Unreachable
        From 10.72.11.236 icmp_seq=3 Destination Host Unreachable
        From 10.72.11.236 icmp_seq=4 Destination Host Unreachable
        From 10.72.11.236 icmp_seq=5 Destination Host Unreachable
        From 10.72.11.236 icmp_seq=6 Destination Host Unreachable
        ^C
        --- 10.72.11.235 ping statistics ---
        7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6033ms, pipe 3

    What am I doing wrong?
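
    A hedged diagnostic sketch (assumes iputils' arping is available and both hosts hang off the same eth1 switch): "Destination Host Unreachable" reported from your own address usually means ARP never resolved, so checking layer 2 directly narrows the problem down before touching the routing.

        # Has eth1 learned a MAC for the peer?
        ip neigh show dev eth1
        # Probe the peer with raw ARP requests, bypassing IP routing entirely
        arping -I eth1 10.72.11.235

    If arping gets no replies either, the problem is below IP: cabling, switch port/VLAN membership, or the addressing on one side not matching the subnet the other side believes it is on.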

  • Enterprise IPv6 Migration - End of proxy.pac? Start of Point-to-Point? +10K users

    - by Yohann
    Let's start with a diagram:

        [diagram: a "typical" IPv4 company network]

    We can see a "typical" IPv4 company network with:

    - Internet access through a proxy
    - "Other companies" access through a dedicated proxy
    - Direct access to local resources

    All computers have a proxy.pac file that indicates which proxy to use or whether to connect directly. Computers have access to just a local DNS (no name resolution for google.com, for example). By the way... the company does not respect RFC1918 internally and uses public addresses! (Historical reason.) The explicit use of the Internet proxy is what makes this not a problem.

    What if we were to migrate to IPv6?

    Step 1: IPv6 Internet access. Internet access in IPv6 is easy: just connect the proxy to the Internet over both IPv4 and IPv6. There is nothing to do in the internal network.

    Step 2: IPv6 AND IPv4 in the internal network. And why not a full IPv6 network directly? Because there are always the old servers that are not IPv6-compatible...

    Option 1: Same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!

    Option 2: New architecture with transparent proxies, without a proxy.pac, and with recursive DNS. Oh yes! In this new architecture, we have:

    - The explicit Internet proxy becomes a transparent Internet proxy
    - The local DNS becomes a normal recursive DNS, plus authoritative for local domains
    - No proxy.pac
    - The explicit company proxy becomes a transparent company proxy
    - Routing: internal routers redirect the IP of appx.ext.example.com to the company proxy; the default gateway is the transparent Internet proxy

    Questions: What do you think of this IPv6 architecture? This architecture will reveal the IP addresses of our internal network, but it is protected by firewalls - is this a real problem? Should we keep the explicit use of a proxy? How would you approach this migration scenario? And you, how do you do it in your company? Thanks! Feel free to edit my post to make it better.

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd, it is being disabled automatically by OpsCenter.

    From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur if the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly.

    I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1, and 4.0.2), but I thought that was because of some configuration error on my part; the problem appeared spontaneously at some point there as well. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with. We did not notice any problems with the applications that depend on it.
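
    A hedged way to take OpsCenter out of the loop and reproduce the repair by hand (the node address and keyspace are taken from the logs above; assumes nodetool is run from a cluster node with JMX reachable):

        # Repair only this node's primary ranges for one of the affected keyspaces
        nodetool -h 10.1.0.22 repair -pr zs_logging

    If the manual primary-range repair completes cleanly, the "Requested range intersects a local range" error points at the subrange boundaries the repair service computes rather than at the cluster itself, which narrows the bug report considerably.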

  • Synergy 1.5 crash (OSX 10.6.8)

    - by Oliver
    Thanks for taking the time to read this. I recently installed Synergy 1.5 r2278 (for Mac OS X 10.6.8) and was using it fine for most of the day; then it decided to stop working (the only thing I changed system-wise was the screensaver - and after it started crashing I disabled that again, to see if it would resolve the issue). When I start Synergy (on the Mac - the client), after about 5 seconds (and successfully connecting to the server) it says: "synergyc quit unexpectedly".

    Here is the crash log (with the binary images section removed - too long for post requirements):

        Process:         synergyc [1026]
        Path:            /Applications/Synergy.app/Contents/MacOS/synergyc
        Identifier:      synergy
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  Synergy [1023]

        Date/Time:       2014-05-28 15:36:17.746 +0930
        OS Version:      Mac OS X 10.6.8 (10K549)
        Report Version:  6

        Interval Since Last Report:          2144189 sec
        Crashes Since Last Report:           23
        Per-App Interval Since Last Report:  10242 sec
        Per-App Crashes Since Last Report:   9
        Anonymous UUID:                      86D5A57C-13D4-470E-AC72-48ACDDDE5EB0

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  5

        Application Specific Information:
        abort() called

        Thread 0:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib          0x95cf3afa mach_msg_trap + 10
        1   libSystem.B.dylib          0x95cf4267 mach_msg + 68
        2   com.apple.CoreFoundation   0x95af02df __CFRunLoopRun + 2079
        3   com.apple.CoreFoundation   0x95aef3c4 CFRunLoopRunSpecific + 452
        4   com.apple.CoreFoundation   0x95aef1f1 CFRunLoopRunInMode + 97
        5   com.apple.HIToolbox        0x93654e04 RunCurrentEventLoopInMode + 392
        6   com.apple.HIToolbox        0x93654bb9 ReceiveNextEventCommon + 354
        7   com.apple.HIToolbox        0x937dd137 ReceiveNextEvent + 83
        8   synergyc                   0x000356d0 COSXEventQueueBuffer::waitForEvent(double) + 48
        9   synergyc                   0x00010dd5 CEventQueue::getEvent(CEvent&, double) + 325
        10  synergyc                   0x00011fb0 CEventQueue::loop() + 272
        11  synergyc                   0x00044eb6 CClientApp::mainLoop() + 134
        12  synergyc                   0x0005c509 standardStartupStatic(int, char**) + 41
        13  synergyc                   0x000448a9 CClientApp::runInner(int, char**, ILogOutputter*, int (*)(int, char**)) + 137
        14  synergyc                   0x0005c4b0 CAppUtilUnix::run(int, char**) + 64
        15  synergyc                   0x000427df CApp::run(int, char**) + 63
        16  synergyc                   0x00006e65 main + 117
        17  synergyc                   0x00006dd9 start + 53

        Thread 1:
        0   libSystem.B.dylib          0x95d607da __sigwait + 10
        1   libSystem.B.dylib          0x95d607b6 sigwait$UNIX2003 + 71
        2   synergyc                   0x00009583 CArchMultithreadPosix::threadSignalHandler(void*) + 67
        3   libSystem.B.dylib          0x95d21259 _pthread_start + 345
        4   libSystem.B.dylib          0x95d210de thread_start + 34

        Thread 2:
        0   libSystem.B.dylib          0x95d21aa2 __semwait_signal + 10
        1   libSystem.B.dylib          0x95d2175e _pthread_cond_wait + 1191
        2   libSystem.B.dylib          0x95d212b1 pthread_cond_timedwait$UNIX2003 + 72
        3   synergyc                   0x00009476 CArchMultithreadPosix::waitCondVar(CArchCondImpl*, CArchMutexImpl*, double) + 150
        4   synergyc                   0x0002b18f CCondVarBase::wait(double) const + 63
        5   synergyc                   0x0002ce68 CSocketMultiplexer::serviceThread(void*) + 136
        6   synergyc                   0x0002d698 TMethodJob<CSocketMultiplexer>::run() + 40
        7   synergyc                   0x0002b8f4 CThread::threadFunc(void*) + 132
        8   synergyc                   0x00008f30 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 80
        9   synergyc                   0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        10  libSystem.B.dylib          0x95d21259 _pthread_start + 345
        11  libSystem.B.dylib          0x95d210de thread_start + 34

        Thread 3:  Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib          0x95d1a382 kevent + 10
        1   libSystem.B.dylib          0x95d1aa9c _dispatch_mgr_invoke + 215
        2   libSystem.B.dylib          0x95d19f59 _dispatch_queue_invoke + 163
        3   libSystem.B.dylib          0x95d19cfe _dispatch_worker_thread2 + 240
        4   libSystem.B.dylib          0x95d19781 _pthread_wqthread + 390
        5   libSystem.B.dylib          0x95d195c6 start_wqthread + 30

        Thread 4:
        0   libSystem.B.dylib          0x95d19412 __workq_kernreturn + 10
        1   libSystem.B.dylib          0x95d199a8 _pthread_wqthread + 941
        2   libSystem.B.dylib          0x95d195c6 start_wqthread + 30

        Thread 5 Crashed:
        0   libSystem.B.dylib          0x95d610ee __semwait_signal_nocancel + 10
        1   libSystem.B.dylib          0x95d60fd2 nanosleep$NOCANCEL$UNIX2003 + 166
        2   libSystem.B.dylib          0x95ddbfb2 usleep$NOCANCEL$UNIX2003 + 61
        3   libSystem.B.dylib          0x95dfd6f0 abort + 105
        4   libSystem.B.dylib          0x95d79b1b _Unwind_Resume + 59
        5   synergyc                   0x00008fd1 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 241
        6   synergyc                   0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        7   libSystem.B.dylib          0x95d21259 _pthread_start + 345
        8   libSystem.B.dylib          0x95d210de thread_start + 34

        Thread 5 crashed with X86 Thread State (32-bit):
          eax: 0x0000003c  ebx: 0x95d60f39  ecx: 0xb0288a7c  edx: 0x95d610ee
          edi: 0x00521950  esi: 0xb0288ad8  ebp: 0xb0288ab8  esp: 0xb0288a7c
           ss: 0x0000001f  efl: 0x00000247  eip: 0x95d610ee   cs: 0x00000007
           ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
          cr2: 0x002fe000

        Model: MacBook2,1, BootROM MB21.00A5.B07, 2 processors, Intel Core 2 Duo, 2.16 GHz, 2 GB

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
    @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Abstract   Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems, based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems, to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for the Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes generic reference architecture for telecom enterprises and moves on to explain how to achieve Enterprise Reference Architecture by using SOA.   Introduction   A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It helps as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparisons for various departments/business entities of a company, or for the various companies for an enterprise. It provides multiple views for multiple stakeholders.   Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.   Purpose of Reference Architecture   In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.   
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to accommodate to their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of the data updates Is used to define a single source of Information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – Some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of the multiple, small applications without proper boundaries. ·       Usage of Legacy OSS/BSS systems, poor Integration across Multiple COTS Products and Internal Systems. Most of the Integrations are developed on ad-hoc basis and Point-to-Point Integration. ·       Redundancy of the business functions in different applications • Fragmented data across the different applications and no integrated view of the strategic data • Lot of performance Issues due to the usage of the complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum. 
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.   Industry Trends in Telecom Reference Architecture   Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.   “The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner President & CTO TM Forum.   The following trends have been observed in telecom reference architecture:   ·       Transformation of business structures to align with customer requirements ·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used. ·       Virtualization of the traditional operations support system (OSS) ·       Adoption of SOA to support development of IP-based services ·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem ·       (IMS) to enable seamless deployment of various services over fixed and mobile networks ·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products ·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products   Drivers of Reference Architecture   The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted below diagram. @font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoCaption, li.MsoCaption, div.MsoCaption { margin: 0cm 0cm 10pt; font-size: 9pt; font-family: "Times New Roman"; color: rgb(79, 129, 189); font-weight: bold; }div.Section1 { page: Section1; } Figure 1. Drivers for Reference Architecture @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems). 
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications on the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, back end systems ·       Be able to provision multiple services and service bundles • Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand, Services (e.g.  Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services • Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for the Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   Tele Management Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers,   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information. 
·       Operations Support System (OSS) layer, built around product, service, and resource inventories. ·       Networks layer – consists of Network elements and 3rd Party Systems. ·       Integration Layer – to maximize application communication and overall solution flexibility.   Reference architecture for telecom enterprises is depicted below. @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoCaption, li.MsoCaption, div.MsoCaption { margin: 0cm 0cm 10pt; font-size: 9pt; font-family: "Times New Roman"; color: rgb(79, 129, 189); font-weight: bold; }p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Figure 2. Telecom Reference Architecture   The major building blocks of any Telecom Service Provider architecture are as follows:   1. Customer Relationship Management   CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.   CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.   The key functionalities related to Customer Relationship Management are   ·       Manage the end-to-end lifecycle of a customer request for products. ·       Create and manage customer profiles. ·       Manage all interactions with customers – inquiries, requests, and responses. ·       Provide updates to Billing and other south bound systems on customer/account related updates such as customer/ account creation, deletion, modification, request bills, final bill, duplicate bills, credit limits through Middleware. ·       Work with Order Management System, Product, and Service Management components within CRM. ·       Manage customer preferences – Involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.). ·       Support single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.   CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to billing system for ordering processing.   2. 
2. Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are:   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers' billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging, where usage is charged to an account different from that of the service consumer. ·       Support prepaid and post-paid rating. ·       Send notifications on approaching or exceeding the usage thresholds enforced by the subscribed offer and/or set up by the customer. ·       Support prepaid, post-paid, and hybrid customers (where some services are prepaid and the rest of the services post-paid), and conversion from post-paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, accounts receivable, GL interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an outstanding invoice. ·       Send notifications to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceeded, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocols and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregate Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end applications. ·       Aggregate Usage Data Records from different sources, based on the aggregation rules, if any. ·       Consolidate multiple Usage Data Records from each source. ·       Deliver formatted Usage Data Records to different end applications like Billing, Interconnect, Fraud Management, etc. 
·       Generate an audit trail for incoming Usage Data Records and keep track of all Usage Data Records at the various stages of the mediation process. ·       Check for duplicate Usage Data Records across files within a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer updates on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are:   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the creditworthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to the customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that:   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission-critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: The Revenue Assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to an error bucket and never billed. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, and to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   The Fraud Management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud Management will be able to detect unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will alert the operator the very first time such illegal or unbilled calls occur. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems.   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to:   ·       Maintain knowledge base – Creation and updating of the knowledge base on an ongoing basis. ·       Search knowledge base – Search of the knowledge base by keyword or category browse. ·       Maintain metadata – Management of metadata on the knowledge base to ensure effective management and search. ·       Run the report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manuals, etc.   (iii) Document Management   It focuses on maintaining a system-based repository of all electronic documents, or images of paper documents, relevant to the enterprise.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are:   ·       Using ETL, extract data from CRM, Billing, web content, ERP, campaign management, financial, and network operations systems – asset management info, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures) – into the Enterprise Data Warehouse. 
·       Management of data traceability with source, data-related business rules/decisions, data quality, data cleansing, data reconciliation, and competitors' data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.). ·       Get online updates through night-time replication or a physical backup process at regular frequency. ·       Provide data access to business intelligence and other systems for their analysis, report generation, and use.   (v) Business Intelligence   It uses the enterprise data to provide the various analyses and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate the right, optimized plans – bolt-ons – for the customers.   The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:   ·       It will do pattern analysis and report problems. ·       It will do data analysis – statistical analysis, data profiling, affinity analysis of data, customer-segment-wise usage patterns on offers, products, and services, and revenue generation against services and customer segments. ·       It will do performance analysis (business, system, and forecast), churn propensity, response time, and SLA analysis. ·       It will support online and offline analysis, and report drill-down capability. ·       It will collect, store, and report various SLA data. ·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost-benefit analysis and predictions. ·       It will advise on customer promotions with additional services, based on the loyalty and credit history of the customer. ·       It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.   (vi) Stakeholder and External Relations Management   It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.   (vii) Enterprise Resource Planning   It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise-wide system environment.   The key roles and responsibilities for the Enterprise system are given below:   ·       It will handle responsibilities such as core accounting, financial, and management reporting. ·       It will interface with CRM for capturing customer account details. ·       It will interface with billing to capture the billing revenue and other financial data. ·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning. ·       It will interface with the CRM and Billing through batch interfaces.   Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems. 
E.g., an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.   6. External Interfaces/Touch Points   The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:   ·       Exchange of emails or faxes ·       Call Centers ·       Web Portals ·       Business-to-Business (B2B) automated transactions   These applications provide an Internet-technology-driven interface to external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.   Typical characteristics of these touch points are:   ·       Pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine ·       Self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment ·       Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application   These touch points mostly interact with the CRM systems for requests, inquiries, and responses.   7. Middleware   This component will be primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.   ·       Act as an integration framework, covering interfaces to and from other components. ·       Provide a web service framework with a service registry. ·       Support an SOA framework with an SOA service registry. ·       Each of the interfaces from/to Middleware to other components would handle data transformation, translation, and mapping of data points. ·       Receive data from the caller/activator and/or forward the data to the recipient system in XML format. ·       Use standard XML for data exchange. ·       Provide the response back to the service/call initiator. ·       Provide tracking until response completion. ·       Keep a store of transitional data for each call/transaction. ·       Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems; e.g., customer profile, customer history, etc. ·       Provide data in a common, unified format to the SOA calls across systems, following the Enterprise Architecture directive. ·       Provide an audit trail for all transactions being handled by the component.   8. Network Elements   The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.   Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.   
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing systems for rating and billing. They also integrate with provisioning systems for order/service fulfillment.   9. 3rd Party Applications   3rd Party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the Government.   Depending on applicability and the type of functionality provided by the 3rd party applications, integration with different telecom systems like CRM, provisioning, and billing will be done.   10. Service Delivery Platform   A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.   SOA Reference Architecture   The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It's about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.   In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:   ·       The business to specify processes as orchestrations of reusable services ·       Technology-agnostic business design, with technology hidden behind the service interface ·       A contract-like interaction between business and IT, based on service SLAs ·       Accountability and governance, better aligned to business services ·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change ·       Reduced pressure to replace legacy applications, and an extended lifetime for them, through encapsulation in services   ·       A Cloud Computing paradigm, using web services technologies, that makes possible service outsourcing on an on-demand, utility-like, pay-per-usage basis   The following section presents the logical view of the Reference Architecture for the Telecom Solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits.   Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and interact with other applications as service consumers.   COTS applications need to expose services through wrappers such as adapters, to utilize existing resources and at the same time achieve the Enterprise Architecture goals and objectives; a minimal sketch of such a wrapper follows.   
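To make the wrapper idea concrete, the sketch below fronts a hypothetical legacy/COTS billing API with a service contract. This is purely illustrative: the interface, class, and method names are invented for this example and are not prescribed by NGOSS, TAM, or this paper; a real adapter would map to the actual COTS product's API.

// Illustrative sketch of a COTS wrapper (adapter): a service contract fronting
// a hypothetical legacy billing API so that consumers depend on the service
// interface, not on the product's native calls. All names here are invented.
using System.ServiceModel;

[ServiceContract(Namespace = "http://example.com/enterprise/billing")]
public interface IAccountBalanceService
{
    [OperationContract]
    decimal GetAccountBalance(string accountId);
}

// The adapter translates between the enterprise service contract and the
// legacy/COTS API, keeping the product-specific interface hidden.
public class AccountBalanceAdapter : IAccountBalanceService
{
    private readonly LegacyBillingApi _legacy = new LegacyBillingApi();

    public decimal GetAccountBalance(string accountId)
    {
        // Data mapping/translation between the enterprise model and the
        // COTS model happens here, behind the contract.
        return _legacy.QueryBalance(accountId);
    }
}

// Stand-in for the real COTS/legacy billing component.
public class LegacyBillingApi
{
    public decimal QueryBalance(string accountId)
    {
        return 0m; // placeholder
    }
}
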
The following are the various layers for Enterprise-level deployment of SOA. This diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.   The diagram below represents the important logical pieces that would result from overall SOA transformation. Figure 3. Enterprise SOA Reference Architecture 1.          Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfillment, and the Enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.   ERP holds the data of Asset Lifecycle Management, Supply Chain and Advanced Procurement, Human Capital Management, etc.   CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.   Content Management handles Enterprise Search and Query. The Billing application consists of the following components:   ·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges   Enterprise databases will hold both the application and service data, whether structured or unstructured.   MDM – Master data consists mainly of Customer, Order, Product, and Service data.     2.          Enterprise Component Layer:   This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.   Application Services: This service layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services. 
The three types of Application Access Services are:   ·       Application Access Service: This service layer exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. This layer is enabled using disparate technologies such as Web Services, Integration Servers, Adaptors, etc.   ·       Data Access Service: This service layer exposes application data services as reusable reference data services. This is done via direct interaction with application data, and provides federated query.   ·       Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.   Common Services encompass the management of structured, semi-structured, and unstructured data, such as information services, portal services, interaction services, infrastructure services, security services, etc.   3.          Integration Layer:   This consists of service infrastructure components like the service bus, a service gateway for partner integration, a service registry, a service repository, and a BPEL processor. The service bus will carry the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all messaging qualities of service, like reliability, scalability, availability, etc. The service registry will hold all contracts (WSDL) of services, and it helps developers locate or discover services at design time or runtime.   The BPEL processor would be useful in orchestrating the services to compose a complex business scenario or process. Workflow and business rules management are also required, to support manual triggering of certain activities within a business process based on the rules setup and the state machine information. The application, data, and service mediation layers typically form the overall composite application development framework, or SOA Framework.   4.          Business Process Layer: This is typically the intermediate services layer and represents Shared Business Process Services. At the Enterprise Level, these services are from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.   5.          Access Layer: This layer consists of portals for the Enterprise and provides a single view of Enterprise information management and dashboard services.   6.          Channel Layer: This consists of the various devices and applications that form part of the extended enterprise, and the browsers through which users access the applications.   7.          Client Layer: This designates the different types of users accessing the enterprise applications. The type of user typically would be an important factor in determining the level of access to applications.   8.          Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all aspects of SOA, like services, SLAs, and other QoS lifecycle processes for both applications and services, surrounding SOA governance.     9.          EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices:   EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise. 
This involves board-level involvement, in addition to business and IT executives. At a high level, this involves managing the SOA projects' implementation, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and related Technology).   Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.   Conclusions   Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in a vendor-independent way of dealing with technology and standards. Reference Architectures model the abstract architectural elements for an enterprise independent of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices could go a long way in meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems, like Billing, CRM, etc., in the OSS/BSS system has made the architecture very loosely coupled, with greater flexibility. Any change in the external systems would be absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: - Clients/servers that are involved in the authentication process cannot use Kerberos. - The client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). 
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In the illustration above, the tiers represent the following: Client tier (computer 1): The client computer from which an application makes a request. Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now). Back end tier (computer 3): The Database/Analysis Services server/cluster where the requested data is stored. In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation, and configure the authentication types for Reporting Services. Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

--SQL Server Service
SETSPN -S mssqlsvc/servername:1433 Domain\SQL

For named instances, or if the default instance is running under a different port, then the specific port number should be used.

--Reporting Services Service
SETSPN -S http/servername Domain\SSRS
SETSPN -S http/servername.domain.com Domain\SSRS

The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:

SETSPN -S http/www.reports.com Domain\SSRS

--Analysis Services Service
SETSPN -S msolapsvc.3/servername Domain\SSAS

Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:

Client:
1. The requesting application must support the Kerberos authentication protocol.
2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated.

Servers:
1. The service accounts must be trusted for delegation on the domain controller.
2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs. 
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports). Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server.

<Authentication>
   <AuthenticationTypes>
      <RSWindowsNegotiate/>
   </AuthenticationTypes>
   <EnableAuthPersistence>true</EnableAuthPersistence>
</Authentication>

This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly, by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end).

Resources:

Manage Kerberos Authentication Issues in a Reporting Services Environment
http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx

Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176

How to: Configure Windows Authentication in Reporting Services
http://msdn.microsoft.com/en-us/library/cc281253.aspx

RSReportServer Configuration File
http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication

Planning for Browser Support
http://msdn.microsoft.com/en-us/library/ms156511.aspx

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading for almost all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

So I set out to see what the improvements would be for the ConcurrentDictionary; would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests:

internal class Accessor
{
    public int Hits { get; set; }
    public int Misses { get; set; }
    public Func<int, string> GetDelegate { get; set; }
    public Action<int, string> AddDelegate { get; set; }
    public int Iterations { get; set; }
    public int MaxRange { get; set; }
    public int Seed { get; set; }

    public void Access()
    {
        var randomGenerator = new Random(Seed);

        for (int i = 0; i < Iterations; i++)
        {
            // give a wide spread so will have some duplicates and some unique
            var target = randomGenerator.Next(1, MaxRange);

            // attempt to grab the item from the cache
            var result = GetDelegate(target);

            // if the item doesn't exist, add it
            if (result == null)
            {
                AddDelegate(target, target.ToString());
                Misses++;
            }
            else
            {
                Hits++;
            }
        }
    }
}

Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques.

So let's examine the three techniques I decided to test:

- Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object.
- Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
- ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

Action<int, string> addDelegate = (key, val) =>
{
    lock (_mutex)
    {
        _dictionary[key] = val;
    }
};
Func<int, string> getDelegate = (key) =>
{
    lock (_mutex)
    {
        string val;
        return _dictionary.TryGetValue(key, out val) ? val : null;
    }
};

Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

Action<int, string> addDelegate = (key, val) =>
{
    _readerWriterLock.EnterWriteLock();
    _dictionary[key] = val;
    _readerWriterLock.ExitWriteLock();
};
Func<int, string> getDelegate = (key) =>
{
    string val;
    _readerWriterLock.EnterReadLock();
    if (!_dictionary.TryGetValue(key, out val))
    {
        val = null;
    }
    _readerWriterLock.ExitReadLock();
    return val;
};

And finally, the ConcurrentDictionary, which since it does all its own concurrency control, is remarkably elegant and simple:

Action<int, string> addDelegate = (key, val) =>
{
    _concurrentDictionary[key] = val;
};
Func<int, string> getDelegate = (key) =>
{
    string s;
    return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
};

Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds.

Dictionary with Mutex Locking
---------------------------------------------------
Accessors    Mostly Misses    Mostly Hits
1            7916             3285
10           8293             3481
100          8799             3532
1000         8815             3584

Dictionary with ReaderWriterLockSlim Locking
---------------------------------------------------
Accessors    Mostly Misses    Mostly Hits
1            8445             3624
10           11002            4119
100          11076            3992
1000         14794            4861

Concurrent Dictionary
---------------------------------------------------
Accessors    Mostly Misses    Mostly Hits
1            17443            3726
10           14181            1897
100          15141            1994
1000         17209            2128

The first test I did across the board is the Mostly Misses category.  The mostly misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds".  So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary.  I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well.  This makes sense since the ConcurrentDictionary is read-optimized.

Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement.  I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each?

Use System.Collections.Generic.Dictionary when:

- You need a single-threaded Dictionary (no locking needed).
- You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
- You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

And use System.Collections.Concurrent.ConcurrentDictionary when:

- You need a multi-threaded Dictionary where the reads are far more prevalent than the writes.
- You need to be able to iterate over the collection without locking it, even if it's being modified.

Both Dictionaries have their strong suits; I have a feeling this is just one of those cases where you need to know from design what you hope to use it for, and make your decision based on that criteria.
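As a small addendum: the hit/miss pattern in the Accessor above can also be collapsed into a single call with ConcurrentDictionary.GetOrAdd(), which performs the lookup and the add atomically.  A minimal sketch (not part of the original timing tests):

// Sketch: the TryGetValue-then-add cache pattern expressed as a single
// GetOrAdd call. The value factory runs only on a miss; note it can run
// more than once under contention, but only one produced value is stored.
using System;
using System.Collections.Concurrent;

public static class CacheSketch
{
    private static readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    public static string Lookup(int key)
    {
        return _cache.GetOrAdd(key, k => k.ToString());
    }
}

One caveat with this convenience: because the factory may execute multiple times when threads race on the same missing key, it should be cheap and side-effect free.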

    Read the article

  • L2TP connection fails!

    - by a.toraby
I've installed l2tp-ipsec-vpn but when I try to connect to the vpn server I get error 500. Here are the logs:

Jun 17 12:54:37.449 ipsec_setup: Stopping Openswan IPsec...
Jun 17 12:54:38.858 Stopping xl2tpd: xl2tpd.
Jun 17 12:54:38.859 xl2tpd[1511]: death_handler: Fatal signal 15 received
Jun 17 12:54:38.872 ipsec_setup: Starting Openswan IPsec U2.6.37/K3.2.0-23-generic...
Jun 17 12:54:39.027 ipsec__plutorun: Starting Pluto subsystem...
Jun 17 12:54:39.033 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
Jun 17 12:54:39.037 recvref[30]: Protocol not available
Jun 17 12:54:39.038 xl2tpd[2442]: This binary does not support kernel L2TP.
Jun 17 12:54:39.038 xl2tpd[2444]: xl2tpd version xl2tpd-1.3.1 started on atp-ThinkPad-SL410 PID:2444
Jun 17 12:54:39.038 xl2tpd[2444]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
Jun 17 12:54:39.038 xl2tpd[2444]: Forked by Scott Balmos and David Stipp, (C) 2001
Jun 17 12:54:39.038 xl2tpd[2444]: Inherited by Jeff McAdams, (C) 2002
Jun 17 12:54:39.039 xl2tpd[2444]: Forked again by Xelerance (www.xelerance.com) (C) 2006
Jun 17 12:54:39.039 xl2tpd[2444]: Listening on IP address 0.0.0.0, port 1701
Jun 17 12:54:39.040 Starting xl2tpd: xl2tpd.
Jun 17 12:54:39.062 ipsec__plutorun: 002 added connection description "L2TP"
Jun 17 12:55:30.753 104 "L2TP" #1: STATE_MAIN_I1: initiate
Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 20s for response
Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 40s for response
Jun 17 12:55:30.754 003 "L2TP" #1: ignoring Vendor ID payload [MS NT5 ISAKMPOAKLEY 00000008]
Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [RFC 3947] method set to=109
Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 109
Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [FRAGMENTATION]
Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [MS-Negotiation Discovery Capable]
Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [IKE CGA version 1]
Jun 17 12:55:30.755 106 "L2TP" #1: STATE_MAIN_I2: sent MI2, expecting MR2
Jun 17 12:55:30.755 010 "L2TP" #1: STATE_MAIN_I2: retransmission; will wait 20s for response
Jun 17 12:55:30.755 003 "L2TP" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
Jun 17 12:55:30.755 108 "L2TP" #1: STATE_MAIN_I3: sent MI3, expecting MR3
Jun 17 12:55:30.756 004 "L2TP" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}
Jun 17 12:55:30.756 117 "L2TP" #2: STATE_QUICK_I1: initiate
Jun 17 12:55:30.756 010 "L2TP" #2: STATE_QUICK_I1: retransmission; will wait 20s for response
Jun 17 12:55:30.756 003 "L2TP" #2: ignoring informational payload, type IPSEC_RESPONDER_LIFETIME msgid=6b03ff69
Jun 17 12:55:30.756 003 "L2TP" #2: NAT-Traversal: received 2 NAT-OA. ignored because peer is not NATed
Jun 17 12:55:30.756 003 "L2TP" #2: our client subnet returned doesn't match my proposal - us:192.168.1.3/32 vs them:109.162.174.235/32
Jun 17 12:55:30.757 003 "L2TP" #2: Allowing questionable proposal anyway [ALLOW_MICROSOFT_BAD_PROPOSAL]
Jun 17 12:55:30.757 004 "L2TP" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x23af21f8 <0xdb4a87b6 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}
Jun 17 12:55:31.759 xl2tpd[2444]: Connecting to host x.x.x.x, port 1701
Jun 17 12:55:32.021 xl2tpd[2444]: Connection established to x.x.x.x, 1701. Local: 4720, Remote: 200 (ref=0/0).
Jun 17 12:55:32.023 xl2tpd[2444]: Calling on tunnel 4720
Jun 17 12:55:32.454 xl2tpd[2444]: Call established with x.x.x.x, Local: 9667, Remote: 3, Serial: 1 (ref=0/0)
Jun 17 12:55:32.456 xl2tpd[2444]: start_pppd: I'm running:
Jun 17 12:55:32.456 xl2tpd[2444]: "/usr/sbin/pppd"
Jun 17 12:55:32.457 xl2tpd[2444]: "passive"
Jun 17 12:55:32.458 xl2tpd[2444]: "nodetach"
Jun 17 12:55:32.458 xl2tpd[2444]: ":"
Jun 17 12:55:32.459 xl2tpd[2444]: "file"
Jun 17 12:55:32.459 xl2tpd[2444]: "/etc/ppp/L2TP.options.xl2tpd"
Jun 17 12:55:32.460 xl2tpd[2444]: "ipparam"
Jun 17 12:55:32.461 xl2tpd[2444]: "x.x.x.x"
Jun 17 12:55:32.462 xl2tpd[2444]: "/dev/pts/1"
Jun 17 12:55:32.583 pppd[2711]: Plugin passprompt.so loaded.
Jun 17 12:55:32.583 pppd[2711]: pppd 2.4.5 started by root, uid 0
Jun 17 12:55:32.619 pppd[2711]: Using interface ppp0
Jun 17 12:55:32.620 pppd[2711]: Connect: ppp0 <--> /dev/pts/1
Jun 17 12:55:33.693 pppd[2711]: /usr/bin/L2tpIPsecVpn exited with code 0
Jun 17 12:55:34.454 [ERROR 404] Authentication failed: closing connection to 'L2TP'
Jun 17 12:55:34.456 pppd[2711]: MS-CHAP authentication failed: E=691 Authentication failure
Jun 17 12:55:34.457 pppd[2711]: CHAP authentication failed
Jun 17 12:55:34.461 Stopping xl2tpd: xl2tpd.
Jun 17 12:55:34.462 xl2tpd[2444]: death_handler: Fatal signal 15 received
Jun 17 12:55:34.463 pppd[2711]: Modem hangup
Jun 17 12:55:34.463 pppd[2711]: Connection terminated.
Jun 17 12:55:34.474 ipsec_setup: Stopping Openswan IPsec...
Jun 17 12:55:34.482 pppd[2711]: Exit.
Jun 17 12:55:35.587 ipsec_setup: ERROR: Module xfrm4_mode_transport is in use
Jun 17 12:55:35.665 ipsec_setup: ERROR: Module esp4 is in use

I had this problem with Ubuntu 11.10 too, though I can easily connect to the server from Windows. I use Ubuntu 12.0 64-bit.

    Read the article

  • Umbraco Permissions Script - Secure Version

    - by Vizioz Limited
Back in May I blogged about how to set Permissions for Umbraco using SetACL to set the appropriate directory permissions based on the installation recommendations. Recently I have been working on a site for a client who wanted every security item to be locked down as tightly as possible, and so I modified the script based on the Umbraco security best practices. I thought I'd share it with everyone; if I have missed anything, or if anyone has any suggestions on how to improve this, please let me know :)

Please refer to my previous post regarding the SetACL command line application that you will need.

I suggest you save the following into a batch file called: umbPermSecure.bat

echo off
REM Script to setup the Security Permissions for an Umbraco site
REM This script will give your machine Network Service the minimum rights required
REM for Umbraco to work
REM I suggest you update this script to also remove any users who do not need
REM access to the web folders
REM **** Pre-requisites ****
REM You will need to download - http://setacl.sourceforge.net/
REM It is assumed that you have stored SetACL in a directory called, C:\SetACL if
REM not, you will need to modify the script.
REM **** Usage ****
REM You need to pass in the path for the root of your Umbraco directory
REM E.g. umbPermSecure.bat C:\inetpub\umbracoroot
@echo umbPermSecure.bat - Script to set Umbraco File and Directory Permissions
@echo based on the Umbraco Security Best Practices Document (13th March 2009)
@echo Published by Chris Houston - 19th October 2009
@echo http://blog.vizioz.com
@echo Adding READ only access
SetACL.exe -on "%1" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
@echo Adding READ and EXECUTE access
SetACL.exe -on "%1\app_code" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
@echo Adding READ, WRITE and MODIFY access
SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\media" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\python" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
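As an aside, if you'd rather apply the same kind of grant from managed code instead of SetACL, the .NET ACL API can do it. The sketch below is illustrative only: the path and the ReadAndExecute right are examples, so map each SetACL line above to the rights your own audit requires.

// Sketch: grant NETWORK SERVICE read & execute on a directory (inherited by
// children) using System.Security.AccessControl instead of SetACL.exe.
// The path and rights are illustrative, not a drop-in replacement for the script.
using System.IO;
using System.Security.AccessControl;

class GrantUmbracoRead
{
    static void Main(string[] args)
    {
        // e.g. C:\inetpub\umbracoroot\umbraco
        DirectoryInfo dir = new DirectoryInfo(args[0]);
        DirectorySecurity acl = dir.GetAccessControl();

        acl.AddAccessRule(new FileSystemAccessRule(
            @"NT AUTHORITY\NETWORK SERVICE",
            FileSystemRights.ReadAndExecute,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));

        dir.SetAccessControl(acl);
    }
}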

    Read the article

  • Play Your Favorite DOS Games in XP, Vista, and Windows 7

    - by Matthew Guay
Want to take a trip down memory lane with old school DOS games?  D-Fend Reloaded makes it easy for you to play your favorite DOS games directly on XP, Vista, and Windows 7. D-Fend Reloaded is a great frontend for DOSBox, the popular DOS emulator.  It lets you install and run many DOS games and applications directly from its interface without ever touching a DOS prompt.  It works great on XP, Vista, and Windows 7, in both 32 & 64-bit versions.   Getting Started Download D-Fend Reloaded (link below), and install with the default settings.  You don't need to install DOSBox, as D-Fend Reloaded will automatically install all the components you need to run DOS games on Windows. D-Fend Reloaded can also be installed as a portable application, so you can run it from a flash drive on any Windows computer, by selecting User defined installation. Then select Portable mode installation. Once D-Fend Reloaded is installed, you can go ahead and open the program. Then simply click "Accept all settings" to apply the default settings.   D-Fend is now ready to run all of your favorite DOS games. Installing DOS Games and Applications: To install a DOS game or application, simply drag-and-drop a zip file of the app into D-Fend Reloaded's window.  D-Fend Reloaded will automatically extract the program, and will then ask you to name the application and choose where to store it (by default it uses the name of the DOS app). Now you'll see a new entry for the app you just installed.  Simply double-click to run it.   D-Fend will remind you that you can switch out of fullscreen mode by pressing Alt+Enter, and can also close the DOS application by pressing Ctrl+F9.  Press Ok to run the program. Here we're running Ms. Pac-PC, a remake of the classic game Ms. Pac-Man, in full-screen mode.  All features work automatically, including sound, and you never have to set up anything from the DOS command line; it just works. Here it's in windowed mode running on Windows 7. Please note that your color scheme may change to Windows Basic while running DOS applications. You can run DOS applications just as easily.  Here's Word 5.5 running in DOSBox through D-Fend Reloaded… Game Packs: Want to quickly install many old DOS freeware and trial games?  D-Fend Reloaded offers several game packs that let you install dozens of DOS games with only four clicks… just download and run the game pack installer of your choice (link below). Now you've got a selection of DOS games to choose from. Here's a group of poor lemmings walking around … in Windows 7. Conclusion D-Fend Reloaded gives you a great way to run your favorite DOS games and applications directly from XP, Vista, and Windows 7.  Give it a try, and relive your DOS days from the comfort of your Windows desktop. What were some of your favorite DOS games and applications? Leave a comment and let us know. Links Download D-Fend Reloaded Download DOS game packs for D-Fend Reloaded Download Ms. 

    Read the article

  • AspNetCompatibility in WCF Services – easy to trip up

    - by Rick Strahl
    This isn’t the first time I’ve hit this particular wall: I’m creating a WCF REST service for AJAX callbacks and using the WebScriptServiceHostFactory host factory in the service:

        <%@ ServiceHost Language="C#" Service="WcfAjax.BasicWcfService"
            CodeBehind="BasicWcfService.cs"
            Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

    to avoid all configuration. Because the custom factory creates the ASP.NET AJAX compatible format, I can then remove all of the configuration settings that typically get dumped into the web.config file. However, I do want ASP.NET compatibility, so I still leave in:

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
        </system.serviceModel>

    in the web.config file. This option gives you access to the HttpContext.Current object and thus to most of the standard ASP.NET request and response features. This is not recommended as a primary practice, but it can be useful in some scenarios and in backward-compatibility scenarios with ASP.NET AJAX Web Services.

    Now, here’s where things get funky. Assuming you have the setting in web.config, if you now declare a service like this:

        [ServiceContract(Namespace = "DevConnections")]
        #if DEBUG
        [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        #endif
        public class BasicWcfService

    (or by using an interface that defines the service contract) you’ll find that the service will not work when an AJAX call is made against it. You’ll get a 500 error and a System.ServiceModel.ServiceActivationException error. Worse, even with IncludeExceptionDetailInFaults enabled you get absolutely no indication from WCF what the problem is.

    So what’s the problem? The issue is that once you specify aspNetCompatibilityEnabled="true" in the configuration, you *have to* specify the AspNetCompatibilityRequirements attribute with one of the modes that enables, or at least allows, ASP.NET compatibility. You need either Required or Allowed:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]

    Without it the service will simply fail without further warning. It will also fail if you set the attribute value to NotAllowed, so the following causes the service to fail as above:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]

    This is not totally unreasonable, but it’s a difficult issue to debug, especially since the configuration setting is global: if you have more than one service and one requires traditional ASP.NET access while another doesn’t, both must have the attribute specified. This is one reason why you’d want to avoid using this functionality unless absolutely necessary. WCF REST provides some basic access to HTTP features after all, although what’s there is severely limited. I also wish that ServiceActivation errors would provide more error information. Getting an activation error without further info on what actually is wrong is pretty worthless, especially when it is caused by a technicality like a mismatched configuration/attribute setting like this.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, WCF, AJAX

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

    NOTE: in this article I will focus on the High database size throttling condition. I will try and work on the other conditions in some not too distant future…

    Test conditions

    The singleton orchestration
    For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton: it piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

    Throttling settings
    I have two distinct hosts:
    - one that hosts the receive port (a basic FILE port): Ports_ReceiveHost
    - one that hosts the orchestration: ProcessingHost
    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):
    [Throttling thresholds] Message count in database: 500 (default value: 50000)

    Evolution of performance counters when submitting messages
    Since we are investigating the High database size throttling condition, here are the performance counters we should take a look at (all of them are in the BizTalk:Message Agent performance object):
    - Database size
    - High database size
    - Message delivery throttling state
    - Message publishing throttling state
    - Message delivery delay (ms)
    - Message publishing delay (ms)
    - Message delivery throttling state duration
    - Message publishing throttling state duration
    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

    Database size
    It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message every 10 ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened:
    - from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504;
    - at 15:47:50, the high database size alert fired.
    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) began growing slowly from that point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it exceeds the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

    Digging further
    So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling stopped and the database size was halved; 30 min later again, the database size had dropped to almost zero. I guess I’ll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

    Digging even further
    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)

    A last word
    Let’s end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to simply bypass throttling by setting very high values for each parameter (or zero in some cases, which disables throttling altogether). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding!! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • Custom Error, 404, 401 pages in SharePoint…

    - by Shawn Cicoria
    In WSS 3.0/MOSS 2007 we had to resort to things like HttpModules [1] for custom error and access denied pages, or to updating the WebApp properties for 404 errors [2]. Well, in 2010 (thanks to Andrew Connell for pointing this out), Todd Carter blogs about what we now have in SPS 2010 here: http://todd-carter.com/post/2010/04/07/An-Expected-Error-Has-Occurred.aspx

    [1] http://blogs.msdn.com/ketaanhs/archive/2009/03/16/moss-sharepoint-2007-custom-error-page-and-access-denied-page.aspx
    [2] http://blogs.msdn.com/jingmeili/archive/2007/04/08/how-to-create-your-own-custom-404-error-page-and-handle-redirect-in-sharepoint-2007-moss.aspx

    Read the article

  • Abstracting away the type of a property

    - by L. De Leo
    In Python, luckily, most of the time you don’t have to write getters and setters to get access to class attributes. That said, sometimes you’ll have to remember that a certain attribute is a list (or some other container), and a property would save you there by abstracting away the type: providing a method to add something to that list, for example, rather than exposing the list directly. Where do you draw the line between exposing the type directly and wrapping its access in a property? What’s the general “pythonic” advice?
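    To make the trade-off concrete, here is a minimal illustrative sketch (none of it is from the question above; the Playlist name, the add_track method, and the validation rule are all hypothetical) of a class that hides its internal list behind a property and a mutator method:

        class Playlist:
            """Hide the internal track list behind a property and a mutator."""

            def __init__(self):
                self._tracks = []  # private storage; callers never touch the raw list

            @property
            def tracks(self):
                # Expose a read-only snapshot so callers cannot mutate internal state
                return tuple(self._tracks)

            def add_track(self, track):
                # All mutation funnels through one method, so validation lives in one place
                if not track:
                    raise ValueError("track must be non-empty")
                self._tracks.append(track)

        playlist = Playlist()
        playlist.add_track("Blue Monday")
        print(playlist.tracks)  # ('Blue Monday',)

    The usual rule of thumb: if changing the container later (say, from a list to a set or a database query) should not break callers, wrap it as above; if the attribute really is just a list and callers legitimately need full list semantics, expose it directly.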

    Read the article

< Previous Page | 383 384 385 386 387 388 389 390 391 392 393 394  | Next Page >