Search Results

Search found 4015 results on 161 pages for 'packet capture'.


  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to setting up a RADIUS server to authenticate WPA2 against an LDAP, on Ubuntu. I've got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We've got a separate physical network just for WiFi, so not too many worries about security on that front. Our APs are HP's low-end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby!

    And the bad news: I know somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with the exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure" things if avoidable.

    UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle:

        Ignoring EAP-Type/tls because we do not have OpenSSL support.
        Ignoring EAP-Type/ttls because we do not have OpenSSL support.
        Ignoring EAP-Type/peap because we do not have OpenSSL support.

    Basically the Ubuntu build of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP types useless. Bummer. But some useful documentation for anybody interested: http://vuksan.com/linux/dot1x/802-1x-LDAP.html and http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius

    UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (see the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible). Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works.

    UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP. I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of:

        uid: testuser
        sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05
        sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90
        userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja

    When using radtest, I can connect fine:

        > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password
        Sending Access-Request of id 215 to 130.225.235.6 port 1812
            User-Name = "msiebuhr"
            User-Password = "mr2Yx36N"
            NAS-IP-Address = 127.0.1.1
            NAS-Port = 0
        rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20
        >

    But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords:

        ...
        rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930
        rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035
        [ldap] looking for reply items in directory...
        WARNING: No "known good" password was found in LDAP.  Are you sure that the user is configured correctly?
        [ldap] user testuser authorized to use remote access
        rlm_ldap: ldap_release_conn: Release Id: 0
        ++[ldap] returns ok
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] Normalizing NT-Password from hex encoding
        [pap] Normalizing LM-Password from hex encoding
        ...

    It is clear that the NT and LM passwords differ from the above, yet the message says [ldap] user testuser authorized to use remote access - and the user is later rejected...
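
    A small aside for anyone reproducing the radtest check programmatically: the sketch below does the same PAP Access-Request from Python using the third-party pyrad library (an assumption - it is not something the poster used). The server address, shared secret and credentials are simply the placeholders quoted above, and this only exercises the PAP path, not the EAP types the AP uses.

        # Minimal sketch of the radtest check, using the third-party pyrad library.
        # Server IP, secret and credentials are the placeholder values from the post.
        from pyrad.client import Client
        from pyrad.dictionary import Dictionary
        from pyrad import packet

        srv = Client(server="130.225.235.6",
                     secret=b"radius-private-password",
                     dict=Dictionary("dictionary"))  # path to a FreeRADIUS attribute dictionary

        req = srv.CreateAuthPacket(code=packet.AccessRequest, User_Name="testuser")
        req["User-Password"] = req.PwCrypt("mr2Yx36N")  # PAP only; EAP needs a real supplicant

        reply = srv.SendPacket(req)
        if reply.code == packet.AccessAccept:
            print("Access-Accept - the LDAP lookup and PAP path are working")
        else:
            print("Access-Reject - check the rlm_ldap / pap debug output")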

    Read the article

  • Cannot ping Localhost so I can't shutdown Tomcat

    - by gav
    Hi, I installed Tomcat 6 using the tarball via wget. Startup of the server is fine but on shutdown I get a timeout exception.

        root@88:/usr/local/tomcat/logs# /usr/local/tomcat/bin/shutdown.sh
        Using CATALINA_BASE:   /usr/local/tomcat
        Using CATALINA_HOME:   /usr/local/tomcat
        Using CATALINA_TMPDIR: /usr/local/tomcat/temp
        Using JRE_HOME:        /usr
        Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar
        30-Mar-2010 17:33:41 org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        ...

    I read that this might be because I have a firewall blocking incoming connections on the shutdown port (8005). I have a default Ubuntu 9.04 installation running on a VPS with no rules in my iptables. How can I tell if that port is blocked? How can I check that the server is listening for connections on 8005? Bizarrely, pinging localhost or the IP of my server fails from the server itself, whereas pinging the IP of my server from another machine succeeds.

    -------- EDIT -------- (In reply to Davey) Thanks for all the tips and suggestions!

        netstat -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
        tcp        0      0 127.0.0.1:8005   0.0.0.0:*         LISTEN   9611/java
        tcp        0      0 127.0.0.1:3306   0.0.0.0:*         LISTEN   28505/mysqld
        tcp        0      0 0.0.0.0:8080     0.0.0.0:*         LISTEN   9611/java
        tcp        0      0 0.0.0.0:22       0.0.0.0:*         LISTEN   ...

    So we can see that Tomcat is listening, I just don't seem to be able to reach it.

        root@88:/usr/local/tomcat# telnet localhost 8005
        Trying 127.0.0.1...

    Trying to telnet to the port hangs indefinitely. I have no rules in my iptables so I don't think it's a firewall thing.

        root@88:/usr/local/tomcat# iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    This is the contents of /etc/hosts:

        127.0.0.1 localhost.localdomain localhost
        # Auto-generated hostname. Please do not remove this comment.
        88.198.31.14 88.198.31.14 88 88

    But I still can't ping localhost... do I need to check that a loopback device is enabled properly or something? (I'm unsure how to do that if you do say yes :)).

        root@88:/usr/local/tomcat# ping localhost
        PING localhost (127.0.0.1) 56(84) bytes of data.
        --- localhost ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 5999ms

    Trying to find out what the loopback is configured as:

        root@88:~# ifconfig lo
        lo    Link encap:Local Loopback
              LOOPBACK  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    SOLUTION THANKS TO DAVEY: I needed to bring up the interface (not sure why it wasn't running). ifconfig lo up did the trick.

        root@88:~# ifconfig lo up
        root@88:~# ifconfig lo
        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        root@88:~# ping localhost
        PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms

    Thanks again, Gav
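
    For reference, a quick way to reproduce the "telnet localhost 8005" test without it hanging forever is a short socket check with a timeout - a sketch only, with the host and port taken from the post and the timeout chosen arbitrarily:

        # Check whether the Tomcat shutdown port on the loopback interface is reachable.
        import socket

        try:
            with socket.create_connection(("127.0.0.1", 8005), timeout=5) as sock:
                print("Connected - loopback and the shutdown port are reachable")
        except OSError as exc:
            print("Cannot reach 127.0.0.1:8005 - check that 'lo' is up:", exc)

    If the loopback interface is down, as it turned out to be here, the connection attempt fails or times out instead of completing.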

    Read the article

  • Cannot join Win7 workstations to Win2k8 domain

    - by wfaulk
    I am trying to connect a Windows 7 Ultimate machine to a Windows 2k8 domain and it's not working. I get this error:

        Note: This information is intended for a network administrator. If you are not your network's
        administrator, notify the administrator that you received this information, which has been
        recorded in the file C:\Windows\debug\dcdiag.txt.

        DNS was successfully queried for the service location (SRV) resource record used to locate a
        domain controller for domain "example.local":

        The query was for the SRV record for _ldap._tcp.dc._msdcs.example.local

        The following domain controllers were identified by the query:
        dc1.example.local
        dc2.example.local

        However no domain controllers could be contacted.

        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP
          addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    The client is in an office connected remotely via MPLS to the data center where our domain controllers exist. I don't seem to have anything blocking connectivity to the DCs, but I don't have total control over the MPLS circuit, so it's possible that there's something blocking connectivity. I have tried multiple clients (Win7 Ultimate and WinXP SP3) in the one office and get the same symptoms on all of them. I have no trouble connecting to either of the domain controllers, though I have, admittedly, not tried every possible port. ICMP, LDAP, DNS, and SMB connections all work fine. Client DNS is pointing to the DCs, and "example.local" resolves to the two IP addresses of the DCs.

    I get this output from the NetLogon Test command line utility:

        C:\Windows\System32>nltest /dsgetdc:example.local
        Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN

    I have also created a separate network to emulate that office's configuration that's connected to the DC network via LAN-to-LAN VPN instead of MPLS. Joining Windows 7 computers from that remote network works fine. The only difference I can find between the two environments is the intermediate connectivity, but I'm out of ideas as to what to test or how to do it. What further steps should I take?

    (Note that this isn't actually my client workstation and I have no direct access to it; I'm forced to do remote hands access to it, which makes some of the obvious troubleshooting methods, like packet sniffing, more difficult. If I could just set up a system there that I could remote into, I would, but requests to that effect have gone unanswered.)

    2011-08-25 update: I had DCDIAG.EXE run on a client attempting to join the domain:

        C:\Windows\System32>dcdiag /u:example\adminuser /p:********* /s:dc2.example.local
        Directory Server Diagnosis
        Performing initial setup:
        Ldap search capabality attribute search failed on server dc2.example.local, return value = 81

    This sounds like it was able to connect via LDAP, but the thing that it was trying to do failed. But I don't quite follow what it was trying to do, much less how to reproduce it or resolve it.

    2011-08-26 update: Using LDP.EXE to try and make an LDAP connection directly to the DCs results in these errors:

        ld = ldap_open("10.0.0.1", 389);
        Error <0x51: Fail to connect to 10.0.0.1.
        ld = ldap_open("10.0.0.2", 389);
        Error <0x51: Fail to connect to 10.0.0.2.
        ld = ldap_open("10.0.0.1", 3268);
        Error <0x51: Fail to connect to 10.0.0.1.
        ld = ldap_open("10.0.0.2", 3268);
        Error <0x51: Fail to connect to 10.0.0.2.

    This would seem to point fingers at LDAP connections being blocked somewhere. (And 0x51 == 81, which was the error from DCDIAG.EXE from yesterday's update.) I could swear I tested this using TELNET.EXE weeks ago, but now I'm thinking that I may have assumed that its clearing of the screen was telling me that it was waiting and not that it had connected. I'm tracking down LDAP connectivity problems now. This update may become an answer.
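
    As a follow-up thought, the LDP.EXE test above can also be scripted from any machine in the remote office. The sketch below (an assumption, not something from the post) just checks TCP reachability of the LDAP and Global Catalog ports on both DCs, using the sanitised 10.0.0.x addresses quoted above:

        # Probe LDAP (389) and Global Catalog (3268) on both domain controllers.
        import socket

        for host in ("10.0.0.1", "10.0.0.2"):
            for port in (389, 3268):
                try:
                    socket.create_connection((host, port), timeout=5).close()
                    print(host, port, "reachable")
                except OSError as exc:
                    print(host, port, "blocked?", exc)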

    Read the article

  • iptables DNS resolution

    - by Favolas
    I have a virtual machine with Fedora 19 acting as a router. This machine has an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network. I've installed Snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch and installed SnortSam 2.70 on the router machine.

    In snort.conf, besides altering some RULE_PATH entries, I believe I've only added the following line to the file:

        output alert_fwsam: 127.0.0.1:898/password

    After doing this, two commands:

        ifconfig p8p1 promisc
        /usr/local/snort/bin/snort -v -i p8p1

    If I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules, which has this single line:

        alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6; fwsam: src, 5 minutes;)

    snortsam.conf has this data:

        defaultkey password
        accept localhost
        keyinterval 30 minutes
        dontblock 192.168.1.1 # local network
        rollbackhosts 50
        rollbackthreshold 20 / 30 secs
        rollbacksleeptime 1 minute
        logfile /var/log/snort/snortsam.log
        loglevel 3
        daemon nothreads
        # important line to generate the blocks via iptables
        iptables p8p1 LOG
        bindip 127.0.0.1

    Now I run this command:

        /usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq

    The terminal gives this message:

        Spawning daemon child...
        My daemon child 2080 lives...
        Daemon parent exiting (0)

    and when I run snortsam in the terminal I get this:

        SnortSam, v 2.70.
        Copyright (c) 2001-2009 Frank Knobbe. All rights reserved.
        Plugin 'fwsam': v 2.5, by Frank Knobbe
        Plugin 'fwexec': v 2.7, by Frank Knobbe
        Plugin 'pix': v 2.9, by Frank Knobbe
        Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]>
        Plugin 'cisconullroute': v 2.5, by Frank Knobbe
        Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]>
        Plugin 'netscreen': v 2.10, by Frank Knobbe
        Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]>
        Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]>
        Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]>
        Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]>
        Plugin 'email': v 2.12, by Frank Knobbe
        Plugin 'email-blocks-only': v 2.12, by Frank Knobbe
        Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]>
        Plugin 'forward': v 2.8, by Frank Knobbe
        Parsing config file /etc/snortsam.conf...
        Linking plugin 'iptables'...
        Checking for existing state file "/var/db/snortsam.state". Found.
        Reading state file.
        Starting to listen for Snort alerts.

    and snortsam.log has an entry like this:

        2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts.

    Now, from the external machine I do ping 172.16.1.254 and it starts showing the info, and an alert file is created in /var/log/snort/ that has the info about the PINGS. Something like:

        [**] [1:408:6] ICMP-INFO Echo Reply [**]
        [Classification: Misc activity] [Priority: 3]
        10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1
        ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84
        Type:0  Code:0  ID:1389  Seq:1  ECHO REPLY

    Also, if I run instead:

        /usr/local/snort/bin/snort snort -v -i p8p1

    I get this message:

        Running in packet dump mode
        --== Initializing Snort ==--
        Initializing Output Plugins!
        Snort BPF option: snort
        pcap DAQ configured to passive.
        The DAQ version does not support reload.
        Acquiring network traffic from "p8p1".
        ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)!
        Fatal Error, Quitting..

    So, these are my questions: Shouldn't snortsam block the PING? Is that DAQ error causing the problem? If so, how can I solve it?
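
    One way to check the first question (whether snortsam's iptables block ever kicks in) is to keep pinging from the "external" 172.16.1.1 machine and watch for replies stopping once the alert fires. The sketch below uses the third-party scapy library and needs root; it is only an illustration, not part of the original setup:

        # Send a few echo requests to the router and report whether replies come back.
        from scapy.all import IP, ICMP, sr1

        for i in range(10):
            reply = sr1(IP(dst="172.16.1.254") / ICMP(), timeout=2, verbose=False)
            print("ping", i, "reply" if reply is not None else "no reply (blocked?)")

    Listing the rules with iptables -L -n on the router at the same time shows whether SnortSam actually inserted a blocking rule.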

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 (I know, I'm going to upgrade this one asap, but there are dependencies). We run some websites on this one. A few days ago exim4 wasn't able to send mails to SOME people. I'll use live.com as an example domain! So some of these people got mails and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space.

    Now I wanted to install "dig" to see how it resolves the DNS request, and this Debian tells me it doesn't know dig. Long story short: Debian is able to download sites by exact IP or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right and the routing too! Some examples of my trying below:

        wget live.com              downloads the site
        ping live.com
        ping http://www.live.com
        ping http://live.com       returns: ping: unknown host live.com

    EDIT: I now use heise.de, not live.com, any more. And I found out I can ping the heise.de server by using its IP address.

        myserver:~# ping 193.99.144.85
        PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data.
        64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms
        64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms
        64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms
        64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms
        64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms
        --- 193.99.144.85 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4001ms
        rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms

    EDIT 2:

        myserver:/etc/apt# dig heise.de

        ; <<>> DiG 9.3.4-P1.2 <<>> heise.de
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3

        ;; QUESTION SECTION:
        ;heise.de.                      IN      A

        ;; ANSWER SECTION:
        heise.de.               2266    IN      A       193.99.144.80

        ;; AUTHORITY SECTION:
        heise.de.               1622    IN      NS      ns.pop-hannover.de.
        heise.de.               1622    IN      NS      ns.s.plusline.de.
        heise.de.               1622    IN      NS      ns.plusline.de.
        heise.de.               1622    IN      NS      ns2.pop-hannover.net.
        heise.de.               1622    IN      NS      ns.heise.de.

        ;; ADDITIONAL SECTION:
        ns.plusline.de.         265     IN      A       212.19.48.14
        ns.pop-hannover.de.     5113    IN      A       193.98.1.200
        ns2.pop-hannover.net.   15150   IN      A       62.48.67.66

        ;; Query time: 2 msec
        ;; SERVER: 193.200.112.80#53(193.200.112.80)
        ;; WHEN: Tue Oct  9 13:03:50 2012
        ;; MSG SIZE  rcvd: 216
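
    A hedged side note: dig talks to the DNS server directly, while ping and wget normally go through the libc resolver (/etc/resolv.conf and /etc/nsswitch.conf). The short sketch below exercises that same system resolver path from Python, which may help show whether name resolution itself is broken or whether something specific to the ping binary is at fault; heise.de is just the test name used above:

        # Resolve a couple of names through the system resolver, the way ping would.
        import socket

        for name in ("heise.de", "www.heise.de"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as exc:
                print(name, "-> resolver error:", exc)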

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library - a library that extends the functionality of Entity Framework.

    Features: Batch Update and Delete, Future Queries, Audit Log.

    Project Package and Source
    NuGet Package: PM> Install-Package EntityFramework.Extended
    NuGet: http://nuget.org/List/Packages/EntityFramework.Extended
    Source: http://github.com/loresoft/EntityFramework.Extended

    Batch Update and Delete
    A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.

    Deleting:

        //delete all users where FirstName matches
        context.Users.Delete(u => u.FirstName == "firstname");

    Update:

        //update all tasks with status of 1 to status of 2
        context.Tasks.Update(
            t => t.StatusId == 1,
            t => new Task {StatusId = 2});

        //example of using an IQueryable as the filter for the update
        var users = context.Users.Where(u => u.FirstName == "firstname");
        context.Users.Update(
            users,
            u => new User {FirstName = "newfirstname"});

    Future Queries
    Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is a great win. Using this feature is as simple as appending .Future() to the end of your queries. To use Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods: Future(), FutureFirstOrDefault(), FutureCount().

    Sample:

        // build up queries
        var q1 = db.Users
            .Where(t => t.EmailAddress == "[email protected]")
            .Future();
        var q2 = db.Tasks
            .Where(t => t.Summary == "Test")
            .Future();
        // this triggers the loading of all the future queries
        var users = q1.ToList();

    In the example above, there are 2 queries built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries.

        // base query
        var q = db.Tasks.Where(t => t.Priority == 2);
        // get total count
        var q1 = q.FutureCount();
        // get page
        var q2 = q.Skip(pageIndex).Take(pageSize).Future();
        // triggers execute as a batch
        int total = q1.Value;
        var tasks = q2.ToList();

    In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.

    Audit Log
    The Audit Log feature will capture the changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed and only the properties on those entities that were changed. The before and after values are recorded. AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into XML for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.

    Fluent Configuration:

        // config audit when your application is starting up...
        var auditConfiguration = AuditConfiguration.Default;
        auditConfiguration.IncludeRelationships = true;
        auditConfiguration.LoadRelationships = true;
        auditConfiguration.DefaultAuditable = true;

        // customize the audit for Task entity
        auditConfiguration.IsAuditable<Task>()
            .NotAudited(t => t.TaskExtended)
            .FormatWith(t => t.Status, v => FormatStatus(v));

        // set the display member when status is a foreign key
        auditConfiguration.IsAuditable<Status>()
            .DisplayMember(t => t.Name);

    Create an Audit Log:

        var db = new TrackerContext();
        var audit = db.BeginAudit();
        // make some updates ...
        db.SaveChanges();
        var log = audit.LastLog;

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction

    As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I've been contacted various times by friends working in organizations telling me that the ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don't exactly know where to start. It is very important how you engage in your first conversation. If you start the conversation with "There is this great tooling from Microsoft which offers amazing features to boost developer productivity, …", from experience I can tell you the reply from your CIO would be "I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them." You will always find it harder to sell by feature. The trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps on the overall development process; by then you will have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level.

    Rangers ALM Assessment Guide

    Image 1 – Welcome! First look at the Rangers ALM assessment guide

    Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn't enough cover in place? This is where the Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here.

    Image 2 – ALM Assessment guide divided into different functions of SDLC

    The assessment guide is divided into different functions of the Software Development Lifecycle (listed below); this gives you the ability to assess how mature the company is in different areas of the SDLC:

    - Architecture & Design
    - Requirement Engineering & UX
    - Development
    - Software Configuration Management
    - Governance
    - Deployment & Operations
    - Testing & Quality Assurance
    - Project Planning & Management

    Each section has a set of questions; fill in the assessment by selecting "Never/Sometimes/Always" from the Answer column in the question sheets. Each answer has a weighting towards the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet, which gives you more details about the question along with a reason for "why you need to ask this question?", "other ways to phrase the question" and "what to expect as an answer from the customer". The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management, to ensure that you receive candid feedback.

    This reminds me of a funny incident when, during an ALM review, a customer told me that they had a sophisticated semi-automated application deployment process; further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later.

    It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, but it is possible to set a desired state by area; you should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages.

    Image 3 – ALM levels by description

    The ALM assessment guide helps you arrive at a quantitative measure of the company's ALM maturity. The resultant graph, plotted on a spider's web, shows you the company's current state of ALM maturity and the desired state of ALM maturity. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help.

    Image 4 – The spider's web! The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are calculated from the difference between the current state of ALM maturity and the desired state of ALM maturity.

    Image 5 – Results by area

    Conclusion

    To conclude, the Rangers ALM assessment guide gives you the ability to:

    - Measure the customer's current ALM maturity level
    - Understand the ALM maturity level the customer desires to achieve
    - Capture a healthy list of issues the customer wants to brainstorm further

    Now what's next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the above listed three pieces of information, you are in a great state to make recommendations on the identified areas, highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sell. Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment.

    *** A special thanks goes out to fellow Rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files

    In earlier days data was stored in flat files, and there was no structure in the flat file. If any data had to be retrieved from the flat file, it was a project in itself. There was no possibility of retrieving the data efficiently, and data integrity was just a term discussed without any modeling or structure around it. A database residing in a flat file had more issues than we would like to discuss in today's world. It was more like a nightmare when there was any data processing involved in the application. Though applications developed at that time were also not that advanced, the need for data was always there, and there was always a need for proper data management.

    Edgar F Codd and 12 Rules

    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database seemed to find discipline in the rules. The relational database was a promised land for all the unstructured database users. The relational database brought in relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and every single vendor and database user suddenly started to adopt relational database models.

    Relational Database Management Systems

    Since Edgar F Codd proposed the 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships within the database. This was indeed a learning curve for many of the developers who had never worked before with the modeling of the database. However, as time passed, pretty much everybody accepted the relational model and started to evolve products which perform at their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The Entity Relationship model also evolved at the same time. In software engineering, an Entity–relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth

    Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer's life much easier with RDBMS management tools. Due to the extreme popularity and ease of use of the system, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization was feeling pressure to provide the best experience for their users based on the data they had with them. While this was all going on, at the same time data was growing in pretty much every organization and application.

    Data Warehousing

    The enormous data growth now presented a big challenge for the organizations who wanted to build intelligent systems based on the data and provide a near real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed. The trend of business intelligence became an everyday need. Data was received from the transaction system and was processed overnight to build intelligent reports from it. Though this is a great solution, it has its own set of challenges. The relational database model and data warehousing concepts are all built with traditional relational database modeling in mind, and they still have many challenges when unstructured data is present.

    Interesting Challenge

    Every organization had the expertise to manage structured data, but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought to a single platform to build a uniform system which does what businesses need. The way we do business has also changed. There was a time when users only got the features that technology supported; however, now users ask for the feature and technology is built to support the same. The need for real-time intelligence from the fast-paced data flow is now becoming a necessity. Large amounts (Volume) of diverse (Variety), high-speed data (Velocity) are the properties of this data. The traditional database system has limits in resolving the challenges this new kind of data presents. Hence the need for Big Data Science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is Reality!

    Tomorrow

    In tomorrow's blog post we will try to discuss the Basics of Big Data Architecture.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Help me solve my problem with NPR Media Player

    - by Calcipher
    First off, let me apologize for this getting a bit technical. Several weeks ago, I found that while using NPR's media player (e.g. click on 'Listen to the Show' - this is what I've been using as a test) the stream would suddenly halt after a minute or three. I could not get the stream to restart without reloading the page. Now, I assumed this was an issue with NPR's player and Linux (or just a bug in their stuff in general) so I began to dig. The following is what I have tried to date (please note, the tl;dr option is to skip to the latest thing, as I think I know what is causing the problem). Note: all testing has been done, for consistency purposes, on a clean install of Chromium with no plugins running. My machine is Ubuntu 10.10 x64.

    1. First thing I always try: I disabled all firewall stuff on the system (UFW, default deny all, allow ssh). No change; firewall back up for all additional tests unless otherwise noted. In any case, UFW is stateful, so connections it started on non-specified ports will continue to work.
    2. I deleted my ~/.macromedia and ~/.adobe folders, restarted (just to be sure) and tried. Program still froze.
    3. I decided the problem might be with my install of Flash, so I purged the version I had (and the home folders again). I installed the x64 version of Flash from a PPA. This had no effect.
    4. I decided that the problem might be with the version of Flash, so I purged the x64 version and installed the standard x32 version that comes with Ubuntu. No luck.
    5. Back to the x64 version for consistency, I decided to set up a 64-bit mini 'clone' of my system in VirtualBox. I was able to run the media player with no problem.
    6. I rsynced (in archive mode) my home directory from my real machine to the virtual machine (with bridged networking, so it was fully visible on the network). I also used a few tricks to install ALL of the same software (and repositories) from the real machine to the virtual machine. I was still able to listen to the player.
    7. I decided that the problem was with my install (after all, it had gone through two major version upgrades). As I have /home/ on a separate partition it was easy to reinstall and use the same trick from #6 to have my system up and running again within about an hour. I continue to have issues with the NPR Media Player.
    8. By this point the weekend had come. At work, I use a wired connection while at home I use a wireless connection. For some reason I forgot that I was having problems and used the NPR Media Player over the weekend. Lo and behold, it worked just fine at home on wireless (note: for various reasons, I could not test this on wired at home).
    9. Following from #6, I decided that the problem was either something with the network at work or still something with my account. As the latter was easier to test, I created a new account on my system and used that at work. The Media Player worked.
At a loss, I decided to watch the traffic with tshark (the text based brother of wireshark) - X's to protect the innocent, I am the XXX.24.200.XXX: sudo tshark -i eth0 -p -t a -R "ip.addr == XXX.24.200.XXX && ip.addr == XXX.166.98.XXX" As you would expect, there were tons and tons of packets, but each and every time the player froze, this is what I got 08:42:20.679200 XXX.166.98.XXX - XXX.24.200.XXX TCP macromedia-fcs 56371 [PSH, ACK] Seq=817686 Ack=6 Win=65535 Len=1448 TSV=495713325 TSER=396467 08:42:20.718602 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396475 TSER=495713325 08:42:21.050183 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713362 TSER=396475 08:42:21.050221 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396508 TSER=495713362 08:42:21.680548 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713425 TSER=396508 08:42:21.680605 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396571 TSER=495713425 08:42:22.910354 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713548 TSER=396571 08:42:22.910400 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396694 TSER=495713548 08:42:25.340458 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713791 TSER=396694 08:42:25.340517 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396937 TSER=495713791 08:42:30.170698 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495714274 TSER=396937 08:42:30.170746 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=397420 TSER=495714274 08:42:39.801738 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495715237 TSER=397420 08:42:39.801784 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=398383 TSER=495715237 08:42:59.032648 XXX.166.98.XXX - XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495717160 TSER=398383 08:42:59.032696 XXX.24.200.XXX - XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=400306 TSER=495717160 08:43:00.267721 XXX.24.200.XXX - XXX.166.98.XXX TCP 56371 macromedia-fcs [FIN, ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=400430 TSER=495717160 08:43:00.267827 XXX.24.200.XXX - XXX.166.98.XXX TCP 56371 macromedia-fcs [RST, ACK] Seq=7 Ack=819134 Win=65535 Len=0 TSV=400430 TSER=495717160 So, as you can see, my machine is sending out a ZeroWindow packet (which I think means some buffer or another filled up) which causes the Media Player to halt (unfortunately, terminally - no controls on it really do anything anymore). 
Any ideas, at all, what would cause this? Why only on eth0 under my main account?
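
    In case it helps anyone digging through a similar trace: if the tshark session above is written to a file (say with -w npr.pcap, a hypothetical name), the zero-window events can be pulled out on their own with the third-party pyshark wrapper - a sketch under those assumptions, not part of the original debugging:

        # List the packets where a TCP endpoint advertised a zero receive window.
        import pyshark

        cap = pyshark.FileCapture("npr.pcap",
                                  display_filter="tcp.analysis.zero_window")
        for pkt in cap:
            print(pkt.sniff_time, pkt.ip.src, "->", pkt.ip.dst, "window = 0")
        cap.close()

    A zero window advertised by the receiving side generally means its receive buffer has filled because the application (here the Flash player) has stopped reading from the socket.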

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    Fusion HCM SaaS – Integration

    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: here, you can see that all the "downstream integrations" from the On-Premise Core HR are unaffected, because the master for employee data is still your On-Premise Core HR system – all updates and new hires are made here (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider:

    - Creating an On-Premise warehouse for extracting data from Fusion Apps.
    - Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader
    This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee & related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher Training Session recorded on Apr 18th is available here (under "Recordings"). This will enable a somewhat technical resource to create a data model SQL query.

    Links to Documentation & Training
    - Reference documentation for File Based Loader on docs.oracle.com
    - FBL 1.1 MOS Doc Id 1533860.1
    - Sample demo data files for File Based Loader
    - HCM SaaS Integrations ppt and recording.

    EBS APIs
    Loading information into EBS Full or Shared HCM: this could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications & EBS Financials).
    - Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] – this is a guide to the EBS R12 Integration Repository accessible from an EBS instance.
    - EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    Fusion HCM Extract
    Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms.

    Additional documentation (you'll need an oracle.com account to access):
    - HCM Extracts User Guides (Rel 4 & 5)
    - HCM Extract Entity/Attributes (Rel 5)
    - HCM Extract User Guide (Rel 5)

    If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (click on File --> Download on the next screen). View Training Recordings on Fusion HCM Extract.

    Benefits Extract
    To set up the benefits extract, refer to the following guide. Page 2-15 of the User Documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits. Instructions are here along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the Benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface
    Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation:
    - Payroll interface guide
    - Sample file
    - DBIs used for the payroll interface.

    Usage Patterns always accessible @ http://www.finapps.com

    Read the article

  • JQuery and the multiple date selector

    - by David Carter
    Overview

    I recently needed to build a web page that would allow a user to capture some information and, most importantly, select multiple dates. This functionality was core to the application and hence had to be easy and quick to do. This is a public facing website so it had to be intuitive and very responsive. On the face of it it didn't seem too hard; I know enough jQuery to know what it is capable of, and I was pretty sure that there would be some plugins that would help speed things along the way. I'm using ASP.Net MVC for this project as I really like the control that it gives you over the generated html and javascript. After years of Web Forms development it makes me feel like a web developer again and puts a smile on my face, and that can only be a good thing!

    The Calendar

    The first item that I needed on this page was a calendar, and I wanted the ability to:

    - have the calendar be always visible
    - select/deselect multiple dates at the same time
    - bind to the select/deselect event so that I could update a separate listing of the selected dates
    - allow the user to move to another month and still have the calendar remember any dates in the previous month

    I was hoping that there was a jQuery plugin that would meet my requirements and luckily there was! The jQuery datepicker does everything I want and there is quite a bit of documentation on how to use it. It makes use of a javascript date library, date.js, which I had not come across before but which has a number of very useful date utilities that I have used elsewhere in the project. As you can see from the image there still needs to be some styling done! But there will be plenty of time for that later. The calendar clearly shows which dates the user has selected in red, and I also make use of an unordered list to show the selected dates so the user can always clearly see what has been selected even if they move to another month on the calendar. The javascript code that is responsible for listening to events on the calendar and synchronising the list looks as follows:

        <script type="text/javascript">
            $(function () {
                $('.datepicker').datePicker({ inline: true, selectMultiple: true })
                .bind(
                    'dateSelected',
                    function (e, selectedDate, $td, state) {
                        var dateInMillisecs = selectedDate.valueOf();
                        if (state) { //adding a date
                            var newDate = new Date(selectedDate);
                            //insert the new item into the correct place in the list
                            var listitems = $('#dateList').children('li').get();
                            var liToAdd = "<li id='" + dateInMillisecs + "' >" + newDate.toString('ddd dd MMM yyyy') + "</li>";
                            var targetIndex = -1;
                            for (var i = 0; i < listitems.length; i++) {
                                if (dateInMillisecs <= listitems[i].id) {
                                    targetIndex = i;
                                    break;
                                }
                            }
                            if (targetIndex < 0) {
                                $('#dateList').append(liToAdd);
                            }
                            else {
                                $($('#dateList').children("li")[targetIndex]).before(liToAdd);
                            }
                        }
                        else { //removing a date
                            $('ul #' + dateInMillisecs).remove();
                        }
                    }
                )
            });

    When a date is selected on the calendar, a function is called with a number of parameters passed to it. The ones I am particularly interested in are selectedDate and state. State tells me whether the user has selected or deselected the date passed in the selectedDate parameter. The <ul> that I am using to show the dates has an id of dateList and this is what I will be adding and removing <li> items from.

    To make things a little more logical for the user I decided that the dates should be sorted in chronological order; this means that each time a new date is selected it needs to be placed in the correct position in the list. One way to do this would be just to append a new <li> to the list and then sort the whole list. However, the approach I took was to get an array of all the items in the list, var listitems = $('#dateList').children('li').get();, and then check the value of each item in the array against my new date; as soon as I found the case where the new date was less than the current item, I remember that position in the list as this is where I would insert it later. To make this work easily I decided to store a numeric representation of each date in the list in the id attribute of each <li> element. Fortunately javascript natively stores dates as the number of milliseconds since 1 Jan 1970: var dateInMillisecs = selectedDate.valueOf(); Please note that this is the value of the date in UTC! I always like to store dates in UTC as I learnt a long time ago that it saves a lot of refactoring at a later date... When I convert the dates back to their original form on the server I will need the UTC offset that was used when calculating the dates; this, and how to actually serialise the dates and get them posted back, will be the subject of another post.

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to: Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements Meet cost requirements - including implementation, professional services and licenses fees and ongoing maintenance Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes Best Practices for Addressing Implementation Challenges Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution. Achieving a Return on Investment Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, with by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies. Commitment to Continuing the Investment In the short and longer term future Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • Crash in audio resampler with some audio rates - FFMPEG PHP ( Solved! )

    - by Olaf Erlandsen
    i have a problem with this command( FFMPEG PHP ): Command: ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv Output: FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6) configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab libavutil 50.15. 1 / 50.15. 1 libavcodec 52.72. 2 / 52.72. 2 libavformat 52.64. 2 / 52.64. 2 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.19. 0 / 1.19. 0 libswscale 0.11. 0 / 0.11. 0 libpostproc 51. 2. 0 / 51. 2. 0 [asf @ 0xe81670]max_analyze_duration reached Input #0, asf, from '/var/www/resources/tmp/62f76f050494f0ed6a5997967c00c0c0.wmv': Metadata: WMFSDKVersion : 12.0.7601.17514 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 Duration: 00:00:50.87, bitrate: 2467 kb/s Stream #0.0: Audio: wmapro, 44100 Hz, stereo, flt, 256 kb/s Stream #0.1: Video: vc1, yuv420p, 950x460 [PAR 1:1 DAR 95:46], 25 fps, 25 tbr, 1k tbn, 25 tbc Output #0, flv, to '/var/www/resources/media/62f76f050494f0ed6a5997967c00c0c0.flv': Metadata: encoder : Lavf52.64.2 Stream #0.0: Video: flv, yuv420p, 950x460 [PAR 1:1 DAR 95:46], q=2-31, 200 kb/s, 1k tbn, 29.97 tbc Stream #0.1: Audio: libmp3lame, 11025 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Press [q] to stop encoding frame= 72 fps= 0 q=5.0 size= 0kB time=10.91 bitrate= 0.0kbits/s Multiple frames in a packet from stream 0 Warning, using s16 intermediate sample format for resampling frame= 141 fps=139 q=5.0 size= 103kB time=8.15 bitrate= 103.2kbits/s frame= 220 fps=144 q=5.0 size= 875kB time=10.92 bitrate= 656.6kbits/s frame= 290 fps=143 q=5.0 size= 1525kB time=13.74 bitrate= 909.1kbits/s frame= 356 fps=141 q=5.0 size= 2153kB time=15.99 bitrate=1103.1kbits/s frame= 427 fps=141 q=5.0 size= 2847kB time=18.70 bitrate=1247.0kbits/s frame= 497 fps=141 q=5.0 size= 3771kB time=21.16 bitrate=1460.0kbits/s frame= 575 fps=142 q=5.0 size= 4695kB time=24.61 bitrate=1563.0kbits/s frame= 639 fps=141 q=5.0 size= 5301kB time=26.80 bitrate=1620.2kbits/s frame= 703 fps=139 q=5.0 size= 5829kB time=29.36 bitrate=1626.2kbits/s frame= 774 fps=139 q=5.0 size= 6659kB time=32.39 bitrate=1684.0kbits/s frame= 842 fps=139 q=5.0 size= 7915kB time=35.27 bitrate=1838.6kbits/s frame= 911 fps=139 q=5.0 size= 9011kB time=37.98 bitrate=1943.4kbits/s frame= 975 fps=138 q=5.0 size= 9788kB time=40.59 bitrate=1975.3kbits/s frame= 1041 fps=138 q=5.0 size= 10904kB time=43.83 bitrate=2037.9kbits/s frame= 1115 fps=138 q=5.0 size= 11795kB time=46.24 bitrate=2089.8kbits/s frame= 1183 fps=138 q=5.0 size= 12678kB time=48.74 bitrate=2130.7kbits/s frame= 1247 fps=137 q=5.0 size= 13964kB time=51.36 bitrate=2227.5kbits/s frame= 1271 fps=136 q=5.0 Lsize= 15865kB time=58.86 bitrate=2208.1kbits/s 
video:15366kB audio:462kB global headers:0kB muxing overhead 0.238956% Problem: "Warning, using s16 intermediate sample format for resampling". I've also tried changing the parameter from -ar 44100 to -ar 11025. Thanks! Solution: Read this link: http://en.wikipedia.org/wiki/MP3#Bit_rate
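    The s16 intermediate-format warning is usually harmless by itself; the more likely culprits in the command above are -async 44100 (the -async option is an audio sync correction factor normally given a small value such as 1, not the sample rate) and the MP3 sample-rate/bitrate constraints the linked article describes. The line below is only a sketch of the kind of adjustment often suggested, keeping the original filenames and assuming the FFmpeg 0.6.x build shown in the log:

        ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -async 1 -ar 44100 -ab 128k -ac 2 -r 29.970 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv

    If the audio still comes out at 11025 Hz, forcing a supported rate/bitrate pair (44100 Hz with -ab 128k, as above) is the usual next step; the Wikipedia link in the solution lists which combinations MP3 actually allows.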

    Read the article

  • TFS 2010 SDK: Integrating Twitter with TFS Programmatically

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS API,Integrate Twitter TFS,TFS Programming,ALM,TwitterSharp   Friends at ‘Twitter Sharp’ have created a wonderful .net API for twitter. With this blog post i will try to show you a basic TFS – Twitter integration scenario where i will retrieve the Team Project details programmatically and then publish these details on my twitter page. In future blogs i will be demonstrating how to create a windows service to capture the events raised by TFS and then publishing them in your social eco-system. Download Working Demo: Integrate Twitter - Tfs Programmatically   1. Setting up Twitter API Download Tweet Sharp from => https://github.com/danielcrenna/tweetsharp  Before you can start playing around with this, you will need to register an application on twitter. This is because Twitter uses the OAuth authentication protocol and will not issue an Access token unless your application is registered with them. Go to https://dev.twitter.com/ and register your application   Once you have registered your application, you will need ‘Customer Key’, ‘Customer Secret’, ‘Access Token’, ‘Access Token Secret’ 2. Connecting to Twitter using the Tweet Sharp API Create a new C# windows forms project and add reference to ‘Hammock.ClientProfile’, ‘Newtonsoft.Json’, ‘TweetSharp’ Add the following keys to the App.config (Note – The values for the keys below are in correct and if you try and connect using them then you will get an authorization failure error). Add a new class ‘TwitterProxy’ and use the following code to connect to the TwitterService (Read more about OAuthentication - http://dev.twitter.com/pages/auth) using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Configuration;using TweetSharp; namespace WindowsFormsApplication2{ public class TwitterProxy { private static string _hero; private static string _consumerKey; private static string _consumerSecret; private static string _accessToken; private static string _accessTokenSecret;  public static TwitterService ConnectToTwitter() { _consumerKey = ConfigurationManager.AppSettings["ConsumerKey"]; _consumerSecret = ConfigurationManager.AppSettings["ConsumerSecret"]; _accessToken = ConfigurationManager.AppSettings["AccessToken"]; _accessTokenSecret = ConfigurationManager.AppSettings["AccessTokenSecret"];  return new TwitterService(_consumerKey, _consumerSecret, _accessToken, _accessTokenSecret); } }} Time to Tweet! _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); _twitterService.SendTweet("Hello World"); SendTweet will return the TweetStatus, If you do not get a 200 OK status that means you have failed authentication, please revisit the Access tokens. --RESPONSE: https://api.twitter.com/1/statuses/update.json HTTP/1.1 200 OK X-Transaction: 1308476106-69292-41752 X-Frame-Options: SAMEORIGIN X-Runtime: 0.03040 X-Transaction-Mask: a6183ffa5f44ef11425211f25 Pragma: no-cache X-Access-Level: read-write X-Revision: DEV X-MID: bd8aa0abeccb6efba38bc0a391a73fab98e983ea Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0 Content-Type: application/json; charset=utf-8 Date: Sun, 19 Jun 2011 09:35:06 GMT Expires: Tue, 31 Mar 1981 05:00:00 GMT Last-Modified: Sun, 19 Jun 2011 09:35:06 GMT Server: hi Vary: Accept-Encoding Content-Encoding: Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked   3. 
Integrate with TFS In my blog post Connect to TFS Programmatically i have in depth demonstrated how to connect to TFS using the TFS API. 1: // Update the AppConfig with the URI of the Team Foundation Server you want to connect to, Make sure you have View Team Project Collection Details permissions on the server 2: private static string _myUri = ConfigurationManager.AppSettings["TfsUri"]; 3: private static TwitterService _twitterService = null; 4:   5: private void button1_Click(object sender, EventArgs e) 6: { 7: lblNotes.Text = string.Empty; 8:   9: try 10: { 11: StringBuilder notes = new StringBuilder(); 12:   13: _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); 14:   15: _twitterService.SendTweet("Hello World"); 16:   17: TfsConfigurationServer configurationServer = 18: TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri)); 19:   20: CatalogNode catalogNode = configurationServer.CatalogNode; 21:   22: ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren( 23: new Guid[] { CatalogResourceTypes.ProjectCollection }, 24: false, CatalogQueryOptions.None); 25:   26: // tpc = Team Project Collection 27: foreach (CatalogNode tpcNode in tpcNodes) 28: { 29: Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]); 30: TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId); 31:   32: notes.AppendFormat("{0} Team Project Collection : {1}{0}", Environment.NewLine, tpc.Name); 33: _twitterService.SendTweet(String.Format("http://Lunartech.codeplex.com - Connecting to Team Project Collection : {0} ", tpc.Name)); 34:   35: // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection' 36: var tpNodes = tpcNode.QueryChildren( 37: new Guid[] { CatalogResourceTypes.TeamProject }, 38: false, CatalogQueryOptions.None); 39:   40: foreach (var p in tpNodes) 41: { 42: notes.AppendFormat("{0} Team Project : {1} - {2}{0}", Environment.NewLine, p.Resource.DisplayName,  "This is an open source project hosted on codeplex"); 43: _twitterService.SendTweet(String.Format(" Connected to Team Project: '{0}' – '{1}' ", p.Resource.DisplayName, "This is an open source project hosted on codeplex")); 44: } 45: } 46: notes.AppendFormat("{0} Updates posted on Twitter : {1} {0}", Environment.NewLine, @"http://twitter.com/lunartech1"); 47: lblNotes.Text = notes.ToString(); 48: } 49: catch (Exception ex) 50: { 51: lblError.Text = " Message : " + ex.Message + (ex.InnerException != null ? " Inner Exception : " + ex.InnerException : string.Empty); 52: } 53: }   The extensions you can build integrating TFS and Twitter are incredible!   Share this post :
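    The excerpt above mentions adding keys to the App.config but the XML itself is not shown; since the code reads ConsumerKey, ConsumerSecret, AccessToken, AccessTokenSecret (and later TfsUri) from appSettings, a minimal sketch of that section might look like the following, with placeholder values you would replace with the ones issued for your registered Twitter application and your own TFS URI:

        <configuration>
          <appSettings>
            <add key="ConsumerKey" value="YOUR_CONSUMER_KEY" />
            <add key="ConsumerSecret" value="YOUR_CONSUMER_SECRET" />
            <add key="AccessToken" value="YOUR_ACCESS_TOKEN" />
            <add key="AccessTokenSecret" value="YOUR_ACCESS_TOKEN_SECRET" />
            <!-- used by the TFS sample below; this URI is only an example -->
            <add key="TfsUri" value="http://yourtfsserver:8080/tfs" />
          </appSettings>
        </configuration>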

    Read the article

  • Issue in setting up VPN connection (IKEv1) using android (ICS vpn client) with Strongswan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up VPN connection(IKEv1) using android (ICS vpn client) and Strongswan 4.5.0 server. Below is the set up: Strongswan server is running on ubuntu linux machine which is connected to some wifi hotspot. Using the steps in this guide link, I generated CA, server and client certificate. Once certificates are generated, following (clientCert.p12 and caCert.pem) are sent to mobile via mail and installed on android device. Below are the ip addresses assigned to various interfaces Linux server wlan0 interface ip where server is running: 192.168.43.212, android device eth0 interface ip address: 192.168.43.62; Android device is also attached with the same wifi hotspot. On the Android device, I uses IPsec Xauth RSA option for setting up VPN authentication configuration. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.62 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add      With the above configurations when I enable VPN on android device, VPN connection is not successful and it gets timed out in Authentication phase. I ran wireshark on both the android device and strongswan server, from the tcpdump below are the observations. Initially Identity Protection (Main mode) exchanges happens between device and server and all are successful. After all successful Identity Protection (Main mode) exchanges server is sending Transaction (Config mode) to device. In reply android device is sending Informational message instead of Transaction (Config mode) message. Further server is keep on sending Transaction (Config mode) message and device is again sending Identity Protection (Main mode) messages. Finally timeout happens and connection fails. I also capture Strongswan server logs and below are the snippets from the server logs which also verifies the same(described above). Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message: Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie: Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | responder cookie: Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0 Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82 Apr 27 21:09:40 Linux pluto[12105]: | length: 92 Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25 Apr 27 21:09:40 Linux pluto[12105]: | state object not found Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9 Can anyone please provide update on this issue. 
Why does the VPN connection time out, and why are the ISAKMP exchanges not completing properly between the Android device and the strongSwan server?
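    One way to narrow this down is to capture only the IKE traffic on the server side and watch the daemon state while the phone connects; the commands below are a minimal sketch, assuming the server's Wi-Fi interface is wlan0 and that pluto logs to /var/log/auth.log on this Ubuntu box:

        # capture only IKE / NAT-T traffic for later comparison in Wireshark
        tcpdump -i wlan0 -n -s 0 "udp port 500 or udp port 4500" -w ike-android.pcap

        # in separate terminals, watch negotiation state and the log live
        ipsec statusall
        tail -f /var/log/auth.log

    Comparing the server-side capture with the on-device capture should show whether the Transaction (Config mode) packets are being dropped in transit or are reaching the Android client and being rejected there.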

    Read the article

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has a AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay? Disable IPv6 on client? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Too many clients to ensure this is set everywhere consistently. Will cause more problems later when we finally implement IPv6. Disable IPv6 on the server? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Requires an inconvenient registry hack to disable the entire IPv6 stack. Ensuring this is correctly set on all servers is inconvenient. Will cause more problems later when we finally implement IPv6. Mask IPv6 records on the user-facing DNS recursor? Nope, we're using NLNet Unbound and it doesn't support that. Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible. At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way. UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable. C:\Windows\system32>ping myhost.mydomain Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data: General failure. General failure. General failure. General failure. Ping statistics for 2002:1234:1234::1234:1234: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it does a fallback to IPv4. UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC1918 addresses. UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly. netsh int ipv6 6to4 set state disabled But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
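    One middle ground that is often suggested for exactly this symptom, and which can be pushed out by a GPO startup script, is to leave IPv6 enabled but change the prefix policy so IPv4 is preferred for outbound connections (the documented DisabledComponents value 0x20), rather than disabling the whole stack. A sketch, assuming Windows clients and that a reboot is acceptable:

        rem Prefer IPv4 over IPv6 in address selection; leaves the IPv6 stack itself enabled (reboot required)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0x20 /f

    This sidesteps the "disable IPv6 entirely" objections above while stopping clients such as mstsc.exe from trying the unroutable 6to4 address first.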

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    Untitled Document There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you; phone, web, chat, social to name a few but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and through workspace rules apply if, then, else logic to display only the information the agent needs for the issue at hand . Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages customers, can resolve issues themselves, without needing to contact your agents. And b ecause it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert. Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. 
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five Top Tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated consider using Desktop Workflow to streamline the interaction. Enhance your Knowledgebase with Guides. Agents can access them proactively and can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.   Want to learn more? Having attending the 3-day Oracle RightNow Customer Service Administration class your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.   Useful resources: Review the Best Practice Guide Review the tune-up guide   About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management.  She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days) RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days) RightNow Analytics (2 days) Rightnow Chat Cloud Service Administration (2 days)

    Read the article

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a vpn connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server looks like: Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b *...* Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166 On the client side the connection looks like: Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194 Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0' ... Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255 Wed Feb 23 22:20:10 2011 Initialization Sequence Completed The openvpn server has been configured to assign ip addresses in the range 10.8.0.* and the client has been given 10.8.0.50. When I run the following nmap from the client: Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST Host 10.8.0.50 is up (0.00047s latency). Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds Host 192.168.1.1 is up (0.0025s latency). Host 192.168.1.18 is up (0.074s latency). Host 192.168.1.41 is up (0.0024s latency). Host 192.168.1.55 is up (0.00018s latency). Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds If I run an nmap from the server on 10.8.0.* I get nothing. If the client has two interfaces (wireless and tap device) when you look for a certain ip address, how does it decide which interface to connect on? edit I am trying to set up a vpn so that I can connect to my home network from a remote network. It seems like openvpn is connecting but none of the computers on my home network appear as network machines even after the connection is "Established". Stripped versions of the client and server config files are posted below. Thanks for any help you can offer. 
server.conf port 1194 proto udp dev tap ca /etc/openvpn/easy-rsa/keys/ca.crt cert /etc/openvpn/easy-rsa/keys/server.crt key /etc/openvpn/easy-rsa/keys/server.key # This file should be kept secret dh /etc/openvpn/easy-rsa/keys/dh1024.pem ifconfig-pool-persist ipp.txt server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 client.conf client dev tap dev-node tap0901 proto udp remote ********** 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client1.crt key client1.key comp-lzo verb 3 one other thing that might be helpful, I tried to connect using the openvpn gui for windows and the connection stalls out on "obtaining configuration" and the bar just scrolls forever.
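    Nothing in the pasted server.conf actually bridges tap0 onto the home LAN, and with server-bridge the gateway and address pool are normally taken from the LAN subnet itself rather than a separate 10.8.0.0/24 range, which would explain why the 192.168.1.x machines never show up. Below is a rough sketch of the usual bridge-start style setup, assuming the server's LAN NIC is eth0, keeping its 192.168.1.41 address, and with bridge-utils installed; the pool range is only an example:

        openvpn --mktun --dev tap0
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 tap0
        ifconfig tap0 0.0.0.0 promisc up
        ifconfig eth0 0.0.0.0 promisc up
        ifconfig br0 192.168.1.41 netmask 255.255.255.0 up

        # server.conf would then hand out LAN addresses, for example:
        # server-bridge 192.168.1.41 255.255.255.0 192.168.1.200 192.168.1.220

    Pick a pool outside your router's DHCP range so bridged clients do not collide with existing leases.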

    Read the article

  • wireless internet in linux is very very slow... but in windows.... everythnings fine

    - by Cody Acer
    yesterday when i was connecting to our neighbors wifi connection which is the signal strength is below 50%, i cant browse anything... even ping to gateway. 100% packet loss, and sometimes.. i can connect awesomely i can open my facebook account for 15 minutes but after 15min.. connection is extremely slow. but not windows i can surf even the signal str is very poor weird ey??.. root@Emely:~# lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge [8086:a010] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller [0300]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a011] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: i915 Kernel modules: i915 00:02.1 Display controller [0380]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a012] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] 00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 1 [8086:27d0] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 2 [8086:27d2] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 3 [8086:27d4] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 4 [8086:27d6] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 [8086:27c8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.1 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 [8086:27c9] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.2 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 [8086:27ca] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.3 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 [8086:27cb] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.7 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller [8086:27cc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ehci-pci 00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2) 00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: lpc_ich Kernel modules: lpc_ich 00:1f.2 SATA controller [0106]: Intel Corporation NM10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ahci Kernel modules: ahci 00:1f.3 SMBus [0c05]: Intel Corporation NM10/ICH7 Family 
SMBus Controller [8086:27da] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel modules: i2c-i801 05:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01) Subsystem: Wistron NeWeb Corp. Device [185f:051a] Kernel driver in use: bcma-pci-bridge Kernel modules: bcma 09:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller [11ab:4354] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: sky2 Kernel modules: sky2 root@Emely:~# ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff inet 192.168.1.108/24 brd 192.168.1.255 scope global wlan0 inet6 fe80::21b:b1ff:fea9:ace0/64 scope link valid_lft forever preferred_lft forever root@Emely:~# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff root@Emely:~# rfkill list all 0: samsung-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: samsung-bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: hci0: Bluetooth Soft blocked: no Hard blocked: no 3: phy0: Wireless LAN Soft blocked: no Hard blocked: no is this a wireless driver issue?
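    The lspci output shows the BCM4313 being driven by the in-kernel bcma/brcmsmac stack, which is a frequent suspect for this "fine in Windows, unusable in Linux at weak signal" behaviour. A commonly suggested experiment on Ubuntu is to switch to Broadcom's proprietary wl driver; the commands below are a sketch, assuming you still have some working connection to fetch packages:

        sudo apt-get install bcmwl-kernel-source     # Broadcom STA (wl) driver
        sudo modprobe -r brcmsmac bcma               # unload the open drivers for this session
        sudo modprobe wl

    The bcmwl package normally blacklists the open drivers so the change persists across reboots; if it does not help, removing the package restores the brcmsmac setup.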

    Read the article

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 – Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of up to 40 seconds or more than a minute. It happens in the following way: a user browses a page, goes idle for 5, 10 or 15 minutes, then tries to browse the same page or some other. Now, there is a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following and made these observations: Moved the application to a stand-alone web server. App Pool idle shutdown time is 60 minutes. There are no abrupt shut downs/restarts. CPU or memory doesn’t spike. No delays in SQL queries. Modified App Pool setting to run in classic mode. It didn’t help. Plugged in a custom module to log all those requests which took more than 5 seconds to complete. It didn’t pick up any request of interest. Enabled ‘Failed Request Tracing’ to log all those requests which take 20 or more seconds to complete. It didn’t log anything. Event Viewer, HTTPERR log, W3SVC logs or WAS logs don’t indicate anything. HTTPERR only has ‘_ _ Timer_ConnectionIdle _ _’ entries. There is not much traffic to the server. This can happen even if only two users are active. Next we captured TCP/IP traffic on both the user and server end with Wireshark, and below are brief details of this slowness: Browser sends a request for ~/User/Home/ (GET request) by setting up a receiving end point using port 'wlbs(port-2504)'. I'm not sure if this could be a problem in some way that the browser didn't hand-shake with the server first and assumed that the last connection was still open, whereas I browsed the same page 4 minutes ago and didn't perform any activity with the site after that. If I look at the HTTPERR log, it indicates that it has a ‘_ _ Timer_ConnectionIdle _ _ _’ entry for my last activity with the server. Browser (I was using Chrome) waits for any response from the server, doesn’t find any, then starts retransmitting the same request using the same end point after incrementing wait intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All these GET requests were received by the server as well. But the server didn’t send any packet to the client. Browser didn't see any response on the end point it set up in point 1, so it opened a new end point 'optiwave-lm (port-2524)', did a hand-shake with the server and transmitted the same request again. Server received it, processed it, and returned a successful response. What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why Failed Request Tracing didn't log anything, we haven't found any clue yet. The server served the same page successfully just 4 minutes ago. Looking forward to more suggestions/solutions. -- Thanks
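    For reference, the "custom module to log all those requests which took more than 5 seconds" mentioned above can be as small as the sketch below; the class name and the 5-second threshold are illustrative, not the code actually used. Registering it in web.config (or the IIS modules list) lets you confirm whether the slow requests ever reach ASP.NET at all:

        using System;
        using System.Diagnostics;
        using System.Web;

        // Minimal sketch: time every request and trace the ones over a threshold.
        public class SlowRequestLoggingModule : IHttpModule
        {
            private const int ThresholdMilliseconds = 5000;

            public void Init(HttpApplication application)
            {
                application.BeginRequest += delegate(object sender, EventArgs e)
                {
                    ((HttpApplication)sender).Context.Items["RequestStopwatch"] = Stopwatch.StartNew();
                };

                application.EndRequest += delegate(object sender, EventArgs e)
                {
                    HttpContext context = ((HttpApplication)sender).Context;
                    Stopwatch stopwatch = context.Items["RequestStopwatch"] as Stopwatch;
                    if (stopwatch != null && stopwatch.ElapsedMilliseconds > ThresholdMilliseconds)
                    {
                        Trace.TraceWarning("Slow request: {0} took {1} ms",
                            context.Request.RawUrl, stopwatch.ElapsedMilliseconds);
                    }
                };
            }

            public void Dispose() { }
        }

    Given that the Wireshark trace shows the retransmitted GETs never being answered, a module like this staying silent points at the connection being dropped below ASP.NET (HTTP.SYS or the network path) rather than inside the application.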

    Read the article

  • Streaming desktop with avconv - severe sound issues

    - by Tommy Brunn
    I'm trying to do some live streaming in Ubuntu 12.10, but I'm having some problems with audio. More specifically, the quality is complete garbage and it's at least 10 seconds out of sync with the video. I'm using an excellent guide found here to set up my loopback devices so that I can combine the desktop audio with the microphone input. It seems to work, as I'm able to stream both audio and video to Twitch.tv. But, as I said, the audio quality is terrible. The microphone audio is very, very low, but if I increase it, I get a horrible garbled sound that is absolutely unbearable. Nothing like that is present during VoIP calls or when recording sound alone with the sound recorder, so it's not an issue with the microphone itself. The entire audio stream is also delayed about 10-15 seconds compared to the video stream. I put together an imgur album of my settings. Here is some example output from when I'm streaming: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [x11grab @ 0x162fd80] device: :0.0+570,262 -> display: :0.0 x: 570 y: 262 width: 1280 height: 720 [x11grab @ 0x162fd80] shared memory extension found [x11grab @ 0x162fd80] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0+570,262': Duration: N/A, start: 1353181686.735113, bitrate: 884736 kb/s Stream #0.0: Video: rawvideo, bgra, 1280x720, 884736 kb/s, 30 tbr, 1000k tbn, 30 tbc [alsa @ 0x163fce0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x163fce0] Estimating duration from bitrate, this may be inaccurate Input #1, alsa, from 'pulse': Duration: N/A, start: 1353181686.773841, bitrate: N/A Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' [buffer @ 0x1641ec0] w:1280 h:720 pixfmt:bgra [scale @ 0x1642480] w:1280 h:720 fmt:bgra -> w:852 h:480 fmt:yuv420p flags:0x4 [libx264 @ 0x165ae80] VBV maxrate unspecified, assuming CBR [libx264 @ 0x165ae80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x165ae80] profile Main, level 3.1 [libx264 @ 0x165ae80] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=30 rc=cbr mbtree=1 bitrate=712 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=712 vbv_bufsize=512 nal_hrd=none ip_ratio=1.25 aq=1:1.00 Output #0, flv, to 'rtmp://live.justin.tv/app/live_23011330_Pt1plSRM0z5WVNJ0QmCHvTPmpUnfC4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuv420p, 852x480, q=-1--1, 712 kb/s, 1k tbn, 30 tbc Stream #0.1: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 712 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame) Press ctrl-c to stop encoding frame= 17 fps= 0 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 32 fps= 31 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 40 fps= 23 q=29.0 size= 44kB time=0.03 bitrate=13786.2kbits/s dup=frame= 47 fps= 21 
q=31.0 size= 93kB time=2.73 bitrate= 277.7kbits/s dup=0frame= 62 fps= 23 q=29.0 size= 160kB time=3.23 bitrate= 406.2kbits/s dup=0frame= 77 fps= 24 q=23.0 size= 209kB time=3.71 bitrate= 462.5kbits/s dup=0frame= 92 fps= 25 q=20.0 size= 267kB time=4.91 bitrate= 445.2kbits/s dup=0frame= 107 fps= 25 q=20.0 size= 318kB time=5.41 bitrate= 482.1kbits/s dup=0frame= 123 fps= 26 q=18.0 size= 368kB time=5.96 bitrate= 505.7kbits/s dup=0frame= 139 fps= 26 q=16.0 size= 419kB time=6.48 bitrate= 529.7kbits/s dup=0frame= 155 fps= 27 q=15.0 size= 473kB time=7.00 bitrate= 553.6kbits/s dup=0frame= 170 fps= 27 q=14.0 size= 525kB time=7.52 bitrate= 571.7kbits/s dup=0 frame= 180 fps= 25 q=-1.0 Lsize= 652kB time=7.97 bitrate= 670.0kbits/s dup=0 drop=32 //Here I stop the streaming video:531kB audio:112kB global headers:0kB muxing overhead 1.345945% [libx264 @ 0x165ae80] frame I:1 Avg QP:30.43 size: 39748 [libx264 @ 0x165ae80] frame P:45 Avg QP:11.37 size: 11110 [libx264 @ 0x165ae80] frame B:134 Avg QP:15.93 size: 27 [libx264 @ 0x165ae80] consecutive B-frames: 0.6% 0.0% 1.7% 97.8% [libx264 @ 0x165ae80] mb I I16..4: 7.3% 0.0% 92.7% [libx264 @ 0x165ae80] mb P I16..4: 0.1% 0.0% 0.1% P16..4: 49.1% 1.2% 2.1% 0.0% 0.0% skip:47.4% [libx264 @ 0x165ae80] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:99.9% L0:42.5% L1:56.9% BI: 0.6% [libx264 @ 0x165ae80] coded y,uvDC,uvAC intra: 82.3% 87.4% 71.9% inter: 7.1% 8.4% 7.0% [libx264 @ 0x165ae80] i16 v,h,dc,p: 27% 29% 16% 28% [libx264 @ 0x165ae80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 21% 14% 8% 8% 8% 7% 5% 7% [libx264 @ 0x165ae80] i8c dc,h,v,p: 47% 22% 20% 11% [libx264 @ 0x165ae80] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x165ae80] ref P L0: 96.4% 3.6% [libx264 @ 0x165ae80] kb/s:474.19 Received signal 2: terminating. Any ideas on how I can resolve this? The video delay is perfectly acceptable, so I wouldn't think that it's a network issue that's causing the delay in the audio. Any help would be appreciated.
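    Two adjustments that often help with this particular symptom are giving the audio stream its own bitrate (in the log above the MP3 stream inherits the 712 kb/s video rate because no audio bitrate was set) and adding a gentle audio sync correction. The original command line isn't shown, so the following is only a sketch of those changes with the stream key replaced by a placeholder, using flags present in avconv 0.8:

        avconv -f x11grab -s 1280x720 -r 30 -i :0.0+570,262 \
               -f alsa -ac 2 -i pulse \
               -vcodec libx264 -s 852x480 -b 712k \
               -acodec libmp3lame -ar 44100 -ab 128k -async 1 \
               -threads 4 -f flv rtmp://live.justin.tv/app/YOUR_STREAM_KEY

    If the delay persists, it is worth running the same command writing to a local .flv file first; that separates encoder-side drift from buffering added by the RTMP ingest.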

    Read the article

  • Create Advanced Panoramas with Microsoft Image Composite Editor

    - by Matthew Guay
    Do you enjoy making panoramas with your pictures, but want more features than tools like Live Photo Gallery offer?  Here’s how you can create amazing panoramas for free with the Microsoft Image Composite Editor. Yesterday we took a look at creating panoramic photos in Windows Live Photo Gallery. Today we take a look at a free tool from Microsoft that will give you more advanced features to create your own masterpiece. Getting Started Download Microsoft Image Composite Editor from Microsoft Research (link below), and install as normal.  Note that there are separate version for 32 & 64-bit editions of Windows, so make sure to download the correct one for your computer. Once it’s installed, you can proceed to create awesome panoramas and extremely large image combinations with it.  Microsoft Image Composite Editor integrates with Live Photo Gallery, so you can create more advanced panoramic pictures directly.  Select the pictures you want to combine, click Extras in the menu bar, and select Create Image Composite. You can also create a photo stitch directly from Explorer.  Select the pictures you want to combine, right-click, and select Stitch Images… Or, simply launch the Image Composite Editor itself and drag your pictures into its editor.  Either way you start a image composition, the program will automatically analyze and combine your images.  This application is optimized for multiple cores, and we found it much faster than other panorama tools such as Live Photo Gallery. Within seconds, you’ll see your panorama in the top preview pane. From the bottom of the window, you can choose a different camera motion which will change how the program stitches the pictures together.  You can also quickly crop the picture to the size you want, or use Automatic Crop to have the program select the maximum area with a continuous picture.   Here’s how our panorama looked when we switched the Camera Motion to Planar Motion 2. But, the real tweaking comes in when you adjust the panorama’s projection and orientation.  Click the box button at the top to change these settings. The panorama is now overlaid with a grid, and you can drag the corners and edges of the panorama to change its shape. Or, from the Projection button at the top, you can choose different projection modes. Here we’ve chosen Cylinder (Vertical), which entirely removed the warp on the walls in the image.  You can pan around the image, and get the part you find most important in the center.  Click the Apply button on the top when you’re finished making changes, or click Revert if you want to switch to the default view settings. Once you’ve finished your masterpiece, you can export it easily to common photo formats from the Export panel on the bottom.  You can choose to scale the image or set it to a maximum width and height as well.  Click Export to disk to save the photo to your computer, or select Publish to Photosynth to post your panorama online. Alternately, from the File menu you can choose to save the panorama as .spj file.  This preserves all of your settings in the Image Composite Editor so you can edit it more in the future if you wish.   Conclusion Whether you’re trying to capture the inside of a building or a tall tree, the extra tools in Microsoft Image Composite Editor let you make nicer panoramas than you ever thought possible.  We found the final results surprisingly accurate to the real buildings and objects, especially after tweaking the projection modes.  
This tool can be both fun and useful, so give it a try and let us know what you’ve found it useful for. Works with 32 & 64-bit versions of XP, Vista, and Windows 7 Link Download Microsoft Image Composite Editor

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant down side consequences to poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management. The Problem of Constituent Data Management The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again. Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. 
With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out of the box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be as to not lose any log messages in case any single server goes offline - or in the case that an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at the minimum on two different EBS volumes in two availability zones, but S3 is a good place as well. It should also be realtime so that the messages arrive within seconds of their generation to two different availability zones. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here: Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then write some manual deduplication runs on the logfiles. I haven't found an easy solution to the duplication problem. Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use some of the clustering capable AMQP implementations, or fan out the deliver just as in the Flume case. AMQP seems to be a yet another moving part with several implementations and no real guidance on what works best this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from logstash to elasticsearch, assuming crashing servers in between. The fan-out solutions run in to the deduplication problem again. The best solution that would seem to handle all the cases, would be Beetle, which seems to provide high availability and deduplication via a redis store. However, I haven't seen any guidance on how to set this up with Logstash and Redis is one more moving part again for something that shouldn't be terribly difficult. Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves and just have them log directly to a removet ElasticSearch server. I think this should bring me reliable logging and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising. rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. 
And log rotations mess things up unless I'm careful. rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog. None of these solutions seem terribly good, and they have many unknowns still, so I am asking for more information here from people who have set up centralized reliable logging as to what are the best tools to achieve that goal.
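    For the rsyslog-with-RELP option described above, the usual way to avoid losing messages across restarts or outages is a disk-assisted queue per forwarding action. The snippet below is only a sketch (the hostnames and port are placeholders), and it does not remove the deduplication concern already raised:

        # /etc/rsyslog.d/50-relp-forward.conf (sketch)
        $ModLoad omrelp

        # queue directives apply to the next action only, so repeat them per target
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_a            # enables disk-assisted spooling
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1             # keep retrying instead of dropping
        *.* :omrelp:loghost-a.internal:2514

        $ActionQueueType LinkedList
        $ActionQueueFileName relp_b
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:loghost-b.internal:2514

    Non-syslog files would still need a separate shipper (for example rsyslog's imfile on the sending side), which is the limitation already noted.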

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor built for aspiring companies to create innovative internet based applications and solutions. Whether you’re a garage startup with very little capital or a Fortune 1000 company, the ability to quickly setup, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your finger tips is great peace of mind. Drivers Cost avoidance Time to market Scalability Solution Here’s a sketch of how a basic Software as a Service solution might be built out: Ingredients Web Role – this hosts the core web application. Each web role will host an instance of the software and as the user base grows, additional roles can be spun up to meet demand. Access Control – this service is essential to managing user identity. It’s backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability but it’s also flexible enough to be combined with external Identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry specific identity providers as well. Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there’s a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant’s proprietary data to ensure isolation from other tenants. Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input and provides finer grained control over the system’s scalability and throughput. Caching (optional) – as a web site traffic grows caching can be leveraged to keep frequently used read-only, user specific, and application resource data in a high-speed distributed in-memory for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service. Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely. Training & Examples These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. 
New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. Developing Applications for the Cloud, 2nd Edition (eBook) This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud. Fabrikam Shipping (SaaS reference application) This is a full end to end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it in a full subscription based solution. The demo you find here is the result of that investigation See my Windows Azure Resource Guide for more guidance on how to get started, including more links web portals, training kits, samples, and blogs related to Windows Azure.
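    As a concrete illustration of the Worker Role ingredient above, the skeleton below is roughly what the background-processing entry point looks like with the Windows Azure SDK; the actual work (billing calls, data aggregation) is application-specific and only hinted at in the comments:

        using System;
        using System.Diagnostics;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        // Minimal worker role sketch for asynchronous SaaS tasks (billing, aggregation, ...)
        public class BackgroundWorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                while (true)
                {
                    // Dequeue work items / run scheduled aggregation here.
                    Trace.WriteLine("Processing background work...", "Information");
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }

            public override bool OnStart()
            {
                // Allow more concurrent outbound connections to external services such as billing providers.
                System.Net.ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }

    Keeping this loop out of the web roles is what lets the web tier scale purely on user traffic while background throughput is tuned independently by adding or removing worker instances.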

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >