Search Results

Search found 11509 results on 461 pages for 'description logic'.


  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11g. This helps customers send a business message and receive the corresponding business response synchronously. We would like to keep this blog entry as a three-part series: the first part covers the Oracle B2B configuration details, the second describes how the feature can be consumed and utilized in an enterprise using a composites-backed model, and the last part will cover the more sophisticated seeded support built on the Oracle B2B platform (note: the last one is still in the description phase and an ETA hasn't been finalized yet).
    Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. The 'callout' attached to the agreement is expected to deliver the incoming business message (inbound) to the back-end application, get the corresponding business response from the back-end, and deliver it to Oracle B2B as its output. The output of the 'callout' is processed as the outbound message, and the same is attached as the response for the inbound message.
    Requirements to enable Sync Support:
    Outbound side: an Outbound Agreement to send the business message request, and an Inbound Agreement to receive the business message response.
    Inbound side: an Inbound Agreement to receive the business message request, an Outbound Agreement to send the business message response, and an Agreement Level Callout to deliver the inbound request to the back-end and get the corresponding business response.
    This feature is supported only for HTTP-based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available Transports in Oracle B2B.
    Configuration:
    Outbound side: Add "syncresponse=true" as an "Additional Transport Header" parameter in the remote Trading Partner's HTTP delivery channel configuration. This enables Oracle B2B to process the HTTP response as an inbound message and deliver it to the back-end application. All other configuration related to Agreement and Document setup remains the same.
    Inbound side: There is no change in Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that delivers the inbound message to the back-end, gets the corresponding business response from the back-end, and attaches that response as its output. Oracle B2B treats the output of the 'callout' as the outbound message and delivers it to the Trading Partner as the synchronous HTTP response. The requests that need to be processed synchronously should be received by the "syncreceiver" (http://:/b2b/syncreceiver) endpoint in Oracle B2B.
    Exception Handling: Existing Oracle B2B exception handling applies to this use case as well. Here's the sample callout, SampleSyncCallout.java. We will get you the second part, which talks about the SOA composites-backed model for designing the "Sync Support" use case from back-end to Trading Partners. Stay tuned.

    Read the article

  • Querying Networking Statistics: dlstat(1M)

    - by user12612042
    Oracle Solaris 11 took another big leap forward in networking technologies, providing a reliable, secure and scalable infrastructure to meet the growing needs of today's datacenter implementations. Oracle Solaris 11 introduced a new and powerful network stack architecture, also known as Project Crossbow. From Solaris 11 onwards, we introduced a command line tool, dlstat(1M), to query network statistics. dlstat (for datalink statistics) is the statistics-querying counterpart of dladm(1M), the datalink administration tool. The tool is very easy to get started with. Just type dlstat at a shell prompt on Solaris 11 (or later). For example:
        # dlstat
        LINK     IPKTS   RBYTES    OPKTS   OBYTES
        net0   834.11K  145.91M  575.19K  104.24M
        net1     7.87K    2.04M        0        0
    In this example, the system has two datalinks, net0 and net1. The output columns denote input packets/bytes as well as output packets/bytes. The numbers are abbreviated in xxx.xxUnit format. However, one can get the actual counts by simply running dlstat -u R (R for raw):
        # dlstat -u R
        LINK    IPKTS     RBYTES   OPKTS     OBYTES
        net0   834271  145931244  575246  104242934
        net1     7869    2036958       0          0
    In addition, dlstat also supports various subcommands. Running dlstat help lists them:
        # dlstat help
        The following subcommands are supported:
        Stats : show-aggr show-ether show-link show-phys show-bridge
        For more info, run: dlstat help {default|}
    I will only describe a couple of interesting subcommands/options here. For a comprehensive description of all the dlstat subcommands, refer to dlstat's official manual. For NICs that support multiple rings (e.g. ixgbe), dlstat show-phys -r allows us to query per-Rx-ring statistics. For example:
        # dlstat show-phys -r net4
        LINK  TYPE  INDEX  IPKTS  RBYTES
        net4    rx      0      0       0
        net4    rx      1      0       0
        net4    rx      2      0       0
        net4    rx      3      0       0
        net4    rx      4      0       0
        net4    rx      5      0       0
        net4    rx      6      0       0
        net4    rx      7      0       0
    In this case, net4 is just a vanity name for an ixgbe datalink. This view is especially useful if one wants to look at the network traffic spread across all the available rings. Furthermore, any of the dlstat commands can be run with the -i option to periodically query and display stats. For example, running dlstat show-phys -r net4 -i 5 will emit per-Rx-ring stats every 5 seconds. This is especially useful while analyzing a live system. Similarly, dlstat show-phys -t can be used to query per-Tx-ring stats. -r and -t can also be combined as dlstat show-phys -rt to query both Rx and Tx stats at the same time. Finally, there is also a quick way to dump ALL the stats: just run dlstat -A. You probably want to redirect this output to a file because you are going to get a whole load of stats :-).
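    Putting the interval option together with the ring options from above, a live watch of both receive and transmit rings on a link can be started as sketched below (the exact column layout varies by NIC driver and Solaris update); stop it with Ctrl-C:
        # dlstat show-phys -rt -i 5 net4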

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to have the two organizations use technology between them. An organization may choose to retain both systems, and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, and cutting to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import it into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet. Implementation: A distributed computing paradigm can be a good strategic solution to most of these strategies. Retaining both systems is made more simple by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connections than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the service into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here. References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    While working with any kind of advanced query, grouping is a very important factor. Grouping helps in executing special functions like sum, max, average etc. on certain groups of data inside the result set. Grouping is done with the help of the group clause (or the GroupBy method). Below is an example of the basic group functionality.
        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        var numberGroups =
            from num in numbers
            group num by num % 5 into numGroup
            select new { Remainder = numGroup.Key, Numbers = numGroup };
    In the above example we have grouped the values based on the remainder left over when divided by 5. First we group the values based on the remainder when divided by 5 into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that are contained in that group. Below is another example to explain the same.
        string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" };
        var wordGroups =
            from num in words
            group num by num[0] into grp
            select new { FirstLetter = grp.Key, Words = grp };
    In the above example we are grouping the values by the first character of the string (num[0]). Just like the ordering operators, the group by clause also allows us to write our own logic for the equality comparison (that means we can group items while ignoring case, for example, by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example.
        public class AnagramEqualityComparer : IEqualityComparer<string>
        {
            public bool Equals(string x, string y)
            {
                return getCanonicalString(x) == getCanonicalString(y);
            }
            public int GetHashCode(string obj)
            {
                return getCanonicalString(obj).GetHashCode();
            }
            private string getCanonicalString(string word)
            {
                char[] wordChars = word.ToCharArray();
                Array.Sort<char>(wordChars);
                return new string(wordChars);
            }
        }
        string[] anagrams = { "from   ", " salt", " earn", "  last   ", " near " };
        var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer());
    Vikram
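    To make the shape of the grouped results concrete, here is a small self-contained sketch (my own addition, not from the original post) that iterates over the first query's groups and prints each key with its members; it assumes a plain console application:
        using System;
        using System.Linq;

        class GroupDemo
        {
            static void Main()
            {
                int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

                var numberGroups =
                    from num in numbers
                    group num by num % 5 into numGroup
                    select new { Remainder = numGroup.Key, Numbers = numGroup };

                // Each element exposes the group key plus an enumerable of its members.
                foreach (var g in numberGroups)
                {
                    Console.WriteLine("Remainder {0}: {1}", g.Remainder, string.Join(", ", g.Numbers));
                }
            }
        }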

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad set of audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective. First, this release is about Business Driven Application Management. The traditional paradigm of component centric systems management simply cannot satisfy the management needs of modern distributed applications, as they do not provide adequate visibility of whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users so that valuable IT resources can be better focused to help deliver better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience. Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to database, host machines and storage devices, etc... can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used do deep dive diagnostics for each of the application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technologies, JVM Diagnostic and Composite Application Monitoring and Modeler, are now fully integrated into Enterprise Manager so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center so that hardware level events can be centrally monitored via Grid Control. Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occur, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, then they would use some sort of support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches. 
If further help is needed, administrators can open service requests right from within Oracle Enterprise Manager and track status updates. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a Superior Ownership Experience for their Oracle products. So there you have it. This is a brief 50,000-foot overview of Oracle Enterprise Manager 11g. We know you are hungry for the details. We are going to write about them in the coming days and weeks. For those of you that absolutely can't wait to find out more, you may download our software to try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32- and 64-bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned to the Oracle Enterprise Manager blog for additional updates.

    Read the article

  • How do you install a USB CD Rom drive?

    - by Matt Allen
    Hello, I recently purchased a USB CD ROM drive, but I don't know how to get it to work with my computer which runs Ubuntu 10.04. http://www.amazon.com/gp/product/B00303H908/ref=oss_product When I issue the lsusb command, it shows up as: Bus 002 Device 016: ID 05e3:0701 Genesys Logic, Inc. USB 2.0 IDE Adapter The computer doesn't recognize it automatically. How can I get this drive to show up as an actual drive on my computer? If this particular drive can't handle Linux, can you recommended one which can and provide a link to it so I can purchase it? Thanks! Update: I was asked by Scaine to run a command and report back with the output: joe@joe-laptop:~$ tail -f /var/log/kern.log Dec 29 12:51:35 joe-laptop kernel: [103190.551437] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.551446] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.551463] end_request: I/O error, dev sr1, sector 0 Dec 29 12:51:35 joe-laptop kernel: [103190.877542] sr 7:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 12:51:35 joe-laptop kernel: [103190.877551] sr 7:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 12:51:35 joe-laptop kernel: [103190.877559] Info fld=0x0, ILI Dec 29 12:51:35 joe-laptop kernel: [103190.877562] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.877572] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.877588] end_request: I/O error, dev sr1, sector 0 Dec 29 13:08:46 joe-laptop kernel: [104221.558911] usb 2-2.2: USB disconnect, address 16 Then when I plugged the drive back into the computer, I got: Dec 29 13:10:29 joe-laptop kernel: [104324.668320] usb 2-2.2: new high speed USB device using ehci_hcd and address 17 Dec 29 13:10:29 joe-laptop kernel: [104324.761702] usb 2-2.2: configuration #1 chosen from 1 choice Dec 29 13:10:29 joe-laptop kernel: [104324.762700] scsi8 : SCSI emulation for USB Mass Storage devices Dec 29 13:10:29 joe-laptop kernel: [104324.762935] usb-storage: device found at 17 Dec 29 13:10:29 joe-laptop kernel: [104324.762938] usb-storage: waiting for device to settle before scanning Dec 29 13:10:34 joe-laptop kernel: [104329.760521] usb-storage: device scan complete Dec 29 13:10:34 joe-laptop kernel: [104329.761344] scsi 8:0:0:0: CD-ROM TEAC CD-224E 1.7A PQ: 0 ANSI: 0 CCS Dec 29 13:10:34 joe-laptop kernel: [104329.767425] sr1: scsi3-mmc drive: 24x/24x cd/rw xa/form2 cdda tray Dec 29 13:10:34 joe-laptop kernel: [104329.767612] sr 8:0:0:0: Attached scsi CD-ROM sr1 Dec 29 13:10:34 joe-laptop kernel: [104329.767720] sr 8:0:0:0: Attached scsi generic sg2 type 5 Dec 29 13:10:34 joe-laptop kernel: [104330.141060] sr 8:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 13:10:34 joe-laptop kernel: [104330.141069] sr 8:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 13:10:34 joe-laptop kernel: [104330.141077] Info fld=0x0, ILI Dec 29 13:10:34 joe-laptop kernel: [104330.141081] sr 8:0:0:0: [sr1] Add. 
Sense: Illegal mode for this track Dec 29 13:10:34 joe-laptop kernel: [104330.141090] sr 8:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 13:10:34 joe-laptop kernel: [104330.141106] end_request: I/O error, dev sr1, sector 0 Dec 29 13:10:34 joe-laptop kernel: [104330.141113] __ratelimit: 18 callbacks suppressed There was more output than this (the number of lines started growing after the drive was plugged back in, and kept growing), but this is the first few lines.

    Read the article

  • Backup Meta-Data

    - by BuckWoody
    I'm working on a PowerShell script to show me the trending durations of my backup activities. The first thing I need is the data, so I looked at the Standard Reports in SQL Server Management Studio, and found a report that suited my needs, so I pulled out the script that it runs and modified it to this T-SQL Script. A few words here - you need to be in the MSDB database for this to run, and you can add a WHERE clause to limit to a database, timeframe, type of backup, whatever. For that matter, I won't use all of the data in this query in my PowerShell script, but it gives me lots of avenues to graph:
        SELECT DISTINCT
               t1.name AS 'DatabaseName'
              ,(DATEDIFF(ss, t3.backup_start_date, t3.backup_finish_date)) AS 'DurationInSeconds'
              ,t3.user_name AS 'UserResponsible'
              ,t3.name AS backup_name
              ,t3.description
              ,t3.backup_start_date
              ,t3.backup_finish_date
              ,CASE WHEN t3.type = 'D' THEN 'Database'
                    WHEN t3.type = 'L' THEN 'Log'
                    WHEN t3.type = 'F' THEN 'FileOrFilegroup'
                    WHEN t3.type = 'G' THEN 'DifferentialFile'
                    WHEN t3.type = 'P' THEN 'Partial'
                    WHEN t3.type = 'Q' THEN 'DifferentialPartial' END AS 'BackupType'
              ,t3.backup_size AS 'BackupSizeKB'
              ,t6.physical_device_name
              ,CASE WHEN t6.device_type = 2 THEN 'Disk'
                    WHEN t6.device_type = 102 THEN 'Disk'
                    WHEN t6.device_type = 5 THEN 'Tape'
                    WHEN t6.device_type = 105 THEN 'Tape' END AS 'DeviceType'
              ,t3.recovery_model
        FROM sys.databases t1
        INNER JOIN backupset t3 ON (t3.database_name = t1.name)
        LEFT OUTER JOIN backupmediaset t5 ON (t3.media_set_id = t5.media_set_id)
        LEFT OUTER JOIN backupmediafamily t6 ON (t6.media_set_id = t5.media_set_id)
        ORDER BY backup_start_date DESC
    I'll munge this into my Excel PowerShell chart script tomorrow. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
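    Since the whole point is to feed this into a PowerShell chart, here is a rough sketch of the retrieval step (my own illustration, not the author's script). It assumes Invoke-Sqlcmd from the SQL Server PowerShell tools is available, that $query holds the T-SQL above, and that the server name is a placeholder:
        # Assumes the SQL Server PowerShell module/snap-in (Invoke-Sqlcmd) is loaded
        # and that $query contains the T-SQL shown above.
        $rows = Invoke-Sqlcmd -ServerInstance "localhost" -Database "msdb" -Query $query

        # Keep just the columns needed for a duration trend chart.
        $rows |
            Select-Object DatabaseName, backup_start_date, DurationInSeconds |
            Sort-Object backup_start_date |
            Format-Table -AutoSize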

    Read the article

  • OWB 11gR2 &ndash; Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and still get the benefits of slowly changing dimensions and cube loading? It is now possible through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you get the benefits of OWB's surrogate key handling and slowly changing dimension references when loading the fact table while using degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading. What you need to do is create a dimension in OWB that is used purely for ETL metadata; the dimension itself is never deployed (its table is, but holds no data). It has no surrogate keys, and it has a single level with one business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass the OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table). Now, go ahead and build your cube: use the regular TIMES dimension, for example, together with your degenerate dimension DD_ORDERNUMBER, and you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references. If you generate the SQL you will see the ON clause for matching includes the columns representing the degenerate dimension columns. Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where using this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help with and make the cube operator much more useful. Good to hear any comments.
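    The post mentions that the generated SQL's ON clause includes the degenerate dimension columns. Purely as an illustration of what that shape of statement can look like (the source, fact and column names SRC, SALES_FACT, ORDER_NUMBER, SALE_DATE and AMOUNT are assumptions, not OWB output), a hand-written sketch might be:
        -- Hypothetical sketch: degenerate column matched directly in the ON clause,
        -- regular dimension resolved via its surrogate key.
        MERGE INTO SALES_FACT f
        USING (
          SELECT s.ORDER_NUMBER, t.DIMENSION_KEY AS TIME_KEY, s.AMOUNT
          FROM   SRC s
          LEFT OUTER JOIN DD_ORDERNUMBER_TAB d
                 ON (s.ORDER_NUMBER = d.ORDER_NUMBER)   -- degenerate dimension, no surrogate
          LEFT OUTER JOIN TIMES t
                 ON (s.SALE_DATE = t.CALENDAR_DATE)     -- regular dimension surrogate lookup
        ) src
        ON (f.ORDER_NUMBER = src.ORDER_NUMBER AND f.TIME_KEY = src.TIME_KEY)
        WHEN MATCHED THEN UPDATE SET f.AMOUNT = src.AMOUNT
        WHEN NOT MATCHED THEN INSERT (ORDER_NUMBER, TIME_KEY, AMOUNT)
             VALUES (src.ORDER_NUMBER, src.TIME_KEY, src.AMOUNT);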

    Read the article

  • Getting a 'base' Domain from a Domain

    - by Rick Strahl
    Here's a simple one: how do you reliably get the base domain from a full domain name or URI? Specifically, I've run into this scenario in a few recent applications when creating the Forms Auth cookie in my ASP.NET applications, where I explicitly need to force the domain name to the common base domain. So www.west-wind.com, store.west-wind.com, west-wind.com and dev.west-wind.com should all return west-wind.com. Here's the code where I need to use this type of logic for issuing an AuthTicket explicitly:
        private void IssueAuthTicket(UserState userState, bool rememberMe)
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
                DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString());

            string ticketString = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
            cookie.HttpOnly = true;

            if (rememberMe)
                cookie.Expires = DateTime.Now.AddDays(10);

            // write out a domain cookie
            cookie.Domain = Request.Url.GetBaseDomain();

            HttpContext.Response.Cookies.Add(cookie);
        }
    Unfortunately there's no Uri.GetBaseDomain() method, as I was surprised to find out. So I ended up creating one:
        public static class NetworkUtils
        {
            /// <summary>
            /// Retrieves a base domain name from a full domain name.
            /// For example: www.west-wind.com produces west-wind.com
            /// </summary>
            /// <param name="domainName">Dns Domain name as a string</param>
            /// <returns></returns>
            public static string GetBaseDomain(string domainName)
            {
                var tokens = domainName.Split('.');

                // only split 3 segments like www.west-wind.com
                if (tokens == null || tokens.Length != 3)
                    return domainName;

                var tok = new List<string>(tokens);
                var remove = tokens.Length - 2;
                tok.RemoveRange(0, remove);

                return tok[0] + "." + tok[1];
            }

            /// <summary>
            /// Returns the base domain from a domain name
            /// Example: http://www.west-wind.com returns west-wind.com
            /// </summary>
            /// <param name="uri"></param>
            /// <returns></returns>
            public static string GetBaseDomain(this Uri uri)
            {
                if (uri.HostNameType == UriHostNameType.Dns)
                    return GetBaseDomain(uri.DnsSafeHost);

                return uri.Host;
            }
        }
    I've had a need for this so frequently that it warranted a couple of helpers. The second helper is an extension method on the Uri class, which is what's used in the first code sample. This is the preferred way to call it, since the Uri class can differentiate between DNS names and IP addresses. If you use the first string-based version, there's a little more guessing going on if the URL is an IP address. There are a couple of small twists in dealing with 'domain names'. When passing a string only, there's a possibility of not actually passing a domain name but an IP address instead, so the code explicitly checks for three domain segments (can there be more than 3?). IPv4 addresses have 4 segments and IPv6 addresses have none, so they'll fall through. Then there are things like localhost or a NetBIOS machine name, which also come back in URL strings but also shouldn't be handled.
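    As a quick sanity check (my own snippet, not part of the original post), the helpers above behave like this for a few representative inputs; note that two-segment hosts, and multi-part TLDs such as .co.uk, simply fall through unchanged under the three-segment rule. The fragment assumes the NetworkUtils class above is referenced from a console context:
        Console.WriteLine(NetworkUtils.GetBaseDomain("www.west-wind.com"));    // west-wind.com
        Console.WriteLine(NetworkUtils.GetBaseDomain("store.west-wind.com"));  // west-wind.com
        Console.WriteLine(NetworkUtils.GetBaseDomain("west-wind.com"));        // west-wind.com (only 2 segments, returned as-is)
        Console.WriteLine(new Uri("http://dev.west-wind.com/login").GetBaseDomain()); // west-wind.com
        Console.WriteLine(new Uri("http://127.0.0.1/login").GetBaseDomain());  // 127.0.0.1 (not a DNS host name, returned as-is)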
Anyway, small thing but maybe somebody else will find this useful.
    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  Networking

    Read the article

  • How to upgrade boost lib using apt-get?

    - by sam
    I use ubuntu 11.04. My boost version: sam@sam:~/code/ros/pcl$ apt-cache showpkg libboost-all-dev Package: libboost-all-dev Versions: 1.42.0.1ubuntu1 (/var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages MD5: 72efad05a3c79394c125b79e1d4eb3a7 Reverse Depends: libvtk5-dev,libboost-all-dev libfeel++-dev,libboost-all-dev Dependencies: 1.42.0.1ubuntu1 - libboost-dev (0 (null)) libboost-date-time-dev (0 (null)) libboost-filesystem-dev (0 (null)) libboost-graph-dev (0 (null)) libboost-iostreams-dev (0 (null)) libboost-math-dev (0 (null)) libboost-program-options-dev (0 (null)) libboost-python-dev (0 (null)) libboost-regex-dev (0 (null)) libboost-serialization-dev (0 (null)) libboost-signals-dev (0 (null)) libboost-system-dev (0 (null)) libboost-test-dev (0 (null)) libboost-thread-dev (0 (null)) libboost-wave-dev (0 (null)) Provides: 1.42.0.1ubuntu1 - Reverse Provides: sam@sam:~/code/ros/pcl$ How to upgrade boost to 1.44+ by using apt tools? Thank you~ When I run apt-add-repository,it shows: sam@sam:~/code/ros/pcl$ sudo apt-add-repository ppa:timklingt/ppa Error reading https://launchpad.net/api/1.0/~timklingt/+archive/ppa: GnuTLS recv error (-9): A TLS packet with unexpected length was received. sam@sam:~/code/ros/pcl$ How to fix it? Thank you~ I try to install libboost1.46-all-dev: sam@sam:~/code/ros/pcl$ sudo apt-get install libboost1.46-all-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libboost1.46-all-dev : Depends: libboost1.46-dev but it is not going to be installed Depends: libboost-date-time1.46-dev but it is not going to be installed Depends: libboost-filesystem1.46-dev but it is not going to be installed Depends: libboost-graph1.46-dev but it is not going to be installed Depends: libboost-iostreams1.46-dev but it is not going to be installed Depends: libboost-math1.46-dev but it is not going to be installed Depends: libboost-program-options1.46-dev but it is not going to be installed Depends: libboost-python1.46-dev but it is not going to be installed Depends: libboost-regex1.46-dev but it is not going to be installed Depends: libboost-serialization1.46-dev but it is not going to be installed Depends: libboost-signals1.46-dev but it is not going to be installed Depends: libboost-system1.46-dev but it is not going to be installed Depends: libboost-test1.46-dev but it is not going to be installed Depends: libboost-thread1.46-dev but it is not going to be installed Depends: libboost-wave1.46-dev but it is not going to be installed E: Broken packages sam@sam:~/code/ros/pcl$ What's these error means? And how to solve it? Thank you~

    Read the article

  • How to (un)dock IBM Thinkpad X41 from X4 Dock(ing station) successfully?

    - by nutty about natty
    I'd like to start using my docking station again; however, it still doesn't work as it should, see the following bug descriptions (with special focus on Thinkpad X41 & the X4 Dock). Given that it still doesn't work (effective April 2012), my hope is fading that it will start working all of a sudden with Precise Pangolin at the end of the month. This issue is VERY important to me and I would be MOST grateful to anyone being able to sieve through the following links (some of which are actually quite recent) and translate their meaning into reliable and concrete simple (?) steps. I've read briefly about hal and udev, and can imagine that they are somewhat related to this, see links below. I don't want to fire at random. I don't want to tinker around with bash scripts if avoidable... Problem description (more or less ;-) Pressing the undock button on a "ThinkPad X4 Dock" with a ThinkPad X40 does not cause any udev events. And the lights on the dock never change to indicate it is safe to undock. and IBM Thinkpad X41 & docking station no joy :-( ... when pressing the blue undock button on the docking station: - The screen goes blank (with backlight remaining on), - with some SSD/HDD activity; - ctrl alt del causes a shut down after ... seconds, indicating that the system itself hasn't "crashed" but is still (somewhat ?) responsive. and With recent distributions, docking and undocking should function out of the box. You can monitor this by running # udevadm monitor and when you dock or press the undock button you should see a flurry of events. There are some issues though: No event on undock. - In some cases you may not get any events on undock. This is due to the ACPI dock drivers only registering the first logical Dock port they encounter and in some rare cases there may be more then one, such as on a ThinkPad X40 with ThinkPad X4 Dock. Patches are available, and are merged in 2.6.34. Now, if patches are available and merged into 2.6.34 - why isn't (un)docking simply working / fully supported in the latest version of Natty (which to my humble understanding has surpassed kernel version 2.6.34 a while ago)? More relevant links: ThinkPad X41 Docking Station issues and [HOWTO] Run scripts for laptop lid open/close and dock/undock events and finally Symptoms corrected by the latest BIOS Update - ThinkPad X41 - (Fix) USB devices connected to UltraBase X4 or ThinkPad X4 Dock may not be recognized in Boot Menu by pressing F12 during POST. Thanks!!

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together.
    Datatype mismatches in predicates that rely on implicit conversion. (Plamen Ratchev) This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:
        CREATE TABLE [dbo].[Page](
            [PageId] [int] IDENTITY(1,1) NOT NULL,
            [Title] [varchar](75) NOT NULL,
            [Sequence] [int] NOT NULL,
            [ThemeId] [int] NOT NULL,
            [CustomCss] [text] NOT NULL,
            [CustomScript] [text] NOT NULL,
            [PageGroupId] [int] NOT NULL);

        CREATE PROCEDURE PageSelectBySequence
        (
            @sequenceMin smallint,
            @sequenceMax smallint
        )
        AS
        BEGIN
            SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
            FROM [CMS].[dbo].[Page]
            WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
        END
    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.
    Using correlated subqueries instead of a join. (Dave_Levy / Plamen Ratchev) There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic. You are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins, but more likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:
        SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
               PageEffectParams.Name, PageEffectParams.Value,
               (SELECT EffectName
                FROM dbo.Effect
                WHERE EffectId = dbo.PageEffects.EffectId) AS EffectName
        FROM Page
        INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId
        INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
    This can and should be written as:
        SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
               PageEffectParams.Name, PageEffectParams.Value,
               EffectName
        FROM Page
        INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId
        INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
        INNER JOIN dbo.Effect ON dbo.Effect.EffectId = dbo.PageEffects.EffectId
    The correlated query may just as easily show up in the where clause. It's not a good idea in the select clause or the where clause.
    Few or no comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, and comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out.
    If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model.
    Use of functions in a WHERE clause. (Anil Das) Calling a function in the where clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function will not guarantee that there is a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
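    Coming back to the first smell above, the fix is usually trivial: declare the parameters with the same type as the column they filter. A minimal corrected version of the earlier procedure (illustrative only) looks like this:
        -- Parameters now match the int type of the Sequence column,
        -- so no implicit conversion is needed in the predicate.
        ALTER PROCEDURE PageSelectBySequence
        (
            @sequenceMin int,
            @sequenceMax int
        )
        AS
        BEGIN
            SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
            FROM [CMS].[dbo].[Page]
            WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
        END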

    Read the article

  • View Your Google Calendar in Outlook 2010

    - by Mysticgeek
    Google Calendar is a great way to share appointments and synchronize your schedule with others. Here we show you how to view your Google Calendar in Outlook 2010 too.
    Google Calendar: Log into Google Calendar and under My Calendars click on Settings. Now click on the calendar you want to view in Outlook. Scroll down the page and click on the ICAL button in the Private Address section (or Calendar Address if it's a public calendar), then copy the address to your clipboard.
    Outlook 2010: Open up your Outlook calendar, click the Home tab on the Ribbon, and under Manage Calendars click on Open Calendar \ From Internet… Now enter the link location into the New Internet Calendar field, then click OK. Click Yes to the dialog box that comes up verifying you want to subscribe to it. If you want more subscription options, click on the Advanced button. Here you can name the folder, type in a description, and choose if you want to download attachments.
    That is all there is to it! Now you will be able to view your Google Calendar in Outlook 2010. You'll also be able to view your local calendar and the Google Calendar side by side. Keep in mind that this only gives you the ability to view the Google Calendar: it's read-only. Any changes you make on the Google Calendar site will show up when you do a send/receive. If you live out of Outlook during the day, you might want the ability to view what is going on with your Google Calendar(s) as well. If you're an Outlook 2007 user, check out our article on how to view your Google Calendar in Outlook 2007.

    Read the article

  • Getting Current with Visual Studio 2010 for Web Developers

    - by plitwin
    I don't know about you, but I find it kind of crazy at times figuring out whether I have the latest of everything there is for the Visual Studio 2010 developer from Microsoft. (This does not include any third-party components, just recommended updates from Microsoft.) And to be honest, the msdn.microsoft.com and asp.net sites are not that helpful in figuring this out. In an effort to help, I have enumerated here what the latest VS 2010 setup should include, complete with download links. When you install everything here, you will be able to develop ASP.NET 4.0 Web Forms and ASP.NET MVC 3 applications and web sites, in addition to the other stuff your version of Visual Studio supports (e.g., Silverlight, WPF, etc.). These downloads also include NuGet and the Entity Framework 4.1, so there is no need to download this software separately.
    1. Visual Studio 2010. First of all, you need to purchase and install Visual Studio 2010 itself. For the free Express version, you can download it from Visual Web Developer 2010 Express.
    2. Visual Studio 2010 Service Pack 1 (released Spring 2011). This is a must-have download that fixes a bunch of bugs and adds a number of enhancements, including preliminary support for HTML5 and CSS3. See #4 below for better support of these web technologies. Download and install from the VS 2010 SP1 download page. You can find details on the features of the service pack here.
    3. ASP.NET MVC 3 Tools Update (released Spring 2011). If you are using ASP.NET MVC 3, then you should also download and install this update for Visual Studio from the ASP.NET MVC 3 Tools Update download page. This update improves Visual Studio's support for MVC 3, including better scaffolding, NuGet, Entity Framework 4.1, and more. A good overview of the updates can be found in Phil Haack's blog post.
    4. Web Standards Update for Microsoft Visual Studio 2010 SP1 (released June 2011). This is an update to VS 2010 SP1 that "brings VS 2010 intellisense & validation as close to W3C specification as we could get via means of an extension". Download and install from the Web Standards Update download page. A good description of the changes can be found in the Visual Web Developer Team blog post.
    Note: I don't control these download pages, so it is possible they will change. If so, I will do my best to update these links. This information was current as of June 24, 2011.

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014

    - by Kathryn Perry
    A Guest Post by Laura Vogel, Director, Oracle Marketing Cloud Events (pictured left) Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We’re hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly. Moscone West, Levels 2 and 3To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more! CX Spotlight Sessions “Accelerating Big Profits in Big Data,” Jeff Tanner, Baylor University “Using Content Marketing to Impact Every Stage of the Buyer’s Journey,” Jennifer Agustin, Bizo “Expanding Your Marketing with Proven Testing and Optimization,” Brian Border, Shutterfly and Matthew Balthazor, Epson “Modern Marketing: The New Digital Dialogue,” Cory Treffiletti, Oracle A Special Marquee SessionDell’s Hayden Mugford will speak on "The Digital Ecosystem: Driving Experience Through Contact Engagement.” She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured and tripled open and click-through rates versus push e-mail. It Wouldn’t Be an Oracle Marketing Cloud Event Without a Party!We’re hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group, Capital Cities! Join us Tuesday, September 30 from 7-9 p.m. Other Oracle Marketing Cloud Session Highlights Thought leadership by role Exploring the benefits of moving to the Cloud Product line roadmaps and innovations in Marketing Technical deep dives for product lines within Marketing Best practices and impactful business measurements Solutions that are integrated across CX Target AudienceSession content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, Social: Chief Marketing Officers, Vice Presidents, Directors and Managers. 
Outcomes: Customers attending Marketing—CX Central @ OpenWorld will be able to:
    Gain insight into delivering consistent cross-channel marketing
    Discover how to provide the right information to the right customer at the right time and with the right channel
    Get answers to burning questions and advice on business challenges
    Hear from other Oracle customers about recommended best practices to help their organization move forward
    Network and share ideas to help create a strategy for connecting with customers in better ways
    Resources At a Glance: Register Now | Track Site—View Marketing Sessions | Focus on Session Doc | Downloadable Justification Email
    OpenWorld is a fabulous way for you to see all that Oracle Marketing Cloud has to offer. Register today.

    Read the article

  • Adding AjaxOnly Filter in ASP.NET Web API

    - by imran_ku07
    Introduction: Currently, ASP.NET MVC 4, ASP.NET Web API and ASP.NET Single Page Applications are among the hottest topics in the ASP.NET community. In particular, a lot of developers love the inclusion of ASP.NET Web API in ASP.NET MVC. ASP.NET Web API makes it very simple to build HTTP RESTful services, which can be easily consumed from desktop/mobile browsers, Silverlight/Flash applications and many different types of clients. Client-side Ajax may be a very important consumer for various service providers. Sometimes, some HTTP service providers may need some (or all) of their services to be accessible only from Ajax. In this article, I will show you how to implement an Ajax-only filter in an ASP.NET Web API application.
    Description: First of all you need to create a new ASP.NET MVC 4 (Web API) application. Then, create a new AjaxOnly.cs file and add the following lines to this file:
        public class AjaxOnlyAttribute : System.Web.Http.Filters.ActionFilterAttribute
        {
            public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
            {
                var request = actionContext.Request;
                var headers = request.Headers;
                if (!headers.Contains("X-Requested-With") || headers.GetValues("X-Requested-With").FirstOrDefault() != "XMLHttpRequest")
                    actionContext.Response = request.CreateResponse(HttpStatusCode.NotFound);
            }
        }
    This is an action filter which simply checks the request for an X-Requested-With header with the value XMLHttpRequest. If the X-Requested-With header is not present in the request, or its value is not XMLHttpRequest, then the filter returns a 404 (NotFound) response to the client. Now just register this filter:
        [AjaxOnly]
        public string GET(string input)
    You can also register this filter globally if your Web API application is only targeted at Ajax consumers.
    Summary: ASP.NET Web API provides a framework for building RESTful services. Sometimes, you may need certain API services to be accessible only from Ajax. In this article, I showed you how to add an AjaxOnly action filter in ASP.NET Web API. Hopefully you will enjoy this article too.
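    To see the filter from the consumer's side, here is a small illustrative console client (my own sketch, not from the article) that calls a hypothetical endpoint twice with HttpClient: once without the header, which should get a 404, and once with X-Requested-With set to XMLHttpRequest, the value browsers' XMLHttpRequest (e.g. jQuery $.ajax) sends automatically. The URL is a placeholder.
        using System;
        using System.Net.Http;

        class AjaxOnlyClientDemo
        {
            static void Main()
            {
                // Placeholder URL for an action decorated with [AjaxOnly].
                var url = "http://localhost:12345/api/values?input=hello";

                using (var plain = new HttpClient())
                {
                    // No X-Requested-With header: the filter should answer 404 NotFound.
                    var response = plain.GetAsync(url).Result;
                    Console.WriteLine("Without header: {0}", response.StatusCode);
                }

                using (var ajaxLike = new HttpClient())
                {
                    // Add the header the filter looks for, mimicking an Ajax call.
                    ajaxLike.DefaultRequestHeaders.Add("X-Requested-With", "XMLHttpRequest");
                    var response = ajaxLike.GetAsync(url).Result;
                    Console.WriteLine("With header: {0}", response.StatusCode);
                }
            }
        }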

    Read the article

  • Using a WCF Message Inspector to extend AppFabric Monitoring

    - by Shawn Cicoria
    I read through Ron Jacobs post on Monitoring WCF Data Services with AppFabric http://blogs.msdn.com/b/endpoint/archive/2010/06/09/tracking-wcf-data-services-with-windows-server-appfabric.aspx What is immediately striking are 2 things – it’s so easy to get monitoring data into a viewer (AppFabric Dashboard) w/ very little work.  And the 2nd thing is, why can’t this be a WCF message inspector on the dispatch side. So, I took the base class WCFUserEventProvider that’s located in the WCF/WF samples [1] in the following path, \WF_WCF_Samples\WCF\Basic\Management\AnalyticTraceExtensibility\CS\WCFAnalyticTracingExtensibility\  and then created a few classes that project the injection as a IEndPointBehavior There are just 3 classes to drive injection of the inspector at runtime via config: IDispatchMessageInspector implementation BehaviorExtensionElement implementation IEndpointBehavior implementation The full source code is below with a link to the solution file here: [Solution File] using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.ServiceModel.Dispatcher; using System.ServiceModel.Channels; using System.ServiceModel; using System.ServiceModel.Configuration; using System.ServiceModel.Description; using Microsoft.Samples.WCFAnalyticTracingExtensibility; namespace Fabrikam.Services { public class AppFabricE2EInspector : IDispatchMessageInspector { static WCFUserEventProvider evntProvider = null; static AppFabricE2EInspector() { evntProvider = new WCFUserEventProvider(); } public object AfterReceiveRequest( ref Message request, IClientChannel channel, InstanceContext instanceContext) { OperationContext ctx = OperationContext.Current; var opName = ctx.IncomingMessageHeaders.Action; evntProvider.WriteInformationEvent("start", string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress)); return null; } public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState) { OperationContext ctx = OperationContext.Current; var opName = ctx.IncomingMessageHeaders.Action; evntProvider.WriteInformationEvent("end", string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress)); } } public class AppFabricE2EBehaviorElement : BehaviorExtensionElement { #region BehaviorExtensionElement /// <summary> /// Gets the type of behavior. /// </summary> /// <value></value> /// <returns>The type that implements the end point behavior<see cref="T:System.Type"/>.</returns> public override Type BehaviorType { get { return typeof(AppFabricE2EEndpointBehavior); } } /// <summary> /// Creates a behavior extension based on the current configuration settings. 
/// </summary> /// <returns>The behavior extension.</returns> protected override object CreateBehavior() { return new AppFabricE2EEndpointBehavior(); } #endregion BehaviorExtensionElement } public class AppFabricE2EEndpointBehavior : IEndpointBehavior //, IServiceBehavior { #region IEndpointBehavior public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) {} public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { throw new NotImplementedException(); } public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new AppFabricE2EInspector()); } public void Validate(ServiceEndpoint endpoint) { ; } #endregion IEndpointBehavior } }     [1] http://www.microsoft.com/downloads/details.aspx?FamilyID=35ec8682-d5fd-4bc3-a51a-d8ad115a8792&displaylang=en
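    Since the post wires the behavior up "at runtime via config" without showing the configuration itself, here is a hedged sketch of what the web.config registration could look like. The extension name, assembly name, behavior name and sample service/endpoint are assumptions; the type attribute must match the actual assembly-qualified name of AppFabricE2EBehaviorElement in your project.
        <system.serviceModel>
          <extensions>
            <behaviorExtensions>
              <!-- Assumed assembly name; use the real assembly-qualified type name. -->
              <add name="appFabricE2ETracking"
                   type="Fabrikam.Services.AppFabricE2EBehaviorElement, Fabrikam.Services" />
            </behaviorExtensions>
          </extensions>
          <behaviors>
            <endpointBehaviors>
              <behavior name="withE2ETracking">
                <appFabricE2ETracking />
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <services>
            <!-- Hypothetical service/endpoint; attach the behavior to your real endpoint. -->
            <service name="Fabrikam.Services.SomeService">
              <endpoint address="" binding="basicHttpBinding"
                        contract="Fabrikam.Services.ISomeService"
                        behaviorConfiguration="withE2ETracking" />
            </service>
          </services>
        </system.serviceModel>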

    Read the article

  • 3 Day Level 400 SQL Tuning Workshop 15 March in London, early bird and referral offer

    - by sqlworkshops
I want to inform you that we have organized the "3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop" in London, United Kingdom during March 15-17, 2011. This is a truly level 400 hands-on workshop; you can find the agenda, prerequisites, goal of the workshop and registration information at www.sqlworkshops.com/ruk. Charges are GBP 1800 (VAT excl.), with an early bird discount of GBP 125 until 18 February. We are also introducing a new referral plan: if you refer someone who participates in the workshop, you will receive an Amazon gift voucher for GBP 125.

Feedback from one of the participants who attended our November London workshop, Andrew, Senior SQL Server DBA from UBS, UK (www.ubs.com), November 26, 2010. Ratings on a scale of 1 to 5 (1=Poor & 5=Excellent):

Overall I was satisfied with the workshop: 5
Instructor maintained the focus of the course: 5
Mix of theory and practice was appropriate: 5
Instructor answered the questions asked: 5
The training facility met the requirement: 5
How confident are you with SQL Server 2008 performance tuning: 5

Additional comments from Andrew: The course was expertly delivered and backed up with practical examples. At the end of the course I felt my knowledge of SQL Server had been greatly enhanced and was eager to share it with my colleagues. I felt there was one prerequisite missing from the course description, an open mind, since the course changed some of my core product beliefs.

For additional workshop feedback, refer to www.sqlworkshops.com/feedbacks.

I will be delivering the Level 300-400 1 Day Microsoft SQL Server 2008 Performance Monitoring and Tuning Seminar in Istanbul and Ankara, Turkey during March. This event is organized by Microsoft Turkey; let me know if you are in Turkey and would like to attend. During September 2010 I delivered this Level 300-400 1 Day Microsoft SQL Server 2008 Performance Monitoring and Tuning Seminar in Zurich, Switzerland, organized by Microsoft Switzerland; the feedback was 4.85 out of 5, with about 100 participants. During November 2010, when I delivered the seminar in Lisbon, Portugal, organized by Microsoft Portugal, the feedback was 8.30 out of 9, with 130 participants.

Our Mission: Empower customers to fully realize the performance potential of Microsoft SQL Server without increasing the total cost of ownership (TCO), and achieve high customer satisfaction in every consulting engagement and workshop delivery.

Our Business Plan: Provide useful content in webcasts, articles and seminars to gain visibility for consulting engagements and workshop delivery opportunities. Help us by forwarding this email to your SQL Server friends and colleagues.

Looking forward,
R Meyyappan & Team @ www.SQLWorkshops.com
LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money, whether you rent space at a data-center or build out a local facility. In some cases, this can be a catalyst for evaluating options to remove the infrastructure requirement entirely by moving to a distributed computing environment.

Implementation: There are three main options for moving to a distributed computing environment.

Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but also the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side-benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage to this form of computing: choice.

References:
5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
All great developers write reusable code, don't they? Well, maybe, but as with all statements regarding what "great" developers do or don't do, it's probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don't repeat yourself), moving logic into specific "helper" modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse.

Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion, then the potential for reuse simply "drops out" as a natural by-product.

Programmers, of course, care about these principles, about encapsulation and clean interfaces that don't expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which, if we write them with enough care, we can plug in to many places.

I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I'd add), he'd never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He'd often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it.

Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the "same" code, it's likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • Getting Started With Sinatra

    - by Liam McLennan
Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

require 'rubygems'
require 'sinatra'

get '/hi' do
  "Hello World!"
end

A haml view is rendered from a route like this:

get '/' do
  haml :name_of_your_view
end

Haml is also new to me. It is a Ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like:

%html
  %head
    %title Hello World
  %body
    %div Hello World

You can see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read.

Based on my syntax highlighter for Gherkin, I have started to build a Sinatra web application that publishes syntax-highlighted Gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc.

The first thing I want my application to be able to do is display a list of the features that it knows about. This happens when a user requests the root of the application. Here is my Sinatra handler:

get '/' do
  feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
  @features = feature_service.features(settings.feature_path, settings.feature_extensions)
  haml :index
end

The handler and the view are in the same scope, so the @features variable will be available in the view. This is the same way that Rails passes data between actions and views. The view that renders the result is:

%h2 Features
%ul
  - @features.each do |feature|
    %li
      %a{:href => "/feature/#{feature.name}"}= feature.name

Clearly this is not a complete web page. I am using a layout to provide the basic html page structure (a minimal layout is sketched at the end of this post). This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like:

When the user clicks on one of the links I want to display the contents of that feature file. The required handler is:

get '/feature/:feature' do
  @feature_name = params[:feature]
  feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
  # TODO replace with feature_service.feature(name)
  @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
    feature.name == @feature_name
  end
  haml :feature
end

and the view:

%h2= @feature.name
%pre{:class => "brush: gherkin"}= @feature.description
%div= partial :_back_to_index
%script{:type => "text/javascript", :src => "/scripts/shCore.js"}
%script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
%script{:type => "text/javascript"}
  SyntaxHighlighter.all();

Now when I click on the Search link I get a nicely formatted feature file. If you would like to see the full source it is available on bitbucket.
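The layout file itself is not shown in the post. With Sinatra and haml, a views/layout.haml file is picked up automatically when a view is rendered, and the view's output replaces the yield call. A minimal layout for this application might look like the sketch below; the title and stylesheet path are assumptions for illustration, not taken from the original source:

%html
  %head
    %title Gherkin Features
    %link{:rel => "stylesheet", :type => "text/css", :href => "/styles/site.css"}
  %body
    #content
      = yield

Keeping the shared page structure in the layout is what lets the index and feature views stay as small as the snippets above.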

    Read the article

  • How to set conditional activation to taskflows?

    - by shantala.sankeshwar(at)oracle.com
This article describes how to implement conditional activation for task flows.

Use Case Description: Suppose we have a task flow dropped as a region on a page, and this region is enclosed in a popup. By default, when the page is loaded the region is loaded as well, because a region model needs to provide a viewId whenever one is requested. A consequence of this is that the TaskFlowRegionModel always has to initialize its task flow and execute the task flow's default activity in order to determine a viewId, even if the region is not visible on the page. This can lead to unnecessary performance overhead: the task flow is executed just to generate viewIds for regions that are never visible. To avoid this overhead, we set the task flow binding's activation property to 'conditional'. Below is a simple use case that shows exactly how to set conditional activation on a task flow binding (the resulting binding entry is sketched at the end of this post).

Steps:
1. Create an ADF Fusion web application.
2. Create business components for the Emp table.
3. Create a view criteria where deptno=:some_bind_variable.
4. Generate the EmpViewImpl.java file, add the code below, and expose the method on the client interface.

public void filterEmpRecords(Number deptNo) {
    // Code to filter the deptnos
    ensureVariableManager().setVariableValue("some_bind_variable", deptNo);
    this.applyViewCriteria(this.getViewCriteria("EmpViewCriteria"));
    this.executeQuery();
}

5. Create an ADF task flow with page fragments and drop the above method on the task flow.
6. Also drop the view activity (showEmp.jsff). Define a control flow case from the method activity to the view activity, and set the method activity as the default activity.
7. Create the main.jspx page and drop the above task flow as a region on this page.
8. Surround the region with a dialog, and surround the dialog with a popup (id is Popup1).
9. Drop a commandButton on the page and insert af:showPopupBehavior inside the commandButton:

<af:commandButton text="show popup" id="cb1">
  <af:showPopupBehavior popupId="::Popup1"/>
</af:commandButton>

10. If we run the main page now, we will notice that the method action gets called even before the popup is launched. We can avoid this by setting the activation property of the task flow binding to conditional.
11. Go to the bindings of the main page, select the task flow binding, set its activation property to 'conditional' and its active property to the Boolean value #{Somebean.popupVisible}. By default this value should be false.
12. We need to set this Boolean value to true only when the popup is launched. This is achieved by inserting a setPropertyListener inside the popup:

<af:setPropertyListener from="true" to="#{Somebean.popupVisible}" type="popupFetch"/>

13. Now if we run the page, the method action is not called until we click the 'show popup' button.
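For reference, after step 11 the task flow executable in the page definition (for example main.jspxPageDef.xml) ends up looking roughly like the sketch below. The binding id and task flow path are assumptions based on the steps above, not values copied from the original workspace:

<!-- executables section of the page definition -->
<taskFlow id="empTaskFlow1"
          taskFlowId="/WEB-INF/emp-task-flow.xml#emp-task-flow"
          activation="conditional"
          active="#{Somebean.popupVisible}"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>

With activation set to conditional, the task flow (and therefore the filterEmpRecords method activity) is not executed until the active expression evaluates to true, which is exactly what the popupFetch property listener arranges in step 12.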

    Read the article

  • Keep Track of Your Tasks with toDoo

    - by Asian Angel
A tasks list can be convenient but most times you can not include details for those tasks or have to have an online account to do so. If you want to keep your tasks list with you on your computer or laptop and be able to add plenty of details then you might want to look at toDoo. Note: Requires Adobe AIR (download link at bottom of article). toDoo in Action Once you have installed toDoo everything is rather straightforward for getting started. The first time that you start toDoo there will be a temporary "fill-in" for the "Subject & Details Areas". Simply highlight over the temporary text and add your information. Notice that if desired you can easily set a custom date and time for your tasks right below the "Details Area". Note: toDoo does not minimize to the "System Tray". Once you have everything set all that you need to do is click on "add task". Here was our first new task being viewed in the "toDoo Description Tab". Time to add a second task…here you can see the drop-down calendar. You can scroll through and select a different month very easily…just click on the desired day and it will be automatically set. Adding our second task… If you need to edit any of the details for a particular task you can do so in the "Edit toDoo Tab". This nice little app is convenient and easy to use. Conclusion ToDoo is a simple straightforward app that lets you keep track of your tasks list and relevant details without an online account (especially helpful if you are without a wireless connection at a given moment). If you are looking for more of a list approach that runs on your desktop, then check out our article on Doomi here. Links Download ToDoo at Softpedia Download ToDoo at Adobe Marketplace Download Adobe AIR

    Read the article

  • Shutdown Hangs for 5 Minutes on Kubuntu 14.04

    - by Augustinus
I've had persistent problems with a 5 minute hang at shutdown for the last three versions of Kubuntu (13.04, 13.10, and now 14.04). I suspect this is not a KDE-specific problem. Recently, I performed a fresh installation of Kubuntu 14.04 from a live-USB, and shutdown worked normally for about a week. The hang-up is now happening again, and I can't figure out why.

A brief description of the problem: The hang-up occurs with all methods of initiating a normal shutdown: clicking the shutdown or restart button in KDE, sudo shutdown -h now, sudo reboot. The shutdown splash screen appears. Using the down-arrow to access verbose messages, I see "Asking all remaining processes to terminate." This message remains for 5 minutes with no disk activity. Finally, a rapid series of messages flurries to the screen:

* All processes ended within 300 seconds... [ OK ]
nm-dispatcher.action: Caught signal 15, shutting down...
ModemManager[852]: <warn> Could not acquire the 'org.freedesktop.ModemManager1' service name
ModemManager[852]: <info> ModemManager is shut down
* Deactivating swap... [ OK ]
* Unmounting local filesystems... [ OK ]
* Will now restart

Possible Sources of the Problem: Before the problem re-appeared, I have mainly been doing routine computing. I have kept the system up-to-date using apt-get upgrade and apt-get dist-upgrade. The only other notable incident was a power failure. I do not have the computer connected to a UPS, so the power failure resulted in an immediate shutdown. Could this have corrupted an important file which must be accessed at shutdown? Is there any way that could cause a 5-minute hang-up?

Here is a list of packages that have been updated before the problem appeared:

bash iotop dpkg dpkg-dev python3-software-properties libdpkg-perl software-properties-kde software-properties-common akonadi-backend-mysql libakonadiprotocolinternals1 akonadi-server firefox-locale-en firefox flashplugin-installer libqapt2 libqapt2-runtime thunderbird openjdk-7-jre-headless thunderbird-locale-en kubuntu-driver-manager qapt-deb-installer openjdk-7-jre qapt-batch icedtea-7-jre-jamvm libelf1 dpkg dpkg-dev libdpkg-perl libjbig0 gettext-base libgettextpo-dev libssl1.0.0 libgettextpo0 libasprintf-dev linux-headers-3.13.0-24 gettext libasprintf0c2 linux-headers-3.13.0-24-generic openssl linux-libc-dev gstreamer0.10-qapt kubuntu-desktop linux-image-extra-3.13.0-24-generic linux-image-3.13.0-24-generic

I would appreciate any help with this.

    Read the article

< Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >