Search Results

Search found 944 results on 38 pages for 'exposed'.


  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy, because I volunteered to learn PowerShell and implement as much of the project in code as I could.)

    Background: We have been asked to create many distribution groups, say about 500+. These groups will contain two types of members. (Apologies if I get these terms wrong.) One type will be internal AD users, and the other type will be external users that I create Mail Contact entries for. We have been asked to make it so that a "Reply All" is not possible to any messages sent to these groups. I don't believe that is 100% possible to enforce, for the following reasons. My question is - is the following reasoning sound? If not, please feel free to educate me on if / how things can properly be implemented. Thanks!

    My reasoning on why it's impossible to prevent 100% of potential reply-all actions:

    1. An internal AD user could put the DL in their To: field. They then click the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to, at least, external user #2, which wouldn't even involve our Exchange mail relays.

    2. An internal AD user could place the DL in their Outlook To: field, then click the '+' button to expand the DL. They then fire off an email to everyone that was in the group. (But the individual addresses are listed in the To: field.) Because we now have a message sent to multiple recipients in the To: field, the addresses have been "exposed", and anyone is free to reply-all, and the messages just get sent to everyone in the To: field.

    Even if we try to set a Reply-To: field for all of these DLs, external mail clients are not obligated to abide by it, or to force users to abide by it.

    Are my two points above valid? (I admit, they are somewhat similar.) Am I correct to tell our leadership "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups that the Bcc: field is to be used at all times."? I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!

    Read the article

  • Creating a really public Windows network share

    - by Timur Aydin
    I want to create a shared folder under Windows (actually, Windows XP, Vista, and Win 7) which can be mounted from a Linux system without prompting for a username/password. But before attempting this, I first wanted to establish that this works between two Windows 7 machines.

    So, on machine A (the server that will hold the public share), I created a folder and set its permissions such that Everyone has read/write access. Then I visited Control Panel - Network and Sharing Center - Advanced Sharing Settings and selected "Turn off password protected sharing". Then, on machine B (the client that wants to access the public share with no username/password prompt), I tried to "map network drive" and was immediately presented with a password prompt. Some searching on Google suggested changing "Accounts: Limit local account use of blank passwords to console logon only" to "Disabled". Tried that, no luck, still getting the username/password prompt. If I enter the username/password, I am not prompted for it again and can use the share as long as the session is active. But still, I really need to access the share without any username/password transaction whatsoever, and this is not just a convenience-related thing.

    Here is the actual reason: The device that will access this Windows network share is an embedded system running uClinux. It will mount this share locally and then play media files. Its only user interface is a JavaScript-based web page. So, if there is going to be any username/password transaction, I would have to ask the user to enter them over the web page, which would be ridiculously insecure and completely exposed to packet sniffing.

    After hours of experimenting, I have found one way to make this happen, but I am not really very fond of it... I first create a new user (shareuser) and give it a password (sharepass). Then I open Group Policy Editor and set "Deny log on locally" to "A\shareuser". Then, I create a folder on A and share it so that shareuser has Read access to it. This way, shareuser cannot log in to A, but can access the shared folder. And if someone discovers shareuser/sharepass through network sniffing, they can just access the shared folder, but can't log on to A. The same thing can be achieved by enabling the Guest user and then going to Group Policy Editor and deleting "Guest" from the "Deny access to this computer from the network" setting. Again, Guest can mount the public share, but logging in to A as Guest won't be possible, because Guest is already not allowed to log in by default.

    So my question would be, how can I create a network share that is truly public, so that it can be mounted from a Linux machine without requiring a password? Sorry for the long question, but I wanted to explain the reason for really needing this...

    Read the article

  • I have been told to accept one error with Memtest86+

    - by DustByte
    Bought a new computer back in August with 4x4 GB RAM. Had problems with the RAM. They sent me four new sticks, which also generated errors. Singled out four sticks (from the eight I now had) that didn't generate any errors. Discovered by coincidence a new RAM error last week (this time no BSOD). Contacted the company. According to them there have been issues with a bad stock from last summer, so I got two tested 8 GB sticks sent to me.

    Been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours but so far only this one error. I contacted the company where I bought the computer. They wrote back:

    "I wouldn't worry about that one fail. We have had similar situations here whereby it passes numerous times but then fails once. We think it's an issue with Memtest; after all, memory is faulty or it isn't, so you can't really have it pass a few times, fail the next time around and then pass again! Please trust me on this and continue with the memory we sent you and if your problems continue we'll look at getting it replaced again."

    I gather from other forum posts that many people do not accept a single error. What could this single error signify: faulty RAM, a glitch in the Memtest86+ program, or something else?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error could occur and be acceptable, but repeated errors at the same address would indicate malfunction. Memtest has now run for 45 hours and I still have only one error. For everyone's information, I will keep running the test. In less than two days I am going away for a month. I will most likely leave Memtest running. As I do not have a UPS there is a risk that a power outage will ruin the experiment. The computer is a desktop so I cannot bring it with me (which would curiously have exposed it to more cosmic rays, as I will be flying ;)).

    Read the article

  • Web services not reachable through the web browser [closed]

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not exposed to the internet. When I browse to the following address I get the message shown below. What's the reason for not being able to see the directory? Am I missing something in my IIS configuration? Am I missing anything in my permissions? Just as a reference, I have other folders with web services and I have the same issue. When I log in to the server I am doing it with my Windows user and password (I am using Windows authentication). It's necessary to mention that when I put in the URL I get a popup screen asking for my user id and password, but it seems it's not able to validate them, since it keeps asking a couple of times. Let me know if you need more information to address this issue.

    http://appsvr02/Inetpub/wwwroot/DevWebApi/

    Internet Explorer cannot display the webpage

    What you can try:
    It appears you are connected to the Internet, but you might want to try to reconnect to the Internet.
    Retype the address.
    Go back to the previous page.

    Most likely causes:
    • You are not connected to the Internet.
    • The website is encountering problems.
    • There might be a typing error in the address.

    More information
    This problem can be caused by a variety of issues, including:
    • Internet connectivity has been lost.
    • The website is temporarily unavailable.
    • The Domain Name Server (DNS) is not reachable.
    • The Domain Name Server (DNS) does not have a listing for the website's domain.
    • If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    For offline users
    You can still view subscribed feeds and some recently viewed webpages.
    To view subscribed feeds:
    1. Click the Favorites Center button, click Feeds, and then click the feed you want to view.
    To view recently visited webpages (might not work on all pages):
    1. Click Tools, and then click Work Offline.
    2. Click the Favorites Center button, click History, and then click the page you want to view.
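
    One thing that stands out in the URL above is that it points at the physical Inetpub\wwwroot path rather than the virtual directory/service address that IIS exposes. For anyone debugging a similar setup, a quick sanity check is to request the .asmx endpoint itself and look at the status code that comes back. The sketch below is only an illustration; the virtual directory and .asmx file name are assumptions, not values from the question.

        using System;
        using System.Net;

        class EndpointCheck
        {
            static void Main()
            {
                // Hypothetical URL: the virtual directory and .asmx name are assumptions,
                // not values taken from the original post.
                var request = (HttpWebRequest)WebRequest.Create("http://appsvr02/DevWebApi/MyService.asmx");

                // Use the current Windows identity, matching the Windows-authentication setup described.
                request.UseDefaultCredentials = true;

                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("Status: {0}", response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    // A 401 here points at authentication/authorization in IIS,
                    // while a 404 suggests the virtual directory or file name is wrong.
                    Console.WriteLine("Request failed: {0}", ex.Message);
                }
            }
        }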

    Read the article

  • Reviews Cheyney Group Marketing: What accounting software is available in the market for small businesses?

    - by user225556
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed. Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool. We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdgePro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive; and prices climb if you opt to use common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts. All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn't know exactly where to post it.)

    We have an ASP.NET Web API single page application - a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment - that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While developing and testing on development boxes, using IIS Express on a localhost:port to run the site and access the Web API, everything is fine.

    Now we need to move it to a production environment (and we're having problems - or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users log in normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having access into the process domain directly. So, I guess what they want is: corp domain (end users) <- firewall (open port 80) <- DMZ (web server running IIS) <- firewall (open port 80 or 1433????) <- process domain (IIS for Web API and SQL Server)

    We're developers and don't really understand all the networking aspects, so we're wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitems"? In the second firewall between the DMZ and the process domain, would you have to open port 1433 or just port 80, since the Web API is an HTTP endpoint? Or is there some better way of deployment (i.e., how are ASP.NET Web API single page applications written all in HTML5 and JavaScript supposed to be deployed to production environments)?

    I'm sure there are other questions, but we'll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)
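
    Not an answer to the firewall question, but for reference, a server-side call from the DMZ web tier to a Web API endpoint in the process domain is just an HTTP GET, along the lines of the sketch below. The host name and route are hypothetical, and whether that hop uses port 80 or something else depends entirely on how the API site is bound in IIS.

        using System;
        using System.Net;

        class ApiSmokeTest
        {
            static void Main()
            {
                // Hypothetical host and route - substitute the real Web API server in the process domain.
                const string url = "http://processapi01/appname/api/getitems";

                using (var client = new WebClient())
                {
                    // A plain HTTP GET against the Web API endpoint; this is the traffic
                    // that would cross the firewall between the web tier and the API tier.
                    string json = client.DownloadString(url);
                    Console.WriteLine(json);
                }
            }
        }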

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
    A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4

    Drop the Soap: WCF, REST, and Pretty URIs in .NET 4

    Years ago I was working in libraries when the Web 2.0 revolution began. One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications. Those were my first impressions of REST; pretty URIs. Turns out there is a little more to it than that.

    REST is an architectural style that focuses on resources and structured ways to access those resources via the web. REST evolved as an "anti-SOAP" movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don't have frameworks hiding it all). One of the biggest benefits of REST is that browsers can talk to REST services directly, because REST works using URIs, query strings, cookies, SSL, and all those HTTP verbs that we don't have to think about anymore.

    If you are familiar with ASP.NET MVC then you have been exposed to REST at some level. MVC relies heavily on routing to generate consistent and clean URIs. REST for WCF gives you the same type of feel for your services. Let's dive in.

    WCF REST in .NET 3.5 SP1 and .NET 4

    This post will cover WCF REST in .NET 4, which drew heavily from the REST Starter Kit and community feedback. There is basic REST support in .NET 3.5 SP1, and you can also grab the REST Starter Kit to enable some of the features you'll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010.

    Getting Started

    To get started we'll create a basic WCF REST Service Application using the new on-line templates option in VS 2010. When you first install a template you are prompted with a dialog.

    Dude, Where's my .Svc File?

    The WCF REST template shows us the new way we can simply build services. Before we talk about what's there, let's look at what is not there:

    - The .svc file
    - An interface contract
    - Dozens of lines of configuration that you have to change to make your service work

    REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks. With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class. From there, the WCF runtime handles dispatching service calls to the methods based on the URI templates.

    global.asax:

        using System;
        using System.ServiceModel.Activation;
        using System.Web;
        using System.Web.Routing;

        namespace Blog.WcfRest.TimeService
        {
            public class Global : HttpApplication
            {
                void Application_Start(object sender, EventArgs e)
                {
                    RegisterRoutes();
                }

                private static void RegisterRoutes()
                {
                    RouteTable.Routes.Add(new ServiceRoute("TimeService",
                        new WebServiceHostFactory(), typeof(TimeService)));
                }
            }
        }

    The web.config contains some new structures to support a configuration-free deployment. Note that this is the default config generated with the template. I did not make any changes to web.config.

    web.config:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true">
              <add name="UrlRoutingModule"
                   type="System.Web.Routing.UrlRoutingModule, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            </modules>
          </system.webServer>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
            <standardEndpoints>
              <webHttpEndpoint>
                <!--
                    Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
                    via the attributes on the <standardEndpoint> element below
                -->
                <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
              </webHttpEndpoint>
            </standardEndpoints>
          </system.serviceModel>
        </configuration>

    Building the Time Service

    We'll create a simple "TimeService" that will return the current time. Let's start with the following code:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;

        namespace Blog.WcfRest.TimeService
        {
            [ServiceContract]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
            public class TimeService
            {
                [WebGet(UriTemplate = "CurrentTime")]
                public string CurrentTime()
                {
                    return DateTime.Now.ToString();
                }
            }
        }

    The endpoint for this service will be http://[machinename]:[port]/TimeService. To get the current time, http://[machinename]:[port]/TimeService/CurrentTime will do the trick.

    The Results Are In

    Remember That Route in global.asax? Turns out it is pretty important. When you set the route name, that defines the resource name starting after the host portion of the URI.

    Help Pages in WCF 4

    Another feature that came from the starter kit is the help pages. To access the help pages, simply append Help to the end of the service's base URI.

    Dropping the Soap

    Having dabbled with REST in the past and after using SOAP for the last few years, the WCF 4 REST support is certainly refreshing. I'm currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
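
    Not part of the original article, but a quick way to exercise the service above is a plain HTTP GET against the route plus the UriTemplate. The host and port below are placeholders for wherever the TimeService application is running.

        using System;
        using System.Net;

        class TimeServiceClient
        {
            static void Main()
            {
                // Placeholder host/port - adjust to wherever the TimeService application is hosted.
                const string uri = "http://localhost:8080/TimeService/CurrentTime";

                using (var client = new WebClient())
                {
                    // A simple GET; the WebGet UriTemplate "CurrentTime" maps this URI to the CurrentTime() method.
                    string response = client.DownloadString(uri);
                    Console.WriteLine("Raw response: {0}", response);
                }
            }
        }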

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we've used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task related issues, the aptly named Task class.

    The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate - nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading.

    Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single threaded, synchronous manner.

    Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments.

    The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task's IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling.

    Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks - Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks.

    Finally, Tasks provide a much nicer model for exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application. Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code:

        Task taskOne = new Task(() =>
        {
            throw new ApplicationException("Random Exception!");
        });
        Task taskTwo = new Task(() =>
        {
            throw new ArgumentException("Different exception here");
        });

        // Start the tasks
        taskOne.Start();
        taskTwo.Start();

        try
        {
            Task.WaitAll(new[] { taskOne, taskTwo });
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerExceptions.Count);
            foreach (var inner in e.InnerExceptions)
                Console.WriteLine(inner.Message);
        }

    Here, our routine will print:

        2
        Different exception here
        Random Exception!

    Note that we had two separate tasks, each of which raised a distinctly different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than with the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.
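
    The cancellation behavior described above is easy to see in a few lines. The following sketch is an illustration rather than part of the original article; it simply shows a CancellationToken passed to a Task and the resulting IsCanceled state.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class CancellationSketch
        {
            static void Main()
            {
                var cts = new CancellationTokenSource();
                CancellationToken token = cts.Token;

                // The task cooperatively checks the token; ThrowIfCancellationRequested raises
                // an OperationCanceledException tied to that token once cancellation is requested.
                Task work = new Task(() =>
                {
                    for (int i = 0; i < 100; i++)
                    {
                        token.ThrowIfCancellationRequested();
                        Thread.Sleep(50); // simulate a unit of work
                    }
                }, token);

                work.Start();
                cts.Cancel(); // request cancellation shortly after starting

                try
                {
                    work.Wait();
                }
                catch (AggregateException)
                {
                    // The cancellation exception surfaces here, wrapped in an AggregateException.
                }

                // Because the token passed to the Task constructor was canceled, IsCanceled is true.
                Console.WriteLine("IsCanceled: {0}", work.IsCanceled);
            }
        }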

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected quicker than before.

    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services with their own security. This is particularly useful when you base a Business Service on a query zone.

    User Propagation - As with other client/server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This not only means that the common database userid is still used, but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

    Enhanced Security Definitions - Security Administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones).

    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the timezone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

    JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.

    JMX Based Cache Management - In the last bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line.

    Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. At some sites they wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution.

    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arose when the contractor left the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.

    Read the article

  • “Being Agile” Means No Documentation, Right?

    - by jesschadwick
    Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point! Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more. My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation, in fact it says so right at the end of the manifesto! So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”). The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project. Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  
If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop! Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!”  After all, Agile is a Zen journey.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments.

    First of all, let's clarify what arguments and parameters are: method definition parameters are the input variables of the method; method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following in §7.5: "The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member." Given the above definitions, we can state that: parameters have always been named and still are; parameters have never been optional and still aren't.

    Named Arguments

    Until now, the way the C# compiler matched method call arguments with method parameters was by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the names of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call:

        Greeting("Mr.", "Morgado", 42);

    this method:

        public void Greeting(string title, string name, int age)

    will receive as parameters: title: "Mr.", name: "Morgado", age: 42.

    What this new feature allows is to use the names of the parameters to identify the corresponding arguments in the form name:value. Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters will be done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition:

        public static void Method(int first, int second, int third)

    this call declaration:

        int i = 0;
        Method(i, third: i++, second: ++i);

    will have this code generated by the compiler:

        int i = 0;
        int CS$0$0000 = i++;
        int CS$0$0001 = ++i;
        Method(i, CS$0$0001, CS$0$0000);

    which will give the method the following parameter values: first: 2, second: 2, third: 0.

    Notice the variable names. Although they are invalid C# identifiers, they are valid .NET identifiers, thus avoiding collisions between user written and compiler generated code. Besides allowing the re-ordering of the argument list, this feature is very useful for self-documenting the code, for example, when the argument list is very long or it is not clear, from the call site, what the arguments are.

    Optional Arguments

    Parameters can now have default values:

        public static void Method(int first, int second = 2, int third = 3)

    Parameters with default values must be the last in the parameter list, and the default value is used as the value of the parameter if the corresponding argument is missing from the method call declaration. For this call declaration:

        int i = 0;
        Method(i, third: ++i);

    this code will be generated by the compiler:

        int i = 0;
        int CS$0$0000 = ++i;
        Method(i, 2, CS$0$0000);

    which will give the method the following parameter values: first: 1, second: 2, third: 1.

    Because, when method parameters have default values, arguments can be omitted from the call declaration, this might seem like method overloading or a good replacement for it, but it isn't. Although methods like this:

        public static StreamReader OpenTextFile(
            string path,
            Encoding encoding = null,
            bool detectEncoding = true,
            int bufferSize = 1024)

    allow their calls to be written like this:

        OpenTextFile("foo.txt", Encoding.UTF8);
        OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096);
        OpenTextFile(
            bufferSize: 4096,
            path: "foo.txt",
            detectEncoding: false);

    the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like with constant fields, default parameter values of methods are exposed publicly (and remember that internal members might be publicly accessible - InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard coded in the calling code and, if the called assembly has its default values changed, they won't be picked up by already compiled code.

    At first glance, I thought that using optional arguments for badly written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it's OK to use default parameter values on privately accessed methods.
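
    Pulling the two features together, a small self-contained program (my own illustration, with made-up method and parameter names) might look like this:

        using System;

        class NamedOptionalDemo
        {
            // 'retries' and 'logPrefix' have default values, so callers may omit them.
            static void Connect(string host, int port, int retries = 3, string logPrefix = "[net]")
            {
                Console.WriteLine("{0} connecting to {1}:{2} (retries={3})", logPrefix, host, port, retries);
            }

            static void Main()
            {
                // Purely positional call.
                Connect("example.org", 80);

                // Named arguments can be reordered and document the call site.
                Connect(port: 8080, host: "example.org");

                // Mix: positional first, then named; only the arguments we care about are spelled out.
                Connect("example.org", 443, logPrefix: "[tls]");
            }
        }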

    Read the article

  • Aamir Khan’s Satyamev Jayate stirs a movement

    - by Gopinath
    Bollywood actor Aamir Khan is known for his dedication and hard work in inspiring millions of viewers through movies by discussing social problems and motivating people to solve them. His movie Rang De Basanthi seeded the Indian anti-corruption movement, Tare Zameen Par touched the problems faced by a few challenged kids, and his latest movie 3 Idiots exposed how education institutions in India are producing lakhs of donkeys out of colleges every year. He has extended his dedication to serving society to the small screen with the launch of the reality TV show Satyamev Jayate.

    Before you misjudge it as one of those nonsense drama / entertainment reality shows, let me tell you that it is not a typical music, games, fight or dance reality show. Satyamev Jayate is all about the real people of India, their problems and how to tackle them. This is not just a reality show, it's a movement to educate people about social evils. It's been many years since I spent a couple of hours in front of the TV, as most of the programs are too cynical or do not add much value. In my childhood I used to anxiously wait for the Mahabharat or He-Man TV shows to start, but after two decades I waited anxiously for the start of Satyamev Jayate. The wait was worth it and the 1 hour 30 minutes spent watching it meaningful. When was the last time you were so satisfied after watching a TV show and inspired to do something? I don't remember.

    Today, the show focused on female foeticide and its impact. It showed women who were tortured and forced to abort female foetuses. On the show a few brave women shared their experiences of giving birth to girl babies and the rough times they are going through with their in-laws & husbands. The show not only focused on the problem but also on the root cause of the evil, inspiring people working to tackle it, and what every individual can do to solve it. The best part of the show is that it's not a blame game. When there is a problem, most people quickly get into identifying who is wrong and start blaming them instead of solving the actual problem. Aamir did not blame anyone for female foeticide - neither the government who don't impose strict rules, nor the doctors who abort girl babies to make money, nor the mothers-in-law & husbands who torture the mothers of girl babies. He carefully highlighted the problem, showed horrifying statistics and their impact on future society, and a few inspiring people working to tackle the problem. He touched hearts and stirred a movement against the issue.

    For the first time ever I voted for a reality show through SMS, and it's for Satyamev Jayate. I'm proud to do so. Here are a few reactions of popular people, activists & media about the program:

    @aamir_khan absolutely the best program I have seen on TV in recent past. Thanku for converting an idiot box into an inspirationsl medium — Kiran Bedi (@thekiranbedi) May 6, 2012

    Satyamev Jayate proves tht TV 2 can b a tool of social change. — Shekhar Kapur (@shekharkapur) May 6, 2012

    i absolutely loved #satyamevjayate. at least aamir is doing what all of us only talk about. — Harsha Bhogle (@bhogleharsha) May 6, 2012

    Now Television will no longer be called an idiot box,the VISION of Television broadens up with #SatyamevJayate !!! — Madhur Bhandarkar (@mbhandarkar268) May 6, 2012

    The Sunday 11am slot seems to have come back with a bang… #SatyamevJayate — atul kasbekar (@atulkasbekar) May 6, 2012

    I was spellbound, says Prasoon Joshi - "It's a unique show. I was completely bowled over by it. It's a never-done-before concept."

    Aamir Khan strikes the right chord with Satyamev Jayate - "The format is quite crisp. Talking about the emotional connect, there are moments when your eyes well up with tears, but the various segments ensure there's more content than emotional drama."

    'Satyamev Jayate' gutsy, sensible show: Viewers - From filmmakers to clinical psychologists to professors, everyone has given the thumbs up to Aamir Khan's television show 'Satyamev Jayate', saying it is a gutsy, hard-hitting and sensible programme that strikes an emotional chord with the audiences.

    Aamir Khan's TV debut 'Satyamev Jayate' takes Twitter by storm - The roads of the capital sported a deserted look around 11 am on Sunday morning, as everyone was hooked on to their TV sets.

    Did you watch the program? What is your opinion? I'm waiting for 11 AM next Sunday. Are you?

    Read the article

  • Silverlight Cream for December 05, 2010 -- #1003

    - by Dave Campbell
    In this (Almost) All-Submittal Issue: John Papa(-2-), Jesse Liberty, Tim Heuer, Dan Wahlin, Markus Egger, Phil Middlemiss, Coding4Fun, Michael Washington, Gill Cleeren, MichaelD!, Colin Eberhardt, Kunal Chowdhury, and Rabeeh Abla.

    Above the Fold:
    Silverlight: "Two-Way Binding on TreeView.SelectedItem" - Phil Middlemiss
    WP7: "Taking Screen Shots of Windows Phone 7 Panorama Apps" - Markus Egger
    Training: "Beginners Guide to Visual Studio LightSwitch (Part - 4)" - Kunal Chowdhury

    Shoutouts:
    Don't let the fire go out... check out the Firestarter Labs.
    Bart Czernicki discusses the need for 64-bit Silverlight: Why a 64-bit runtime for Silverlight 5 Matters.
    Laurent Duveau is interviewed by the SilverlightShow folks to discuss his WP7 app: Laurent Duveau on Morse Code Flash Light WP7 Application.

    From SilverlightCream.com:

    Silverlight 5 Features - John Papa has a post up highlighting his take on what's cool in the new feature set for Silverlight 5... including an external link to the keynote.

    Silverlight Firestarter Keynote and Sessions - John Papa also has posted links to all the individual session videos... what a great resource!

    Yet Another Podcast #17 - Scott Guthrie - Jesse Liberty went big with his latest Yet Another Podcast... he is interviewing Scott Guthrie about the Firestarter, Silverlight, WP7, and more.

    Silverlight 5 Plans Revealed - With this post from Tim Heuer, I find myself adding a Silverlight 5 tag... so bring on the fun! Unless you've been overloaded like I have since last Thursday, you've probably seen this, but what the heck...

    Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code - Phoenix's own Dan Wahlin had a great WCF RIA Services presentation at the Firestarter last week, and his material and lots of other good links are up on his blog, and I'd say that even if he didn't have a couple of shoutouts to me in it :) Thanks Dan!!

    Taking Screen Shots of Windows Phone 7 Panorama Apps - Markus Egger helps us all out with a post on how to get screenshots of your WP7 Panorama app... in case you haven't tried it, it's not as easy as it sounds!

    Two-Way Binding on TreeView.SelectedItem - Phil Middlemiss is back with a post taking some of the mystery out of the TreeView control bound to a data context and dealing with the SelectedItem property... oh yeah, and throw all that into MVVM! Great tutorial as usual, a cool behavior, and all the source.

    Native Extensions for Microsoft Silverlight - Alan Cobb pointed me to a quick post up on the Coding4Fun site about the NESL (Native Extensions for Silverlight) from Microsoft that gives access to some cool features of Windows 7 from Silverlight... I added an NESL tag in case other posts appear on this subject.

    Silverlight Simple Drag And Drop / Or Browse View Model / MVVM File Upload Control - Michael Washington has another great tutorial up at CodeProject that expands on prior work he'd done with drag/drop file upload, with this post on integrating an updated browse/upload into ViewModel/MVVM projects, all of which is Blendable.

    The validation story in Silverlight (Part 1) - In good news for all of us, Gill Cleeren has started a tutorial series at SilverlightShow on Silverlight validation. The first one is up, discussing the basics...

    The Common Framework - MichaelD! has a WPF/Silverlight framework up with Facebook Authentication, Xaml-driven IOC, T4 synchronous WCF proxies, and WP7 on the roadmap... source on CodePlex, check it out and give him some feedback.

    Exploring Reactive Extensions (Rx) through Twitter and Bing Maps Mashups - If you've been waiting around to learn Rx, Colin Eberhardt has the post up for you (and me)... a great tutorial on Twitter and Bing Maps mashups... and all the code... for the Twitter immediate app, and also the UKSnow one we showed last week... check out the demo page, and grab the source!

    Beginners Guide to Visual Studio LightSwitch (Part - 4) - Kunal Chowdhury has the 4th part of his LightSwitch tutorial series up at SilverlightShow. In this one, he shows how to integrate multiple tables into a screen. It is here.

    Take Your Silverlight Application Full Screen & intercept all windows keys !! - Rabeeh Abla sent me this link to the blog describing a COM-exposed library that intercepts all keys when Silverlight is full-screen. There are a few I hit when I'm going through blogs that Ctrl-W (FF) just won't take down and that annoys me... so this might be a solution if you have that problem... worth a look anyway!

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One argument, however, is: what if you can't predict the growth of demand required of your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas for a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis?

    The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market. The problem, however, is that Commerce Server is still a product which has a legacy, tightly coupled dependency on Windows and IIS components. Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places, leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints.

    All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to deploy it exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising both the Azure AppFabric, in terms of the service bus and authentication services, and Windows Azure to host any online presence you may require. The architecture would be something similar to this:

    This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to abstract and aggregate the services further, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology - not just .NET. This means you are now able to construct apps for iPhone, integrate with Java based POS devices, and cover many other potential uses. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.

    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale in terms of the elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation from the scalability of Windows Azure, and simply switched back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer which needs the support of a scalable infrastructure - such as a high-demand, public-facing e-Commerce portal, or a promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning that utilising this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby - or equally in ASP.NET or Silverlight - without having to change any of the underlying business or Commerce Server implementation.

    This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!
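
    As a rough illustration of the service shape described above, a catalogue-style WCF contract might look like the sketch below. The interface, operation, and entity names are hypothetical (they are not the Commerce Server 2009 R2 API); they simply show how channel logic could be exposed as a service for the AppFabric service bus to relay.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical serializable entity standing in for a Commerce Server catalogue item.
        [DataContract]
        public class CatalogueItem
        {
            [DataMember] public string Sku { get; set; }
            [DataMember] public string DisplayName { get; set; }
            [DataMember] public decimal ListPrice { get; set; }
        }

        // Hypothetical service contract for the catalogue business domain.
        [ServiceContract]
        public interface ICatalogueService
        {
            [OperationContract]
            CatalogueItem GetItem(string sku);
        }

        // The real implementation would call into the underlying Commerce Server components on-site;
        // here it is stubbed out so the sketch stays self-contained.
        public class CatalogueService : ICatalogueService
        {
            public CatalogueItem GetItem(string sku)
            {
                return new CatalogueItem { Sku = sku, DisplayName = "Sample item", ListPrice = 9.99m };
            }
        }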

    Read the article

  • Searching for the last logon of users in Active Directory

    - by Robert May
    I needed to clean out a bunch of old accounts at Veracity Solutions, and wanted to delete those that hadn’t used their account in more than a year. I found that AD has a property on objects called the lastLogonTimestamp.  However, this value isn’t exposed to you in any useful fashion.  Sure, you can pull up ADSI Edit and and eventually get to it there, but it’s painful. I spent some time searching, and discovered that there’s not much out there to help, so I thought a blog post showing exactly how to get at this information would be in order. Basically, what you end up doing is using System.DirectoryServices to search for accounts and then filtering those for users, doing some conversion and such to make it happen.  Basically, the end result of this is that you get a list of users with their logon information and you can then do with that what you will.  I turned my list into an observable collection and bound it into a XAML form. One important note, you need to add a reference to ActiveDs Type Library in the COM section of the world in references to get to LargeInteger. Here’s the class: namespace Veracity.Utilities { using System; using System.Collections.Generic; using System.DirectoryServices; using ActiveDs; using log4net; /// <summary> /// Finds users inside of the active directory system. /// </summary> public class UserFinder { /// <summary> /// Creates the default logger /// </summary> private static readonly ILog log = LogManager.GetLogger(typeof(UserFinder)); /// <summary> /// Finds last logon information /// </summary> /// <param name="domain">The domain to search.</param> /// <param name="userName">The username for the query.</param> /// <param name="password">The password for the query.</param> /// <returns>A list of users with their last logon information.</returns> public IList<UserLoginInformation> GetLastLogonInformation(string domain, string userName, string password) { IList<UserLoginInformation> result = new List<UserLoginInformation>(); DirectoryEntry entry = new DirectoryEntry(domain, userName, password, AuthenticationTypes.Secure); DirectorySearcher directorySearcher = new DirectorySearcher(entry); directorySearcher.PropertyNamesOnly = true; directorySearcher.PropertiesToLoad.Add("name"); directorySearcher.PropertiesToLoad.Add("lastLogonTimeStamp"); SearchResultCollection searchResults; try { searchResults = directorySearcher.FindAll(); } catch (System.Exception ex) { log.Error("Failed to do a find all.", ex); throw; } try { foreach (SearchResult searchResult in searchResults) { DirectoryEntry resultEntry = searchResult.GetDirectoryEntry(); if (resultEntry.SchemaClassName == "user") { UserLoginInformation logon = new UserLoginInformation(); logon.Name = resultEntry.Name; PropertyValueCollection timeStampObject = resultEntry.Properties["lastLogonTimeStamp"]; if (timeStampObject.Count > 0) { IADsLargeInteger logonTimeStamp = (IADsLargeInteger)timeStampObject[0]; long lastLogon = (long)((uint)logonTimeStamp.LowPart + (((long)logonTimeStamp.HighPart) << 32)); logon.LastLogonTime = DateTime.FromFileTime(lastLogon); } result.Add(logon); } } } catch (System.Exception ex) { log.Error("Failed to iterate search results.", ex); throw; } return result; } } } Some important things to note: Username and Password can be set to null and if your computer us part of the domain, this may still work. Domain should be set to something like LDAP://servername/CN=Users,CN=Domain,CN=com You’re actually getting a com object back, so that’s why the LongInteger conversions are happening.  
The class for UserLoginInformation looks like this:   namespace Veracity.Utilities { using System; /// <summary> /// Represents user login information. /// </summary> public class UserLoginInformation { /// <summary> /// Gets or sets Name /// </summary> public string Name { get; set; } /// <summary> /// Gets or sets LastLogonTime /// </summary> public DateTime LastLogonTime { get; set; } /// <summary> /// Gets the age of the account. /// </summary> public TimeSpan AccountAge { get { TimeSpan result = TimeSpan.Zero; if (this.LastLogonTime != DateTime.MinValue) { result = DateTime.Now.Subtract(this.LastLogonTime); } return result; } } } } I hope this is useful and instructive. Technorati Tags: Active Directory
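As a closing usage sketch, the snippet below calls the finder and picks out accounts that have not logged on for over a year – the original goal of the exercise. It assumes the two Veracity.Utilities classes above plus the usual System and System.Collections.Generic usings; the LDAP path and the 365-day cut-off are placeholders you would replace with your own values.

// Hypothetical usage: list accounts whose last logon is more than a year old.
public static void ReportStaleAccounts()
{
    UserFinder finder = new UserFinder();
    IList<UserLoginInformation> logons = finder.GetLastLogonInformation(
        "LDAP://dc01/CN=Users,DC=example,DC=com", null, null);

    foreach (UserLoginInformation logon in logons)
    {
        // Accounts that have never logged on report an AccountAge of zero with the
        // class above, so check LastLogonTime separately if you need to catch those.
        if (logon.AccountAge > TimeSpan.FromDays(365))
        {
            Console.WriteLine("{0} last logged on {1:d}", logon.Name, logon.LastLogonTime);
        }
    }
}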

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?Remote IntraDoc Client is a Java enabled API that leverages simple transportation protocols like Socket, HTTP and JAX/WS to execute content service operations in WebCenter Content Server. Each operation by design in the Content Server will execute stateless and return a complete result of the request. Each request object simply specifies the in a Map format (key and value pairs) what service to call and what parameters settings to apply. The result responded with will be built on the same Map format (key and value pairs). The possibilities with RIDC is endless since you can consume any available service (even custom made ones), RIDC can be executed from any Java SE application that has any WebCenter Content Services needs. WebCenter Portal and the example Accelerator RIDC adapter frameworkWebCenter Portal currently integrates and leverages WebCenter Content Services to enable available use cases in the portal today, like Content Presenter and Doc Lib. However the current use cases only covers few of the scenarios that the Content Server has to offer, in addition to the existing use cases it is not rare that the customer requirements requires additional steps and functionality that is provided by WebCenter Content but not part of the use cases from the WebCenter Portal.The good news to this is RIDC, the second good news is that WebCenter Portal already leverages the RIDC and has a connection management framework in place. The million dollar question here is how can I leverage this infrastructure for my custom use cases. Oracle A-Team has during its interactions produced a accelerator adapter framework that will reuse and leverage the existing connections provisioned in the webcenter portal application (works for WebCenter Spaces as well), as well as a very comprehensive design patter to minimize the work involved when exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal including Spaces. How do I get started?Through a few easy steps you will be on your way, Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp) Open you Portal Application in JDeveloper (PS4/PS5) select to open the project in your application - this will add the project as a member of the application Update the Portal project dependencies to include the new RIDCCommon project Make sure that you WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form) You should by this stage have a similar structure in your JDeveloper Application Project Portal Project PortalWebAssets Project RIDCCommon Since the API is coming with some example operations that has already been exposed as DataControl actions, if you open Data Controls accordion you should see following: How do I implement my own operation? 
Create a new Java Class in for example com.oracle.ateam.portal.ridc.operation call it (GetDocInfoOperation) Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation The only method you actually are required to implement is execute(RIDCManager, IdcClient, IdcContext) The best practice to set object references for the operation is through the Constructor, example below public GetDocInfoOperation(String dDocName)By leveraging the constructor you can easily force the implementing class to pass right information, you can also overload the Constructor with more or less parameters as required Implement the execute method, the work you supposed to execute here is creating a new request binder and retrieve a response binder with the information in the request binder.In this case the dDocName for which we want the DocInfo Secondly you have to process the response binder by extracting the information you need from the request and restore this information in a simple POJO Java BeanIn the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a Member of the GetDocInfoOperation so we can return this from a access method. Since the RIDCCommon API leverage template pattern for the operations you are now required to add a method that will enable access to the result after the execution of the operationIn the example below we added the method public SearchDataObject getDataObject() - this method returns the pre processed SearchDataObject from the execute method  This is it, as you can see on the code below you do not need more than 32 lines of very simple code 1: public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation { 2: private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME"; 3: private String dDocName = null; 4: private SearchDataObject sdo = null; 5: 6: public GetDocInfoOperation(String dDocName) { 7: super(); 8: this.dDocName = dDocName; 9: } 10:   11: public boolean execute(RIDCManager manager, IdcClient client, 12: IdcContext userContext) throws Exception { 13: DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME); 14: dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName); 15: 16: DataBinder responseData = getResponseBinder(dataBinder); 17: processResult(responseData); 18: return true; 19: } 20: 21: private void processResult(DataBinder responseData) { 22: DataResultSet rs = responseData.getResultSet("DOC_INFO"); 23: for(DataObject dobj : rs.getRows()) { 24: this.sdo = new SearchDataObject(dobj); 25: } 26: super.setMessage(responseData.getLocal(ATTR_MESSAGE)); 27: } 28: 29: public SearchDataObject getDataObject() { 30: return this.sdo; 31: } 32: } How do I execute my operation? 
In the previous section we described how to create a operation, so by now you should be ready to execute the operation Step one either add a method to the class  com.oracle.ateam.portal.datacontrol.ContentServicesDC or a class of your own choiceRemember the RIDCManager is a very light object and can be created where needed Create a method signature look like this public SearchDataObject getDocInfo(String dDocName) throws Exception In the method body - create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocNameGetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName) Execute the operation via the RIDCManager instance rMgr.executeOperation(docInfo) Return the result by accessing it from the executed operationreturn docInfo.getDataObject() 1: private RIDCManager rMgr = null; 2: private String lastOperationMessage = null; 3:   4: public ContentServicesDC() { 5: super(); 6: this.rMgr = new RIDCManager(); 7: } 8: .... 9: public SearchDataObject getDocInfo(String dDocName) throws Exception { 10: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName); 11: boolean boolVal = rMgr.executeOperation(docInfo); 12: lastOperationMessage = docInfo.getMessage(); 13: return docInfo.getDataObject(); 14: }   Get the binaries! The enclosed code in a example that can be used as a reference on how to consume and leverage similar use cases, user has to guarantee appropriate quality and support.  Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip RIDC API Referencehttp://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • How to Increase the VMWare Boot Screen Delay

    - by Trevor Bekolay
    If you’ve wanted to try out a bootable CD or USB flash drive in a virtual machine environment, you’ve probably noticed that VMWare’s offerings make it difficult to change the boot device. We’ll show you how to change these options. You can do this either for one boot, or permanently for a particular virtual machine. Even experienced users of VMWare Player or Workstation may not recognize the screen above – it’s the virtual machine’s BIOS, which in most cases flashes by in the blink of an eye. If you want to boot up the virtual machine with a CD or USB key instead of the hard drive, then you’ll need more than an eye’s-blink to press Escape and bring up the Boot Menu. Fortunately, there is a way to introduce a boot delay that isn’t exposed in VMWare’s graphical interface – you have to edit the virtual machine’s settings file (a .vmx file) manually. Editing the Virtual Machine’s .vmx Find the .vmx file that contains the settings for your virtual machine. You chose a location for this when you created the virtual machine – in Windows, the default location is a folder called My Virtual Machines in your My Documents folder. In VMWare Workstation, the location of the .vmx file is listed on the virtual machine’s tab. If in doubt, search your hard drive for .vmx files. If you don’t want to use Windows default search, an awesome utility that locates files instantly is Everything. Open the .vmx file with any text editor. Somewhere in this file, enter in the following line… save the file, then close out of the text editor: bios.bootdelay = 20000 This will introduce a 20 second delay when the virtual machine loads up, giving you plenty of time to press the Escape button and access the boot menu. The number in this line is just a value in milliseconds, so for a five second boot delay, enter 5000, and so on. Change Boot Options Temporarily Now, when you boot up your virtual machine, you’ll have plenty of time to enter one of the keystrokes listed at the bottom of the BIOS screen on boot-up. Press Escape to bring up the Boot Menu. This allows you to select a different device to boot from – like a CD drive. Your selection will be forgotten the next time you boot up this virtual machine. Change Boot Options Permanently When the BIOS screen comes up, press F2 to enter the BIOS Setup menu. Switch to the Boot tab, and change the ordering of the items by pressing the “+” key to move items up on the list, and the “-” key to move items down the list. We’ve switched the order so that the CD-ROM Drive boots first. Once you make this change permanent, you may want to re-edit the .vmx file to remove the boot delay. Boot from a USB Flash Drive One thing that is noticeably missing from the list of boot options is a USB device. VMWare’s BIOS just does not allow this, but we can get around that limitation using the PLoP Boot Manager that we’ve previously written about. And as a bonus, since everything is virtual anyway, there’s no need to actually burn PLoP to a CD. Open the settings for the virtual machine you want to boot with a USB drive. Click on Add… at the bottom of the settings screen, and select CD/DVD Drive. Click Next. Click the Use ISO Image radio button, and click Next. Browse to find plpbt.iso or plpbtnoemul.iso from the PLoP zip file. Ensure that Connect at power on is checked, and then click Finish. Click OK on the main Virtual Machine Settings page. Now, if you use the steps above to boot using that CD/DVD drive, PLoP will load, allowing you to boot from a USB drive! 
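For reference, the relevant .vmx entries end up looking roughly like the fragment below. The boot delay line is the one quoted earlier (VMware normally writes values quoted; the unquoted form shown above works too), while the ide1:0 entries are what the Add Hardware wizard typically writes for an ISO-backed CD/DVD drive – the device node and file path here are assumptions and may differ in your virtual machine.

bios.bootdelay = "20000"

ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "C:\ISOs\plpbt.iso"
ide1:0.startConnected = "TRUE"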
Conclusion We’re big fans of VMWare Player and Workstation, as they let us try out a ton of geeky things without worrying about harming our systems. By introducing a boot delay, we can add bootable CDs and USB drives to the list of geeky things we can try out. Download PLoP Boot Manager Similar Articles Productive Geek Tips How To Switch to Console Mode for Ubuntu VMware GuestHack: Turn Off Debug Mode in VMWare Workstation 6 BetaStart Your Computer More Quickly by Delaying the Startup of a Service in VistaEnable Hidden BootScreen in Windows VistaEnable Copy and Paste from Ubuntu VMware Guest TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 OutlookStatView Scans and Displays General Usage Statistics How to Add Exceptions to the Windows Firewall Office 2010 reviewed in depth by Ed Bott FoxClocks adds World Times in your Statusbar (Firefox) Have Fun Editing Photo Editing with Citrify Outlook Connector Upgrade Error

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012 and following winners are for the category of ADF & Fusion Development. Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g all running on Oracle Linux. The primary driver was that of modernization and hence the reason Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations includes the use of business application portlets that have been published from ADF Faces Task Flows generated using WebCenter portlet libraries & Oracle Metadata Services (MDS) with multi-layered customizations using Oracle WebCenter Composer. PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM Software Solution, which captures and automates the WM business processes allowing Service Providers to allocate enough time and effort into Customer Service and Investment Strategies, under Advisory or Execution-Only Services. The Product is built upon the latest Web Technologies and ensures Best Practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM Customers' Portfolios. The new unified Wealth Management system offers an unparalleled User Interface taking full advantage of the user friendly ADF Faces Components to a great extent, all serving Private Banking purposes. The application offers a true Account Officer Cockpit with shallow navigation, one-click access to informed decisions and a perfect customer service. ADF Grids and Pivots, the Data Visualization Components, as well as the Calendar and Map Components are cleverly used to help the user eliminate the usage of Excel, Outlook and other systems.
PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard for their application. PCS Wealth Management Demo Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm’s challenge has been to not only develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying the customer engagement platform Ieveraging various Oracle technologies including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and “seamless” business solution with a consistent, modern “look and feel” all based on standard business processes which facilitate efficient collaboration with Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate with Oracle EBS and desperate applications to provide a rich user interface through the use use of Oracle ADF Faces Rich Client Components providing a self-service solution to their customers.

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the DataControls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter, so only a defined set of input keys is allowed, you need to change the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column which filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open theComponent Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This demarks the first part in the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field Point the client filter method property to a JavaScript function that you reference or add through using the af:resource tag and set the type property value to keyDown. <af:document id="d1">     <af:resource type="javascript" source="/js/filterHandler.js"/> … The filter definition looks as shown below <af:inputText label="Label 1" id="it1"                         value="#{vs.filterCriteria.Employe        <af:clientListener method="suppressCharacterInput"                                     type="keyDown"/> </af:inputText> The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
// Allow numbers, cursor control keys and delete keys
function suppressCharacterInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    // digits 0-9 are key codes 48-57; numeric keypad digits are 96-105
    if (!((_keyCode <= 57) || (_keyCode >= 96 && _keyCode <= 105))) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

// Allow characters, cursor control keys and delete keys
function suppressNumericInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    // check for numbers (48-57 on the main keyboard, 96-105 on the keypad)
    if ((_keyCode >= 48 && _keyCode <= 57) ||
        (_keyCode >= 96 && _keyCode <= 105)) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

But what if browsers don't allow JavaScript? Don't worry about this. If browsers did not support JavaScript then ADF Faces as a whole would not work and you would have a different problem.

    Read the article

  • NTFS Issues in Windows 7 and 2008 R2 - 'Is it a Bug?'

    - by renewieldraaijer
    I have been using the various versions of the Microsoft Windows product line since NT4 and I really thought I knew the ins and outs about the NTFS filesystem by now. There were always a few rules of thumb to understand what happens if you move data around. These rules were: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions. Only when you move data within the same partition, the permissions are kept."  Recently I was asked to assist in troubleshooting some NTFS related issues. This forced me to have another good look at this theory. To my surprise I found out that this theory does not completely stand anymore. Apparently some things have changed since the release of Windows Vista / Windows 2008. Since the release of these Operating Systems, a move within the same disk partition results in the data inheriting the permissions of the location it is being copied into. A major change in the NTFS filesystem you would think!  Not quite! The above only counts when the move operation is being performed by using Windows Explorer. A move by using the 'move' command from within a cmd prompt for example, retains the NTFS permissions, just like before in Windows XP and older systems. Conclusion: The Windows Explorer is responsible for changing the ACL's of the moved data. This is a remarkable change, but if you follow this theory, the resulting ACL after a move operation is still predictable.  We could say that since Windows Vista and Windows 2008, a new rule set applies: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. Same goes for moving data between disk partitions and within disk partitions. Only when you move data within the same partition by using something else than the Windows Explorer, the permissions are kept." The above behavior should be unchanged in Windows 7 / Windows 2008 R2, compared to Windows Vista / 2008. But somehow the NTFS permissions are not so predictable in Windows 7 and Windows 2008 R2. Moving data within the same disk partition the one time results in the permissions being kept and the next time results in inherited permissions from the destination location. I will try to demonstrate this in a few examples: Example 1 (Incorrect behavior): Consider two folders, 'Folder A' and 'Folder B' with the following permissions configured.                    Now we create the test file 'test file 1.txt' in 'Folder A' and afterwards move this file to 'Folder B' using Windows Explorer.                       According to the new theory, the file should inherit the permissions of 'Folder B' and therefore 'Group B' should appear in the ACL of 'test file 1.txt'. In the screenshot below the resulting permissions are displayed. The permissions from the originating location are kept, while the permissions of 'Folder B' should be inherited.                   Example 2 (Correct behavior): Again, consider the same two folders. This time we make a small modification to the ACL of 'Folder A'. We add 'Group C' to the ACL and again we create a file in 'Folder A' which we name 'test file 2.txt'.                    Next, we move 'test file 2.txt' to 'Folder B'.                       Again, we check the permissions of 'test file 2.txt' at the target location. We can now see that the permissions are inherited. This is what should be happening, and can be considered 'correct behavior' for Windows Vista / 2008 / 7 / 2008 R2. 
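If you want to reproduce tests like these yourself, the quickest way to inspect the resulting ACLs is from a command prompt. The drive letter and folder names below are placeholders matching the examples above; icacls simply prints the ACL of the file, and the move command is the code path that consistently preserves permissions.

rem Move the file with cmd.exe (permissions are retained)
move "E:\Folder A\test file 2.txt" "E:\Folder B\"

rem Show the resulting ACL at the target location
icacls "E:\Folder B\test file 2.txt"

rem ...then repeat the same move with Windows Explorer and compare the output.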
It remains uncertain why this behavior is so inconsistent. At this time, this is under investigation with Microsoft Support. The investigation has been going on for the last two weeks and it is beginning to look like there is no rational reason for this, other than a bug in the Windows Explorer in Windows 7 and 2008 R2. As soon as there is any certainty on this, I will note it here in this blog. The examples above are harmless tests using my own laptop. If you create the same set of folders and groups, and configure exactly the same permissions, you will see exactly the same behavior. Be sure to use Windows 7 or Windows 2008 R2. Initially the problem arose at a customer site where users' move operations on data on the file server produced unpredictable results. This resulted in the wrong set of people having access permissions on data that they should not have permissions to. Of course this is something we want to prevent at all costs. I have also done several tests with move operations by using the move command in a cmd prompt. This way the behavior is always consistent. The inconsistent behavior is only exposed when using the Windows Explorer to initiate the move operation, and only when using Windows 7 or Windows 2008 R2 systems. It is evident that this behavior changes when the ACL of a folder has been changed, for example by adding an extra entry. The reason for this remains uncertain though. To be continued…. A Dutch version of this post can be found at: http://blogs.platani.nl/?p=612

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip Oracle just announced its Next Generation Processor at the HotChips HC26 conference. As the Tech Lead in our Systems Division's Partner group, I had a front row seat to the extraordinary price performance advantage of Oracle current T5 and M6 based systems. Partner after partner tested  these systems and were impressed with it performance. Just read some of the quotes to see what our partner has been saying about our hardware. We just announced our next generation processor, the M7. This has 32 cores (up from 16-cores in T5 and 12-cores in M6). With 20 nm technology  this is our most advanced processor. The processor has more cores than anything else in the industry today. After the Sun acquisition Oracle has released 5 processors in 4 years and this is the 6th.  The S4 core  The M7 is built using the foundation of the S4 core. This is the next generation core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the Pipeline depth. Each core has its own fine grain power estimator that keeps the core within its power envelop in 250 nano-sec granularity. Each core also includes Software in Silicon features for Application Acceleration Support. Each core includes features to improve Application Data Integrity, with almost no performance loss. The core also allows using part of the Virtual Address to store meta-data.  User-Level Synchronization Instructions are also part of the S4 core. Each core has 16 KB Instruction and 16 KB Data L1 cache. The Core Clusters  The cores on the M7 chip are organized in sets of 4-core clusters. The core clusters share  L2 cache.  All four cores in the complex share 256 KB of 4 way set associative L2 Instruction Cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8 way set associative L2 Data Cache, with over 1/2 TB/s of throughput. With this innovative Core Cluster architecture, the M7 doubles core execution bandwidth. to maximize per-thread performance.  The Chip  Each  M7 chip has 8 sets of these core-clusters. The chip has 64 MB on-chip L3 cache. This L3 caches is shared among all the cores and is partitioned into 8 x 8 MB chunks. Each chunk is  8-way set associative cache. The aggregate bandwidth for the L3 cache on the chip is over 1.6TB/s. Each chip has 4 DDR4 memory controllers and can support upto 16 DDR4 DIMMs, allowing for 2 TB of RAM/chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers.  Each chip has 7 coherence links, allowing for 8 of these chips to be connected together gluelessly. Also 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips will have 1024 cores and 8192 threads and 64 TB of RAM.  Software in Silicon The M7 chip has many built in Application Accelerators in Silicon. These features will be exposed to our Software partners using the SPARC Accelerator Program.  The M7  has built-in logic to decompress data at the speed of memory access. This means that applications can directly work on compressed data in memory increasing the data access rates. The VA Masking feature allows the use of part of the virtual address to store meta-data.  Realtime Application Data Integrity The Realtime Application Data Integrity feature helps applications safeguard against invalid, stale memory reference and buffer overflows. The first 4-bits if the Pointer can be used to store a version number and this version number is also maintained in the memory & cache lines. 
When a pointer accesses memory the hardware checks to make sure the two versions match. A SEGV signal is raised when there is a mismatch. This feature can be used by the Database, applications and the OS.  M7 Database In-Memory Query Accelerator The M7 chip also includes In-Silicon Query Engines. These accelerate tasks that work on In-Memory Columnar Vectors. The Oracle In-Memory option stores data in Column Format. The M7 Query Engine can speed up In-Memory Format Conversion, Value and Range Comparisons and Set Membership lookups. This engine can work on Compressed data - this means not only are we accelerating the query performance but also increasing the memory bandwidth for queries.  SPARC Accelerated Program  At the Hotchips conference we also introduced the SPARC Accelerated Program to provide our partners and third-party developers access to all the goodness of the M7's SPARC Application Acceleration features. Please get in touch with us if you are interested in knowing more about this program. 

    Read the article

  • Persisting settings without using Options dialog in Visual Studio

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2013/11/02/persisting-settings-without-using-options-dialog-in-visual-studio.aspxIn one of my previous blog post we have seen persisting settings using Visual Studio's options dialog. Visual Studio options has many advantages in automatically persisting user options for you. However, during our latest Team Rooms extension development, we decided to provide our users; ability to use our preferences directly from Team Explorer. The main reason was that we had only one simple option for user and we thought it is cumbersome for user to go to Tools –> Options dialog to change this. Another reason was, we wanted to highlight this setting to user as soon as he is using our extension.   So if you are in such a scenario where you do not want to use VS options window, but still would like to persist the settings, this post will guide you through. Visual Studio SDK provides two ways to persist settings in your extensions. One is using DialogPage as shown in my previous post. Another way is to use by implementing IProfileManager interface which I will explain in this post. Please note that the class implementing IProfileManager should be independent class. This is because, VS instantiates this class during Tools –> Import and Export Settings. IProfileManager provides 2 different sets of methods (total 4 methods) to persist the settings. They are LoadSettingsFromXml and SaveSettingsToXml – Implement these methods to persist settings to disk from VS settings storage. The VS will persist your settings along with other options to disk. LoadSettingsFromStorage and SaveSettingsToStorage – Implement these methods to persist settings to local storage, usually it be registry. VS calls LoadSettingsFromStorage method when it is initializing the package too. We are going to use the 2nd set of methods for this example. First, we are creating a separate class file called UserOptions.cs. Please note that, we also need to implement IComponent, which can be done by inheriting Component along with IProfileManager. [ComVisible(true)] [Guid("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")] public class UserOptions : Component, IProfileManager { private const string SUBKEY_NAME = "TForVS2013"; private const string TRAY_NOTIFICATIONS_STRING = "TrayNotifications"; ... } Define the property so that it can be used to set and get from other classes. public bool TrayNotifications { get; set; } Implement the members of IProfileManager. public void LoadSettingsFromStorage() { RegistryKey reg = null; try { using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME)) { if (reg != null) { // Key already exists, so just update this setting. TrayNotifications = Convert.ToBoolean(reg.GetValue(TRAY_NOTIFICATIONS_STRING, true)); } } } catch (TeamRoomException exception) { TrayNotifications = true; ExceptionReporting.Report(exception); } finally { if (reg != null) { reg.Close(); } } } public void LoadSettingsFromXml(IVsSettingsReader reader) { reader.ReadSettingBoolean(TRAY_NOTIFICATIONS_STRING, out _isTrayNotificationsEnabled); TrayNotifications = (_isTrayNotificationsEnabled == 1); } public void ResetSettings() { } public void SaveSettingsToStorage() { RegistryKey reg = null; try { using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME, true)) { if (reg != null) { // Key already exists, so just update this setting. 
reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications); } else { reg = Package.UserRegistryRoot.CreateSubKey(SUBKEY_NAME); reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications); } } } catch (TeamRoomException exception) { ExceptionReporting.Report(exception); } finally { if (reg != null) { reg.Close(); } } } public void SaveSettingsToXml(IVsSettingsWriter writer) { writer.WriteSettingBoolean(TRAY_NOTIFICATIONS_STRING, TrayNotifications ? 1 : 0); } Let me elaborate on the method implementation. The Package class provides UserRegistryRoot (which is HKCU\Microsoft\VisualStudio\12.0 for VS2013) property which can be used to create and read the registry keys. So basically, in the methods above, I am checking if the registry key exists already and if not, I simply create it. Also, in case there is an exception I return the default values. If the key already exists, I update the value. Also, note that you need to make sure that you close the key while exiting from the method. Very simple right? Accessing and settings is simple too. We just need to use the exposed property. UserOptions.TrayNotifications = true; UserOptions.SaveSettingsToStorage(); Reading settings is as simple as reading a property. UserOptions.LoadSettingsFromStorage(); var trayNotifications = UserOptions.TrayNotifications; Lastly, the most important step. We need to tell Visual Studio shell that our package exposes options using the UserOptions class. For this we need to decorate our package class with ProvideProfile attribute as below. [ProvideProfile(typeof(UserOptions), "TForVS2013", "TeamRooms", 110, 110, false, DescriptionResourceID = 401)] public sealed class TeamRooms : Microsoft.VisualStudio.Shell.Package { ... } That's it. If everything is alright, once you run the package you will also see your options appearing in "Import Export settings" window, which allows you to export your options.

    Read the article

  • List of Commonly Used Value Types in XNA Games

    - by Michael B. McLaughlin
    Most XNA programmers are concerned about generating garbage. More specifically about allocating GC-managed memory (GC stands for “garbage collector” and is both the name of the class that provides access to the garbage collector and an acronym for the garbage collector (as a concept) itself). Two of the major target platforms for XNA (Windows Phone 7 and Xbox 360) use variants of the .NET Compact Framework. On both variants, the GC runs under various circumstances (Windows Phone 7 and Xbox 360). Of concern to XNA programmers is the fact that it runs automatically after a fixed amount of GC-managed memory has been allocated (currently 1MB on both systems). Many beginning XNA programmers are unaware of what constitutes GC-managed memory, though. So here’s a quick overview. In .NET, there are two different “types” of types: value types and reference types. Only reference types are managed by the garbage collector. Value types are not managed by the garbage collector and are instead managed in other ways that are implementation dependent. For purposes of XNA programming, the important point is that they are not managed by the GC and thus do not, by themselves, increment that internal 1 MB allocation counter. (n.b. Structs are value types. If you have a struct that has a reference type as a member, then that reference type, when instantiated, will still be allocated in the GC-managed memory and will thus count against the 1 MB allocation counter. Putting it in a struct doesn’t change the fact that it gets allocated on the GC heap, but the struct itself is created outside of the GC’s purview). Both value types and reference types use the keyword ‘new’ to allocate a new instance of them. Sometimes this keyword is hidden by a method which creates new instances for you, e.g. XmlReader.Create. But the important thing to determine is whether or not you are dealing with a value types or a reference type. If it’s a value type, you can use the ‘new’ keyword to allocate new instances of that type without incrementing the GC allocation counter (except as above where it’s a struct with a reference type in it that is allocated by the constructor, but there are no .NET Framework or XNA Framework value types that do this so it would have to be a struct you created or that was in some third-party library you were using for that to even become an issue). The following is a list of most all of value types you are likely to use in a generic XNA game: AudioCategory (used with XACT; not available on WP7) AvatarExpression (Xbox 360 only, but exposed on Windows to ease Xbox development) bool BoundingBox BoundingSphere byte char Color DateTime decimal double any enum (System.Enum itself is a class, but all enums are value types such that there are no GC allocations for enums) float GamePadButtons GamePadCapabilities GamePadDPad GamePadState GamePadThumbSticks GamePadTriggers GestureSample int IntPtr (rarely but occasionally used in XNA) KeyboardState long Matrix MouseState nullable structs (anytime you see, e.g. int? something, that ‘?’ denotes a nullable struct, also called a nullable type) Plane Point Quaternion Ray Rectangle RenderTargetBinding sbyte (though I’ve never seen it used since most people would just use a short) short TimeSpan TouchCollection TouchLocation TouchPanelCapabilities uint ulong ushort Vector2 Vector3 Vector4 VertexBufferBinding VertexElement VertexPositionColor VertexPositionColorTexture VertexPositionNormalTexture VertexPositionTexture Viewport So there you have it. 
That’s not quite a complete list, mind you. For example: There are various structs in the .NET framework you might make use of. I left out everything from the Microsoft.Xna.Framework.Graphics.PackedVector namespace, since everything in there ventures into the realm of advanced XNA programming anyway (n.b. every single instantiable thing in that namespace is a struct and thus a value type; there are also two interfaces but interfaces cannot be instantiated at all and thus don’t figure in to this discussion). There are so many enums you’re likely to use (PlayerIndex, SpriteSortMode, SpriteEffects, SurfaceFormat, etc.) that including them would’ve flooded the list and reduced its utility. So I went with “any enum” and trust that you can figure out what the enums are (and it’s rare to use ‘new’ with an enum anyway). That list also doesn’t include any of the pre-defined static instances of some of the classes (e.g. BlendState.AlphaBlend, BlendState.Opaque, etc.) which are already allocated such that using them doesn’t cause any new allocations and therefore doesn’t increase that 1 MB counter. That list also has a few misleading things. VertexElement, VertexPositionColor, and all the other vertex types are structs. But you’re only likely to ever use them as an array (for use with VertexBuffer or DynamicVertexBuffer), and all arrays are reference types (even arrays of value types such as VertexPositionColor[ ] or int[ ]). * So that’s it for now. The note below may be a bit confusing (it deals with how the GC works and how arrays are managed in .NET). If so, you can probably safely ignore it for now but feel free to ask any questions regardless. * Arrays of value types (where the value type doesn’t contain any reference type members) are much faster for the GC to examine than arrays of reference types, so there is a definite benefit to using arrays of value types where it makes sense. But creating arrays of value types does cause the GC’s allocation counter to increase. Indeed, allocating a large array of a value type is one of the quickest ways to increment the allocation counter since a .NET array is a sequential block of memory. An array of reference types is just a sequential block of references (typically 4 bytes each) while an array of value types is a sequential block of instances of that type. So for an array of Vector3s it would be 12 bytes each since each float is 4 bytes and there are 3 in a Vector3; for an array of VertexPositionNormalTexture structs it would typically be 32 bytes each since it has two Vector3s and a Vector2. (Note that there are a few additional bytes taken up in the creation of an array, typically 12 but sometimes 16 or possibly even more, which depend on the implementation details of the array type on the particular platform the code is running on).
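A short illustration of the distinction, assuming a project that references the Microsoft.Xna.Framework assembly (the class and method names are just a container for the example):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class AllocationExamples
{
    // Illustrates which uses of 'new' count toward the GC allocation trigger.
    public static void Demonstrate()
    {
        Vector2 velocity = new Vector2(1f, 2f);          // value type: not GC-managed
        Rectangle bounds = new Rectangle(0, 0, 64, 64);  // value type: not GC-managed

        List<Vector2> path = new List<Vector2>();        // reference type: GC allocation
        Vector2[] waypoints = new Vector2[256];          // an array is a reference type,
                                                         // so this is also a GC allocation
    }
}

The last line is exactly the case the footnote above describes: the Vector2 instances live inline inside the array, but the array object itself is allocated on the GC heap and counts toward the 1 MB trigger.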

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public or it can be used for your internal staff and have it password protected and/or locked down by IP address. This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer. A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that. Getting Started First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic. After you’ve created your site then open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside you network to the world, so just be aware of what you’re doing and why. The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here. The template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this. Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. 
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site. That’s it! Well, there is a lot more that you can do, this but will give you the base configuration. You can now visit www.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed. One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm if that’s the issue. Here’s a link with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
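For reference, the two rules the template creates end up in the site's web.config looking roughly like the fragment below, using the example addresses from this article. Attribute values can vary slightly between URL Rewrite versions, so treat it as a sketch rather than an exact dump of the generated configuration.

<system.webServer>
  <rewrite>
    <rules>
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
      </rule>
    </rules>
    <outboundRules>
      <preConditions>
        <preCondition name="ResponseIsHtml1">
          <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
        </preCondition>
      </preConditions>
      <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
        <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
        <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>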

    Read the article
