Search Results

Search found 2670 results on 107 pages for 'review'.


  • Books/resources for help with extracting useful feedback from clients?

    - by Eric
    I'm a web application developer looking for a book or similar resource that can help with effectively communicating with clients who have a very vague or unrealistic idea of what they'd like out of the work I'm doing. Some fictional, though not by much, examples of situations: clients who are not familiar with using the Internet and insist on features that are not even remotely feasible (e.g. time travel); clients who are unable to express accurately what they're looking for (e.g. "I know that's what I said and signed off on, but it's not what I meant"); clients who refuse to attend meetings or review sessions to answer questions or define requirements (which makes any agile development impossible). For the most part, I'm trying to find best practices for handling these kinds of situations at a team level: the best ways to address serious project roadblocks effectively without sounding like a total jerk. Any recommendations for reading material on this topic?

    Read the article

  • Method for demonstrating iPad application in online meetings

    - by competent_tech
    We have recently developed an iPad application and now need to start demonstrating it to customers and prospects as part of our overall product suite during webinars. As part of our Agile methodology, we also need to periodically review the application with key customers without having to distribute it since the application is not a standalone application and requires a connection to web services installed at each customer site. We have searched high and low for any solution that doesn't involve rooting the device but have been unable to find one. The most common suggestion seems to be to point a webcam at the device, but that comes across as very unprofessional. I know that there are VGA out adapters that can be plugged into the iPad and we have used these to present through a projector when the customer is physically present, but this is a relatively rare occurrence. Perhaps there are solutions that we are unaware of that can be used to send VGA output back into a desktop device for screen sharing?

    Read the article

  • Server Error in '/' Application.

    - by Surya sasidhar
    Hi, in my application, when I click one of the buttons on the page, it gives an error like this: Could not find a part of the path 'V:\User\EnterTrailorVideos\luck.swf'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.DirectoryNotFoundException: Could not find a part of the path 'V:\User\EnterTrailorVideos\luck.swf'. There is no folder named "EnterTrailorVideos" in my project, but the error still refers to it. Can you help me?
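
    A minimal defensive sketch in VB.NET, assuming the .swf is meant to live under an application-relative folder; the handler and label names are illustrative (the relative path simply reuses the folder name from the error message):

        ' Hypothetical button handler: resolve an app-relative path instead of
        ' relying on a hard-coded drive letter such as V:\.
        Protected Sub PlayButton_Click(ByVal sender As Object, ByVal e As EventArgs) Handles PlayButton.Click
            Dim physicalPath As String = Server.MapPath("~/EnterTrailorVideos/luck.swf")

            If Not System.IO.File.Exists(physicalPath) Then
                ' Fail gracefully rather than letting DirectoryNotFoundException
                ' surface as a server error.
                ErrorLabel.Text = "Video file not found: " & physicalPath
                Return
            End If

            Response.ContentType = "application/x-shockwave-flash"
            Response.WriteFile(physicalPath)
        End Sub

    Logging the mapped path this way will also reveal where the unexpected 'V:\User\...' location is coming from, often a stale absolute path stored in code, a database, or web.config.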

    Read the article

  • website inserting pics

    - by onfire4JesusCollins
    Hello, I am getting this error message: Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: Line 7: Line 8: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Line 9: UserIdValue.Text = Membership.GetUser().ProviderUserKey.ToString() Line 10: cannotUploadImageMessage.Visible = False Line 11: End Sub Can someone help me with this?
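
    Membership.GetUser() returns Nothing when the request is anonymous, which is the usual cause of a NullReferenceException on that line. A minimal guarded version of the Page_Load shown above (the redirect target is illustrative):

        ' Requires Imports System.Web.Security in the code-behind.
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            ' GetUser() returns Nothing for anonymous requests, so check before dereferencing.
            Dim currentUser As MembershipUser = Membership.GetUser()

            If currentUser Is Nothing Then
                ' Illustrative handling: send anonymous visitors to a login page.
                Response.Redirect("~/Login.aspx")
                Return
            End If

            UserIdValue.Text = currentUser.ProviderUserKey.ToString()
            cannotUploadImageMessage.Visible = False
        End Sub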

    Read the article

  • visual studio asp.net mvc, changing target framework

    - by mike
    Hello there. I have just changed the target framework for an ASP.NET MVC project from .NET 4 to 3.5, and I keep getting this error when I try to debug or go to any controller action: 'HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.' Any ideas why I keep getting this error? As soon as I revert back to .NET 4.0, everything is fine. P.S. It could be an issue with routing, as the Global.asax Application_Start() is never hit, so the route entries are not registered. Thanks
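
    Since Application_Start() is not being hit, no routes are registered and every controller URL will 404; that usually points to the MVC handler and assembly registrations in web.config not matching the retargeted runtime, rather than to the routes themselves. As a sanity check, this is the standard route registration that should execute once the wiring is correct, shown in VB.NET for illustration (the original project may well be C#):

        ' Global.asax.vb - default ASP.NET MVC route registration.
        Imports System.Web.Mvc
        Imports System.Web.Routing

        Public Class MvcApplication
            Inherits System.Web.HttpApplication

            Public Shared Sub RegisterRoutes(ByVal routes As RouteCollection)
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}")

                routes.MapRoute( _
                    "Default", _
                    "{controller}/{action}/{id}", _
                    New With {.controller = "Home", .action = "Index", .id = UrlParameter.Optional})
            End Sub

            Sub Application_Start()
                ' If a breakpoint here is never hit after retargeting to 3.5, the MVC
                ' handler is not wired up for the older runtime, so fix web.config first.
                RegisterRoutes(RouteTable.Routes)
            End Sub
        End Class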

    Read the article

  • SQL Developer Database Diff – Compare Objects From Multiple Schemas

    - by thatjeffsmith
    Ever wonder why Database Diff isn’t called Schema Diff? One reason is that SQL Developer allows you to select objects from more than one schema in the ‘Source’ connection for the compare. Simply use the ‘More’ dialog view and select as many tables from as many different schemas as you require. Now, before you get around to testing this – as you should never believe what I say, trust but verify – two things you need to know: I’m using SQL Developer version 3.2, and on the initial screen you need to use the ‘Maintain’ option. Maintain tells SQL Developer to use the schema designation in the source connection to find the corresponding object in the destination schema. Choose ‘Maintain’ if you want to compare objects in the same schema in the destination but don’t have the user login for that schema. So after you’ve selected your databases, your diff preferences, and your objects, you’re ready to perform the compare and review your results. The Diff Report: notice the highlighted text; SQL Developer is ‘maintaining’ the schema context from the two databases. Short and sweet. That’s pretty much all there is to doing a compare with SQL Developer when multiple schemas are involved. You may have noticed in some recent posts that my editor screenshots have a ‘green screen’ look and feel to them. What’s with the black background in your editors? In the SQL Developer preferences, you can set your editor color schemes. I started with the ‘Twilight’ scheme (team Jacob, in case you’re wondering) and then customized it further by going with a default green font color. You could go pretty crazy in here, and I’m assuming 90% of you couldn’t care less and will just stick with the original. But for those of you who are particular about your IDE styling – go crazy! (See: SQL Developer Editor Display Preferences.)

    Read the article

  • Intellitrace bug causes “Operation could destabilize the runtime” exception

    - by Magnus Karlsson
    We cant use it when we use simplemembership to handle external authorizations.   Server Error in '/' Application. Operation could destabilize the runtime. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Security.VerificationException: Operation could destabilize the runtime. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [VerificationException: Operation could destabilize the runtime.] DotNetOpenAuth.OpenId.Messages.IndirectSignedResponse.GetSignedMessageParts(Channel channel) +943 DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.GetExtensionsDictionary(IProtocolMessage message, Boolean ignoreUnsigned) +282 DotNetOpenAuth.OpenId.ChannelElements.<GetExtensions>d__a.MoveNext() +279 DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.ProcessIncomingMessage(IProtocolMessage message) +594 DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) +933 DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) +326 DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestBase httpRequest) +1343 DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestBase httpRequestInfo) +241 DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse() +361 DotNetOpenAuth.AspNet.Clients.OpenIdClient.VerifyAuthentication(HttpContextBase context) +136 DotNetOpenAuth.AspNet.OpenAuthSecurityManager.VerifyAuthentication(String returnUrl) +984 Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthenticationCore(HttpContextBase context, String returnUrl) +333 Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthentication(String returnUrl) +192 PrioMvcWebRole.Controllers.AccountController.ExternalLoginCallback(String returnUrl) in c:hiddenforyou lambda_method(Closure , ControllerBase , Object[] ) +127 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +250 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39 System.Web.Mvc.Async.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33() +87 System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439 System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439 System.Web.Mvc.Async.<>c__DisplayClass37.<BeginInvokeActionMethodWithFilters>b__36(IAsyncResult asyncResult) +15 System.Web.Mvc.Async.<>c__DisplayClass2a.<BeginInvokeAction>b__20() +34 System.Web.Mvc.Async.<>c__DisplayClass25.<BeginInvokeAction>b__22(IAsyncResult asyncResult) +221 System.Web.Mvc.<>c__DisplayClass1d.<BeginExecuteCore>b__18(IAsyncResult asyncResult) +28 System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15 System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult) +42 System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__3(IAsyncResult asyncResult) +42 System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15 
System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +523 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +176 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17929

    Read the article

  • Stuck at the STARTUP [closed]

    - by Tarik Setia
    I started with "Getting started with asp mvc4 tutorial". I just created the project and when I pressed F5 I got this: Server Error in '/' Application. -------------------------------------------------------------------------------- Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [TypeLoadException: Could not load type 'System.Web.WebPages.DisplayModes' from assembly 'System.Web.WebPages, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.] System.Web.Mvc.VirtualPathProviderViewEngine.GetPath(ControllerContext controllerContext, String[] locations, String[] areaLocations, String locationsPropertyName, String name, String controllerName, String cacheKeyPrefix, Boolean useCache, String[]& searchedLocations) +0 System.Web.Mvc.VirtualPathProviderViewEngine.FindView(ControllerContext controllerContext, String viewName, String masterName, Boolean useCache) +315 System.Web.Mvc.c__DisplayClassc.b__a(IViewEngine e) +68 System.Web.Mvc.ViewEngineCollection.Find(Func`2 lookup, Boolean trackSearchedPaths) +182 System.Web.Mvc.ViewEngineCollection.Find(Func`2 cacheLocator, Func`2 locator) +67 System.Web.Mvc.ViewEngineCollection.FindView(ControllerContext controllerContext, String viewName, String masterName) +329 System.Web.Mvc.ViewResult.FindView(ControllerContext context) +135 System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +230 System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) +39 System.Web.Mvc.c__DisplayClass1c.b__19() +74 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation) +388 System.Web.Mvc.c__DisplayClass1e.b__1b() +72 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +303 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +844 System.Web.Mvc.Controller.ExecuteCore() +130 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +229 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39 System.Web.Mvc.c__DisplayClassb.b__5() +71 System.Web.Mvc.Async.c__DisplayClass1.b__0() +44 System.Web.Mvc.Async.c__DisplayClass8`1.b__7(IAsyncResult _) +42 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +152 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +59 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40 System.Web.Mvc.c__DisplayClasse.b__d() +75 System.Web.Mvc.SecurityUtil.b__0(Action f) +31 System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +61 
System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +118 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +10303829 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +178 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17020
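
    This particular TypeLoadException commonly indicates that pre-release and final builds of the MVC 4 / System.Web.WebPages 2.0 assemblies are being mixed, so reinstalling the ASP.NET MVC 4 NuGet packages or clearing stale copies from the bin folder is the usual suggestion. A quick diagnostic sketch, in VB.NET for illustration, that reports which System.Web.WebPages assembly the application actually loaded; drop it temporarily into a controller action or Application_Start:

        ' Hypothetical diagnostic helper: list the loaded System.Web.WebPages
        ' assemblies with their versions and on-disk locations.
        Imports System
        Imports System.Linq

        Public Module WebPagesDiagnostics
            Public Function LoadedWebPagesAssemblies() As String
                Dim entries = AppDomain.CurrentDomain.GetAssemblies() _
                    .Where(Function(a) a.GetName().Name = "System.Web.WebPages") _
                    .Select(Function(a) a.GetName().Version.ToString() & " @ " & a.Location)

                ' Two competing copies, or an unexpected location, points to the mismatch.
                Return String.Join(Environment.NewLine, entries.ToArray())
            End Function
        End Module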

    Read the article

  • Example WLST Script to Obtain JDBC and JTA MBean Values

    - by Daniel Mortimer
    Introduction: Following on from the blog entry "Get an Offline or Online WebLogic Domain Summary Using WLST!", I have had a request to create a smaller example which only collects a selection of JDBC (System Resource) and JTA configuration and runtime MBean values. So, here it is. Download Sample Script: You can grab the sample script by clicking here. Instructions to Run: 1. After download, extract the zip to the machine hosting the WebLogic environment. You should have three directories (output, Sample_Output, scripts) along with a readme.txt. 2. In the scripts directory, find the start wrapper script startWLSTJDBCSummarizer.sh (Unix) or startWLSTJDBCSummarizer.cmd (MS Windows). Open the appropriate file in an editor and change the environment variable settings to suit your system. Example - startWLSTDomainSummarizer.cmd: set WL_HOME=D:\product\FMW11g\wlserver_10.3 set DOMAIN_HOME=D:\product\FMW11g\user_projects\domains\MyDomain set WLST_OUTPUT_PATH=D:\WLSTDomainSummarizer\output\ set WLST_OUTPUT_FILE=WLST_JDBC_Summary_Via_MBeans.html call "%WL_HOME%\common\bin\wlst.cmd" WLS_JDBC_Summary_Online.py Note: The WLST_OUTPUT_PATH directory value must have a trailing slash. If there is no trailing slash, the script will error and not continue. 3. Run the shell / command line wrapper script. It should launch WLST and kick off "WLS_JDBC_Summary_Online.py". This will hit you with some prompts, e.g.: Is your domain Admin Server up and running and do you have the connection details? (Y /N ): Y Enter connection URL to Admin Server e.g. t3://mymachine.acme.com:7001 : t3://localhost:7001 Enter weblogic username: weblogic Enter weblogic username password (function prompt 1): welcome1 (Note: the value typed in for the password will not be echoed back to the console.) 4. If the scripts run successfully, you should get an HTML summary in the specified output directory. See example screenshots below: Screenshot 1 - JDBC System Resource Tab Page; Screenshot 2 - JTA Tab Page. 5. For the HTML to render correctly, ensure the .js and .css files provided (review the output directory created by the zip file extraction) are accessible. For example, to view the HTML locally (without using a web server), place the HTML output, jquery-ui.js, spry.js and wlstsummarizer.css in the same directory. Disclaimer: This is a sample script. I have tested it against WebLogic Server 10.3.6 domains on MS Windows and Unix. I cannot guarantee that the script will run error free or produce the expected output on your system. If you have any feedback, add a comment to the blog and I will endeavour to fix any problems with my WLST code. Credits: jQuery: http://jquery.com/ ; Spry (Adobe): https://github.com/adobe/Spry ; http://www.red-team-design.com/cool-headings-with-pseudo-elements

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but method is same, technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of pubic interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining Journalists, use of coding and web development for Journalists, and Journalists busted for hacking (Murdock). Info gathering by investigative journalists include Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow. California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonomity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility). mostly refers to blind users and screenreaders, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about web browser you use and are bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executible format, embedded flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here's some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executible webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from same source Expose "better user experience" myth Keep your corner of Internet parsable Remember "Halting Problem"—recognize false solutions (checking and verifying tools) Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (a IT guy), 960 hours/year was spent patching POSs in 850 restaurants. Often applying some patches make no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breeches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide:on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if they pass their remote scanning. The problem is a removal of trustmarks act as flags that you're vulnerable. Easy to view status change by viewing McAfee list on website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If their certification image changes to a 1x1 pixel image, then they are longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is change in TrustGuard status is a flag for hackers to attack your site. Entire idea of seals is silly—you're raising a flag saying if you're vulnerable.

    Read the article

  • It’s time that you ought to know what you don’t know

    - by fatherjack
    There is a famous quote about unknown unknowns and known knowns and so on, but I’ll let you review that if you are interested. What I am worried about is that there are things going on in your environment that you ought to know about, that indeed you have asked to be told about, but you are not getting the information. When you schedule a SQL Agent job you can set it to send an email to an inbox monitored by someone who needs to know and can do something about it. However, what happens if the email process isn’t successful? Check your servers with this: USE [msdb] GO /* This code selects the top 10 most recent SQL Agent jobs that failed to complete successfully and where the email notification failed too. Jonathan Allen Jul 2012 */ DECLARE @Date DATETIME SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101') SELECT TOP 10 [s].[name] , [sjh].[step_name] , [sjh].[sql_message_id] , [sjh].[sql_severity] , [sjh].[message] , [sjh].[run_date] , [sjh].[run_time] , [sjh].[run_duration] , [sjh].[operator_id_emailed] , [sjh].[operator_id_netsent] , [sjh].[operator_id_paged] , [sjh].[retries_attempted] FROM [dbo].[sysjobhistory] AS sjh INNER JOIN [dbo].[sysjobs] AS s ON [sjh].[job_id] = [s].[job_id] WHERE EXISTS ( SELECT * FROM [dbo].[sysjobs] AS s INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id] WHERE [sjh].[job_id] = [s2].[job_id] AND [s2].[message] LIKE '%failed to notify%' AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @date AND [s2].[run_status] = 0 ) AND sjh.[run_status] = 0 AND sjh.[step_id] != 0 AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [run_date])) >= @date ORDER BY [sjh].[run_date] DESC , [sjh].[run_time] DESC GO USE [msdb] GO /* This code summarises details of SQL Agent jobs that failed to complete successfully and where the email notification failed too. Jonathan Allen Jul 2012 */ DECLARE @Date DATETIME SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101') SELECT [s].name , [s2].[step_id] , CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) AS [rundate] , COUNT(*) AS [execution count] FROM [dbo].[sysjobs] AS s INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id] WHERE [s2].[message] LIKE '%failed to notify%' AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @date AND [s2].[run_status] = 0 GROUP BY name , [s2].[step_id] , [s2].[run_date] ORDER BY [s2].[run_date] DESC These two result sets will show whether any SQL Agent jobs on your servers failed and then also failed to send the email about the failure. I hope it’s of use to you. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • ASP.NET Dynamic Data Deployment Error

    - by rajbk
    You have an ASP.NET 3.5 dynamic data website that works great on your local box. When you deploy it to your production machine and turn on debug, you get the YSD Server Error in '/MyPath/MyApp' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Unknown server tag 'asp:DynamicDataManager'. Source Error: Line 5:  Line 6:  <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> Line 7:      <asp:DynamicDataManager ID="DynamicDataManager1" runat="server" AutoLoadForeignKeys="true" /> Line 8:  Line 9:      <h2><%= table.DisplayName%></h2> Probable Causes The server does not have .NET 3.5 SP1, which includes ASP.NET Dynamic Data, installed. Download it here. The third tagPrefix shown below is missing from web.config <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.DynamicData" assembly="System.Web.DynamicData, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls></pages>     Hope that helps!

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities, but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know, there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in, or not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will flow through this interface. The Oracle OpenStack Tech Preview uses VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment. This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridges and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation as well as simply help to organize a complicated network setup. Using the Oracle OpenStack Tech Preview we are using the latest Unbreakable Enterprise Kernel R3 (UEK3), this kernel provides a complete support for netns. Let's see how namespaces work through couple of examples to control network namespaces we use the ip netns command: Defining a new namespace: # ip netns add my-ns # ip netns list my-ns As mentioned the namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command for example running the ifconfig command: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:16436 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) We can run every command in the namespace context, this is especially useful for debug using tcpdump command, we can ping or ssh or define iptables all within the namespace. Connecting the namespace to the outside world: There are various ways to connect into a namespaces and between namespaces we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces and then we can add those interfaces to namespace. So first let's add a bridge to OVS: # ovs-vsctl add-br my-bridge Now let's add a port on the OVS and make it internal: # ovs-vsctl add-port my-bridge my-port # ovs-vsctl set Interface my-port type=internal And let's connect it into the namespace: # ip link set my-port netns my-ns Looking inside the namespace: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:65536 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21           BROADCAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) Now we can add more ports to the OVS bridge and connect it to other namespaces or other device like physical interfaces. Neutron is using network namespaces to implement network services such as DCHP, routing, gateway, firewall, load balance and more. In the next post we will go into this in further details. Linux Bridge and veth pairs Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is for security groups’ enforcement. Security groups are implemented using iptables and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. Veth pairs are simply a virtual wire and so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge or simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example is using regular Linux server and not an OpenStack node: Creating a veth pair, note that we define names for both ends: # ip link add veth0 type veth peer name veth1 # ifconfig -a . . veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) . . To make the example more meaningful this we will create the following setup: veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3 eth3 – a physical interface with no IP on it, connected to a private network eth2 – a physical interface on the remote Linux box connected to the private network and configured with the IP of 50.50.50.1 Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up: # brctl addbr br-eth3 # brctl addif br-eth3 eth3 # brctl addif br-eth3 veth1 # brctl show bridge name     bridge id               STP enabled     interfaces br-eth3         8000.00505682e7f6       no              eth3                                                         veth1 # ifconfig veth0 50.50.50.50 # ping -I veth0 50.50.50.51 PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data. 64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms 64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms When the naming is not as obvious as the previous example and we don't know who are the paired veth interfaces we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using ip link command, for example: # ethtool -S veth1 NIC statistics: peer_ifindex: 12 # ip link . . 12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 Summary That’s all for now, we quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release. 11gR2 had four major focus areas: a more simplified and customizable user experience; support for cloud, mobile, and social applications; extreme scalability; and a clear upgrade path. For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process. The plan should include initiation and discovery, assessment and analysis, future state architecture, review and collaboration, and gap analysis. To help you better understand your upgrade choices, SDG, an Oracle partner, has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series, Santosh Kumar Singh from SDG discusses the proper steps that should be taken from planning through post-implementation to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com. In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through with an OIM migration. If you have not yet taken the opportunity, please read the first in this series, which discusses the migration approach, methodology, and tools to consider when planning a migration from SIM to OIM. Read the white paper for Part 1: Download Part 1 from SDGC.com. About the Author: Santosh Kumar Singh, Identity and Access Management (IAM) Practice Leader. Santosh, in his capacity as SDG Identity and Access Management (IAM) Practice Leader, has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement delivery for this practice. He brings over 12 years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) applications, federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprint. Santosh possesses strong architecture and implementation expertise in all of these areas and has repeatedly led teams in successfully deploying complex technical solutions. About SDG: SDG Corporation empowers forward-thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit its clients' needs, faster time-to-market with its pre-built solutions and frameworks, a broad-based foundation of domain experts, and deep program management expertise. (www.sdgc.com)

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can't include a backslash in the name of a SharePoint 2010 content type. In fact, there are several illegal characters: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    What, you didn't know that after entering one of these characters in the name? Is it because you only saw a generic error screen? Oh, that's right... you need to turn off custom errors in the layouts folder (see this blog post for details), and you'll also need to turn them off for the web application. Once you do that, you'll see the real error. I wonder why the SharePoint team just doesn't let the user know that the content type name contains illegal characters before the user hits the Create button.
    Here's a copy of the complete error (for the search engines):
    Server Error in '/' Application.
    --------------------------------------------------------------------------------
    The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.]
       Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
       Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
       Microsoft.SharePoint.SPContentType.set_Name(String value) +151
       Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
       Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
       Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
       System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
       System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
       System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981
    --------------------------------------------------------------------------------
    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
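    Since SharePoint only surfaces the naming rule after the fact, one option is to validate the name yourself before calling the API. Below is a small illustrative Python pre-check (not the SharePoint API; the character list simply mirrors the rule quoted above, and the function name is made up):

    # Characters SharePoint rejects in a content type name, per the error above.
    ILLEGAL = set('\\/:*?"#%<>{}|~&,\t')

    def is_valid_content_type_name(name: str) -> bool:
        # Hypothetical pre-check mirroring the documented rule; not exhaustive.
        if any(ch in ILLEGAL for ch in name):
            return False
        if '..' in name:          # two consecutive periods are also rejected
            return False
        return True

    print(is_valid_content_type_name('asdfadsf\\asdfasf'))  # False - contains a backslash
    print(is_valid_content_type_name('Project Document'))   # True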

    Read the article

  • Need help with software licensing? Read on…

    - by juanlarios
    Figuring out which software licensing options best suit your needs while being cost-effective can be confusing. Some businesses end up making their purchases through retail stores, which means they miss out on volume licensing opportunities; others may unknowingly be using unlicensed software, which puts their business at risk. So let me help you make the best decision for your situation.
    You may want to review this blog post that lays out licensing basics for any organization that needs to license software for more than 5 but fewer than 250 devices or users. It details the different ways you can buy a license and what choices are available for volume licensing, which can give you pricing advantages and provide flexible options for your business.
    As technology evolves and more organizations move to online services such as Microsoft Office 365, Microsoft Dynamics CRM Online, Windows Azure Platform, Windows Intune and others, it's important to understand how to purchase, activate and use online service subscriptions to get the most out of your investment. Once purchased through a volume licensing agreement or the Microsoft Online Subscription Program, these services can be managed through web portals:
    · Online Services Customer Portal (Microsoft Office 365, Microsoft Intune)
    · Dynamics CRM Online Customer Portal (Microsoft Dynamics CRM Online)
    · Windows Azure Customer Portal (Windows Azure Platform)
    · Volume Licensing Service Center (other services)
    Learn more >>
    Licensing Resources:
    The SMB How to Buy Portal – receive clear purchasing and licensing information that is easy to understand in order to help facilitate quick decision making.
    Microsoft License Advisor (MLA) – Use MLA to research Microsoft Volume Licensing products, programs and pricing.
    Volume Licensing Service Center (VLSC) – Already have a volume license? Use the VLSC to get easy access to all your licensing information in one location.
    Online Services – licensing information for off-premise options.
    Windows 7 Comparison – Compare versions of Windows and find out which one is right for you.
    Office 2010 Comparison – Find out which Office suite is right for you.
    Licensing FAQs – Frequently asked questions about product licensing.
    Additional Resources You May Find Useful:
    · TechNet Evaluation Center – Try some of our latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    · Springboard Series – Your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    · AlignIT Manager Tech Talk Series – A monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.

    Read the article

  • How to create Adhoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle document-centric approval/rejection processes to ensure the right set of assets go into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a criteria workflow is an automatic workflow process triggered by certain metadata attributes (the Security Group and one of the metadata fields), a basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on the whiteboard can be translated into the UCM workflow process, and there are concepts like sub-workflows, tokens, events and idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation Guide that explains the concepts in detail.
    One of the standard queries I come across is how to handle adhoc workflows, where at the time of contributing the content the contributors would like to decide on the workflow to be initiated and the users to be picked for approval in each step; hence this post.
    This is what I want to achieve: I would like to display on my Checkin screen the kind of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.
    1. Create a criteria workflow called One_Step_Review.
    2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
    3. Create two steps in the workflow created (One_Step_Review).
    4. Edit Step1 of the workflow, add the StepOne token and select the review permission.
    5. In the exit conditions tab, require at least one reviewer.
    6. In the events tab, add an entry event <$wfSet("OriginalAuthor",dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
    7. Add the second step, Notify_Author, to the workflow.
    8. Add the OriginalAuthor token to the above step.
    9. Enable the workflow.
    10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the list of values.
    11. Create another metadata field WorkflowStepOne with its option list configured to the Users view. This shall display all the users registered with UCM, which, when selected, shall be associated with the tokens defined above for the workflow.
    As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values to the tokens so that the contributors can decide who can approve their content.

    Read the article

  • PERT shows relationships between defined tasks in a project without taking into consideration a time line

    The program evaluation and review technique (PERT) shows relationships between defined tasks in a project without taking a timeline into consideration. This chart is an excellent way to identify dependencies of tasks based on other tasks, and it allows project managers to identify the critical path of a project in order to minimize any time delays to the project. In his article "Pros & Cons of the PERT/CPM Method", Craig Borysowich states the following advantages and disadvantages:
    "PERT/CPM has the following advantages: A PERT/CPM chart explicitly defines and makes visible dependencies (precedence relationships) between the WBS elements, PERT/CPM facilitates identification of the critical path and makes this visible, PERT/CPM facilitates identification of early start, late start, and slack for each activity, PERT/CPM provides for potentially reduced project duration due to better understanding of dependencies leading to improved overlapping of activities and tasks where feasible.
    PERT/CPM has the following disadvantages: There can be potentially hundreds or thousands of activities and individual dependency relationships, The network charts tend to be large and unwieldy requiring several pages to print and requiring special size paper, The lack of a timeframe on most PERT/CPM charts makes it harder to show status although colors can help (e.g., specific color for completed nodes), When the PERT/CPM charts become unwieldy, they are no longer used to manage the project." (Borysowich, 2008)
    Traditionally, PERT charts are used in the initial planning of a project, as in a project using the waterfall approach. Once the chart is created, project managers can further analyze the data to determine the earliest start time for each stage in the project. This is important because the information can be used to help forecast resource needs during a project, and where in the project they will be needed. However, an agile environment can approach this differently because of the constant need to be in contact with the client and the other stakeholders. The PERT chart can also be used during a project iteration to determine what is to be worked on next, much like a prioritized to-do list a wife would give her husband at the start of a weekend.
    In my personal opinion, a COTS-centric environment would not really change how a company uses a PERT chart in its day-to-day work. The only thing I can add is that there would be fewer tasks to include in the chart, because the functional milestones are already completed when the components are purchased.
    References:
    http://www.netmba.com/operations/project/pert/
    http://web2.concordia.ca/Quality/tools/20pertchart.pdf
    http://it.toolbox.com/blogs/enterprise-solutions/pros-cons-of-the-pertcpm-method-22221
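    To make the critical-path idea concrete, here is a small, self-contained Python sketch of the standard forward/backward pass over a made-up task graph; the task names and durations are invented for illustration and do not come from the article:

    # Hypothetical task graph: name -> (duration, predecessors), listed in topological order.
    tasks = {
        "A": (3, []),
        "B": (2, ["A"]),
        "C": (4, ["A"]),
        "D": (1, ["B", "C"]),
    }

    # Forward pass: earliest start (ES) / earliest finish (EF).
    es, ef = {}, {}
    for name, (dur, preds) in tasks.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())

    # Backward pass: latest finish (LF) / latest start (LS).
    lf, ls = {}, {}
    for name in reversed(list(tasks)):
        dur, _ = tasks[name]
        successors = [s for s, (_, preds) in tasks.items() if name in preds]
        lf[name] = min((ls[s] for s in successors), default=project_end)
        ls[name] = lf[name] - dur

    # Zero slack (LS == ES) marks the critical path.
    critical = [name for name in tasks if ls[name] == es[name]]
    print("Project duration:", project_end)          # 8
    print("Critical path:", " -> ".join(critical))   # A -> C -> D
    for name in tasks:
        print(name, "slack =", ls[name] - es[name])  # B has slack 2, the rest 0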

    Read the article

  • Silverlight Cream for December 28, 2010 -- #1017

    - by Dave Campbell
    In this Issue: Davide Zordan, Alex Golesh, Michael S. Scherotter, Andrej Tozon, Alex Knight, Jeff Blankenburg (-2-), Jeremy Likness, and Laurent Bugnion.
    Above the Fold:
    Silverlight: "My “What’s new in Silverlight 4 demo” app" by Andrej Tozon
    WP7: "Taking a screenshot from within a Silverlight #WP7 application" by Laurent Bugnion
    Expression Blend: "PathListBox: getting started" by Alex Knight
    Shoutouts:
    If you haven't seen this SurfCube app demo on YouTube yet... check it out now: SurfCube V1.0 Windows Phone 7 Browser
    Want to get a free WP7 class from Shawn Wildermuth? Check this out: Webinar: Writing your first Windows Phone 7 Application
    Koen Zwikstra announced the next preview of his great tool: Silverlight Spy Preview 2
    From SilverlightCream.com:
    Using the Multi-Touch Behavior in a Windows Phone 7 Multi-Page application: Davide Zordan has a post up responding to questions he receives about multi-touch on WP7 in applications spanning more than one page.
    Silverlight for Windows Phone 7 Quick Tip: Fix missing icons while using DatePicker/TimePicker controls: Alex Golesh discusses the use of the DatePicker control from the WP7 toolkit, found an unpleasant surprise associated with the Done/Cancel icons in the ApplicationBar, and has a solution for us.
    Updated SMF Thumbnail Scrubbing Sample Code: Michael S. Scherotter has a post up about an update he's done to Silverlight 4 of code that allows thumbnail views of a video while 'scrubbing' ... don't know what that is? Read the post :)
    My “What’s new in Silverlight 4 demo” app: Andrej Tozon admits he's a little behind with this post, but as he points out, it might be a good time to review Silverlight 4 features, on the eve of 5.
    PathListBox: getting started: One half of the Knight team, Alex Knight this time, has the first post of a series on the PathListBox up ... some real Expression Blend goodness.
    What I Learned in WP7 – Issue #9: Two more from Jeff Blankenburg today; in his number 9 he starts off demonstrating passing data between pages when navigating and finishes up with some excellent info for submitting apps to the marketplace.
    What I Learned in WP7 – Issue #10: Jeff Blankenburg's number 10 elaborates on the query string data he discussed in number 9.
    Using Sterling in Windows Phone 7 Applications: Who better than the author? Jeremy Likness has an end-to-end WP7/Sterling app up on his blog... beginning with downloading Sterling, discussing what's needed to support tombstoning, and even covering custom serialization.
    Taking a screenshot from within a Silverlight #WP7 application: Laurent Bugnion has a post up describing something people have been looking for: getting a screenshot of a WP7 application's page.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Part 2: The Customization Lifecycle

    - by volker.eckardt(at)oracle.com
    To understand the challenges of working with customizations better, please allow me to explain my understanding of the customization lifecycle. The starting point is the functional GAP list. Any GAP can lead to a customization (but does not have to). The decision is driven by priority, gain, costs, future functionality, accepted workarounds, etc. Let's assume the customization has been accepted as such, including estimation. (Otherwise this blog would not have any value.)
    Now the customization lifecycle starts and could look like this:
    - Functional specification
    - Technical specification
    - Technical development
    - Functional setup
    - Module Test
    - System Test
    - Integration Test (if required)
    - Acceptance Test
    - Production mode
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 2 x Upgrade
    - 2 x Upgrade Test
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 1 x Upgrade
    - 1 x Upgrade Test
    - Usage
    - Review for Retirement
    - Accepted Retirement
    - De-installation
    What I would like to highlight here is that any material and documentation you create upfront or during the first phases will usually be used multiple times, partially or completely, and will be enhanced, reviewed and retested. The better the quality is right from the beginning, the better we can perform the next steps.
    What I see very often is the wish to remove a customization: our customers are upgrading and they would like to get at least some of their customizations replaced with standard functionality. To be able to support this process best, the customization documentation should contain at least the following key information:
    - What is/are the business process(es) where this customization is used or linked to?
    - Who was involved in the different customization phases?
    - What are the objects comprising the customization?
    - What is the setup necessary for the customization?
    - What setup comes with the customization, and what has to be done via other tools or manually?
    - What are the test steps and test results (in all test areas)?
    - What are linked customizations?
    - What is the customization complexity?
    - How is this customization classified?
    - Which technologies were used?
    - How many days were needed to create/test/upgrade the customization?
    - Etc.
    If all this is available, a replacement or retirement can be done much more efficiently and precisely, and an estimation and upgrade itself can be executed with much better support. In the following blog entries I will explain in more detail why we suggest tracking such information, by whom this task shall be done, and how.
    Volker Eckardt
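    One way to keep this information tractable is to capture it as a structured record per customization. The sketch below is only an illustration: a hypothetical Python record type whose field names echo the questions above; it is not part of the original post, and the example values are invented.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CustomizationRecord:
        # Fields mirror the key questions listed above.
        name: str
        business_processes: List[str] = field(default_factory=list)
        people_involved: List[str] = field(default_factory=list)
        objects: List[str] = field(default_factory=list)
        setup_steps: List[str] = field(default_factory=list)
        manual_setup_steps: List[str] = field(default_factory=list)
        test_results: List[str] = field(default_factory=list)
        linked_customizations: List[str] = field(default_factory=list)
        complexity: str = "medium"
        classification: str = ""
        technologies: List[str] = field(default_factory=list)
        effort_days: float = 0.0

    # Example entry for a purely fictional customization
    record = CustomizationRecord(
        name="XX_AGING_REPORT",
        business_processes=["Accounts Receivable"],
        technologies=["Oracle Reports", "PL/SQL"],
        complexity="low",
        effort_days=12.5,
    )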

    Read the article

  • Link To Work Item – Visual Studio extension to link changeset(s) to work item directly from VS history window

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/08/11/link-to-work-item-ndash-visual-studio-extension-to-link.aspx
    By linking work items and other objects, you can track related work, dependencies, and changes made over time. As the following illustration shows, specific link types are used to track specific work items and actions. (– via MSDN)
    While making a check-in, Visual Studio 2013 provides a quick way to search for and assign a work item via the pending changes section in Team Explorer. However, if you forget to assign the work item during your check-in, things get cumbersome because Visual Studio does not provide an easy way of assigning it afterwards. For example, you usually have to open the work item and then link the changeset, which involves approximately 7-8 mouse clicks. You will really feel the difficulty if you have to assign a work item to multiple changesets, since you have to repeat the same steps again. Hence, I decided to develop a small Visual Studio extension to make linking a work item to a changeset a bit easier.
    How to use the extension? First, download and install the extension from the VS Gallery (it supports VS 2013 Professional and above). Once you install it, you will see a new "Link To Work Item" menu item when you right-click a changeset in the history window. Clicking the Link To Work Item menu item will open a new dialog with which you can search for a work item. As you can see in the screenshot below, this dialog displays the search results and also the type of each work item. You can also open a work item from this dialog by right-clicking it and clicking 'Open'. Finally, clicking the Save button will actually link the work item to the changeset. One feature which I think is helpful is that you can select multiple changesets from the history window and assign the work item to all of them.
    To summarize the features:
    - Directly assign work items to changesets from the history window
    - Assign a work item to multiple changesets
    - Know the type of the work item before assigning
    - Open the work item from the search results
    - It also supports all default Visual Studio themes
    Below is a small demo showcasing the working of this extension. Finally, if you like the extension, do not forget to rate and review it in the VS Gallery. Also, do not hesitate to provide your suggestions, improvements and any issues you may encounter via GitHub.

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on HP-UX PA-RISC

    - by John Abraham
    As a follow up to our original announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following HP-UX platforms:
    Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): HP-UX PA-RISC (64-bit) (11.31)
    Release 12 (12.0.4 and higher, 12.1.1 and higher): HP-UX PA-RISC (64-bit) (11.31)
    This announcement for Oracle E-Business Suite 11i and R12 includes:
    Real Application Clusters (RAC)
    Oracle Database Vault
    Transparent Data Encryption (Column Encryption)
    TDE Tablespace Encryption
    Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
    Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12
    References
    MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
    MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
    MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
    MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
    MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
    MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
    MOS Document 761570.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.1.1 Upgrade
    MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
    MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
    MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
    MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
    MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
    MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
    MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
    MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
    MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
    MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
    MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Read the article

  • Oracle Financial Analytics for SAP Certified with Oracle Data Integrator EE

    - by denis.gray
    Two days ago Oracle announced the release of Oracle Financial Analytics for SAP. With the amount of press this has garnered in the past two days, there's a key detail that can't be missed: this release is certified with Oracle Data Integrator EE, making the combination of data integration and business intelligence a force to contend with. The Oracle press release included two important bullets:
    · Oracle Financial Analytics for SAP includes a pre-packaged ABAP code compliant adapter and is certified with Oracle Data Integrator Enterprise Edition to integrate SAP Financial Accounting data directly with the analytic application.
    · Helping to integrate SAP financial data and disparate third-party data sources is Oracle Data Integrator Enterprise Edition, which delivers fast, efficient loading and transformation of timely data into a data warehouse environment through its high-performance Extract, Load and Transform (E-LT) technology.
    This is very exciting news, demonstrating Oracle's overall commitment to Oracle Data Integrator EE. This is a great way to start off the new year and we look forward to building on this momentum throughout 2011.
    The following links contain additional information and media responses about the Oracle Financial Analytics for SAP release.
    IDG News Service (also appeared in PC World, Computer World, CIO): "Oracle is moving further into rival SAP's turf with Oracle Financial Analytics for SAP, a new BI (business intelligence) application that can crunch ERP (enterprise resource planning) system financial data for insights."
    Information Week: "Oracle talks a good game about the appeal of an optimized, all-Oracle stack. But the company also recognizes that we live in a predominantly heterogeneous IT world."
    CRN: "While some businesses with SAP Financial Accounting already use Oracle BI, those integrations had to be custom developed. The new offering provides pre-built integration capabilities."
    ECRM Guide: "Among other features, Oracle Financial Analytics for SAP helps front-line managers improve financial performance and decision-making with what the company says is comprehensive, timely and role-based information on their departments' expenses and revenue contributions."
    SAP Getting Started Guide for ODI on OTN: http://www.oracle.com/technetwork/middleware/data-integrator/learnmore/index.html
    For more information on ODI and its SAP connectivity, please review the Oracle® Fusion Middleware Application Adapters Guide for Oracle Data Integrator 11g Release 1 (11.1.1).

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer who wanted to understand how to deploy an application with ADF Security to WLS without using JDeveloper. The main question was what steps were needed in order to set up enterprise roles, security policies and application credentials. In this entry I will explain the steps taken, using JDeveloper 11.1.1.2.0.
    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application, which contains all the security pieces that we need. Open and migrate the project, and make sure you adjust the database settings accordingly.
    Creating the EAR file
    Review the security settings of the application by going into the Application -> Secure menu; you will see that there are two enterprise roles as well as the ADF policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type, and take note of the data source name - in my case I have java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category and give meaningful names to the context root and the application name. Go to the ADFSecurityWL application properties -> Deployment and create a new EAR deployment profile. Uncheck "Auto Generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment". Deploy the application as an EAR file.
    Deploying the application to WLS using the WLS console
    On the WLS console, create a JNDI data source. This is the part of the whole exercise that I found the trickiest, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now deploy the application manually by selecting Deployments -> Install, look for the EAR and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes insert all the security policies and application roles into the WLS instance.
    Creating roles and user groups for the application
    To finish the post-deployment setup, we need to create the groups that are the equivalent of the enterprise roles of ADF Security. For our sample we have two enterprise roles, employeesApplication and managersApplication. After that, we create the application users and assign them to their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • A toolset for self improvement and learning [closed]

    - by Sebastian
    Possible Duplicate: I'm having trouble learning
    I've been working as an IT consultant for 1½ years and I am very passionate about programming. Before that I studied for an MSc in Software Engineering and had a part-time job as a developer for a big telecom company. During that time I also took extra courses and earned an SCJP certificate. I have been continuously reading a lot of books during the last 3½ years.
    Now to my problem. I want to continue learning and become a really, really good developer. Apart from my daytime job as a full-time Java developer, I have taken university courses in languages and paradigms that are new to me, most recently Android game development and then functional programming with Scala. I've read books, gone to conferences and given a couple of presentations for internal training purposes in our local office.
    I want to have some advice from other people who have previously been in my situation or currently are. What are you doing to keep improving yourselves? Here are some things that I have found work for me:
    Reading books: I've mostly read books about best practices for programming, OO design, refactoring, design patterns and TDD; software craftsmanship, if you like. I keep a reading list, and my current book is Apprenticeship Patterns.
    Taking courses: In my country we have a really good system for taking online distance courses. I have also taken one course at coursera.org and I highly recommend that platform. I've looked at courses at oreilly.com, industriallogic and javaspecialists.eu, and they seem to be okay. If someone gives these types of courses a really good review, I can probably convince my boss. Workshops that span a couple of days would probably be harder, but I've seen that Uncle Bob will hold one about refactoring and TDD in 6 months, not far from here.. :) Are there possibly some online learning platforms that I don't know about?
    Educational videos: I've bought Uncle Bob's videos from cleancoders.com and I highly recommend them. The only things I don't like are that they are quite expensive and that he talks about astronomy for ~10 minutes in every episode.
    Getting certified: I had a lot of fun and learned a lot when I studied for the SCJP. I have also done some preparation for the Microsoft equivalent but never went for it. I think it is good when selling yourself as a newly graduated student, and it will also boost your knowledge if you are interested in it.
    Now I would like others to start sharing their experiences and possibly give me some advice!
    BR Sebastian

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >