Search Results

Search found 691 results on 28 pages for 'registering'.

  • Prototype JS swallows errors in dom:loaded, and ajax callbacks?

    - by WishCow
    I can't figure out why Prototype suppresses the error messages in the dom:loaded event, and in AJAX handlers. Given the following piece of HTML:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
          <title>Conforming XHTML 1.1 Template</title>
          <script type="text/javascript" src="prototype.js"></script>
          <script type="text/javascript">
              document.observe('dom:loaded', function() {
                  console.log('domready');
                  console.log(idontexist);
              });
          </script>
      </head>
      <body>
      </body>
      </html>

    The domready event fires and I see the log in the console, but there is no indication of any errors whatsoever. If you move the console.log(idontexist); line out of the handler, you get the "idontexist is not defined" error in the console. I find it a little weird that in other event handlers, like 'click', you do get the error message; it seems that only dom:loaded has this problem. The same goes for AJAX handlers:

      new Ajax.Request('/', {
          method: 'get',
          onComplete: function(r) {
              console.log('xhr complete');
              alert(youwontseeme);
          }
      });

    You won't see any errors. This is with prototype.js 1.6.1, and I can't find any indication of this behavior in the docs, nor a way to enable error reporting in these handlers. I have tried stepping through the code with FireBug's debugger, and it seems to jump to a function on line 53 named K when it encounters the missing variable in the dom:loaded handler:

      K: function(x) { return x }

    But how? Why? When? I can't see any try/catch block there, so how does the program flow end up there? I know that I can make the errors visible by packing my dom:ready handler(s) in try/catch blocks, but that's not a very comfortable option. The same goes for registering a global onException handler for the AJAX calls. Why does it even suppress the errors? Has anyone encountered this before?
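
    For what it's worth, a generic browser-side workaround (not a Prototype API) is to catch inside the handler and rethrow asynchronously, so the exception escapes whatever wrapper is swallowing it and surfaces in the console as usual. A minimal sketch:

      document.observe('dom:loaded', function() {
          try {
              console.log('domready');
              console.log(idontexist); // ReferenceError
          } catch (e) {
              // Rethrowing from a fresh call stack means no upstream
              // try/catch can eat it; the browser reports it normally.
              setTimeout(function() { throw e; }, 0);
          }
      });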

  • Spring validation errors not displayed

    - by Art Vandelay
    I have the following situation. I have a validator to validate my command object and set the errors on an Errors object to be displayed in my form. The validator is invoked as expected and works okay, but the errors I set on the Errors object are not displayed when I am sent back to my form because of the validation errors.

    Validator:

      public void validate(Object obj, Errors err) {
          MyCommand myCommand = (MyCommand) obj;
          int index = 0;
          for (Field field : myCommand.getFields()) {
              if (field.isChecked()) {
                  if ((field.getValue() == null) || (field.getValue().equals(""))) {
                      err.rejectValue("fields[" + index + "].value", "errors.missing");
                  }
              }
              index++;
          }
          if (myCommand.getLimit() < 0) {
              err.rejectValue("limit", "errors.invalid");
          }
      }

    Command:

      public class MyCommand {
          private List<Field> fields;
          private int limit;
          //getters and setters
      }

      public class Field {
          private boolean checked;
          private String name;
          private String value;
          //getters and setters
      }

    Form:

      <form:form id="myForm" method="POST" action="${url}" commandName="myCommand">
          <c:forEach items="${myCommand.fields}" var="field" varStatus="status">
              <form:checkbox path="fields[${status.index}].checked" value="${field.checked}" />
              <c:out value="${field.name}" />
              <form:input path="fields[${status.index}].value" />
              <form:errors path="fields[${status.index}].value" cssClass="error" />
              <form:hidden path="fields[${status.index}].name" />
          </c:forEach>
          <fmt:message key="label.limit" />
          <form:input path="limit" />
          <form:errors path="limit" cssClass="error" />
      </form:form>

    Controller:

      @RequestMapping(value = REQ_MAPPING, method = RequestMethod.POST)
      public String onSubmit(Model model, MyCommand myCommand, BindingResult result) {
          // validate
          myCommandValidator.validate(myCommand, result);
          if (result.hasErrors()) {
              model.addAttribute("myCommand", myCommand);
              return VIEW;
          }
          // form is okay, do stuff and redirect
      }

    Could it be that the paths I give in the validator and tag are not correct? The validator validates a command object containing a list of objects, so that's why I give an index into the list when registering an error message (for example: "fields[" + index + "].value"). Or is it that the Errors object containing the errors is not available to my view? Any help is welcome and appreciated; it might give me a hint or point me in the right direction.
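
    One thing worth checking, as a common cause of this exact symptom in Spring MVC: the BindingResult parameter must come immediately after the command object in the handler signature, and the model attribute name the view sees must match the form's commandName, which is easiest to guarantee with an explicit @ModelAttribute. A hedged sketch of the controller:

      @RequestMapping(value = REQ_MAPPING, method = RequestMethod.POST)
      public String onSubmit(@ModelAttribute("myCommand") MyCommand myCommand,
                             BindingResult result, Model model) {
          myCommandValidator.validate(myCommand, result);
          if (result.hasErrors()) {
              // @ModelAttribute already exposes "myCommand" (and its
              // BindingResult) to the view; no manual addAttribute needed.
              return VIEW;
          }
          // form is okay, do stuff and redirect
      }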

  • iPhone scrollView won't scroll after keyboard is closed

    - by Rob
    I was trying to implement a way to have the UITextField move up when the keyboard opened so that the user could see what they were typing but after implementing the code - it locks my scroll. The view will scroll when the user first gets to that view but after inputting text and closing the keyboard it locks the view. I suspect that it is a problem somewhere in this code: - (void) viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; NSLog(@"Registering for keyboard events"); // Register for the events [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector (keyboardDidShow:) name: UIKeyboardDidShowNotification object:nil]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector (keyboardDidHide:) name: UIKeyboardDidHideNotification object:nil]; // Setup content size scrollView.contentSize = CGSizeMake(SCROLLVIEW_CONTENT_WIDTH, SCROLLVIEW_CONTENT_HEIGHT); //Initially the keyboard is hidden keyboardVisible = NO; } -(void) viewWillDisappear:(BOOL)animated { NSLog (@"Unregister for keyboard events"); [[NSNotificationCenter defaultCenter] removeObserver:self]; } //**-------------------------------** - (void) keyboardDidShow: (NSNotification *)notif { NSLog(@"Keyboard is visible"); // If keyboard is visible, return if (keyboardVisible) { NSLog(@"Keyboard is already visible. Ignore notification."); return; } // Get the size of the keyboard. NSDictionary* info = [notif userInfo]; NSValue* aValue = [info objectForKey:UIKeyboardBoundsUserInfoKey]; CGSize keyboardSize = [aValue CGRectValue].size; // Save the current location so we can restore // when keyboard is dismissed offset = scrollView.contentOffset; // Resize the scroll view to make room for the keyboard CGRect viewFrame = scrollView.frame; viewFrame.size.height -= keyboardSize.height; scrollView.frame = viewFrame; CGRect textFieldRect = [activeField frame]; textFieldRect.origin.y += 10; [scrollView scrollRectToVisible:textFieldRect animated:YES]; NSLog(@"ao fim"); // Keyboard is now visible keyboardVisible = YES; } -(void) keyboardDidHide: (NSNotification *)notif { // Is the keyboard already shown if (!keyboardVisible) { NSLog(@"Keyboard is already hidden. Ignore notification."); return; } // Reset the frame scroll view to its original value scrollView.frame = CGRectMake(0, 0, SCROLLVIEW_CONTENT_WIDTH, SCROLLVIEW_CONTENT_HEIGHT); // Reset the scrollview to previous location scrollView.contentOffset = offset; // Keyboard is no longer visible keyboardVisible = NO; } -(BOOL) textFieldShouldBeginEditing:(UITextField*)textField { activeField = textField; return YES; } I defined the size above the #imports as: #define SCROLLVIEW_CONTENT_HEIGHT 1000 #define SCROLLVIEW_CONTENT_WIDTH 320 Really not sure what the issue is.
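
    One thing that stands out, for what it's worth: keyboardDidHide resets the frame to SCROLLVIEW_CONTENT_WIDTH x SCROLLVIEW_CONTENT_HEIGHT, i.e. to the size of the content (320x1000), not to the scroll view's original on-screen frame. Once the frame is as tall as contentSize, UIScrollView has nothing left to scroll. A sketch that saves and restores the original frame instead (originalFrame is a hypothetical CGRect ivar you would add):

      // Hypothetical ivar in the controller: CGRect originalFrame;

      - (void)keyboardDidShow:(NSNotification *)notif {
          if (keyboardVisible) return;
          NSDictionary *info = [notif userInfo];
          CGSize keyboardSize = [[info objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue].size;
          offset = scrollView.contentOffset;
          originalFrame = scrollView.frame;   // remember the real frame
          CGRect viewFrame = scrollView.frame;
          viewFrame.size.height -= keyboardSize.height;
          scrollView.frame = viewFrame;
          keyboardVisible = YES;
      }

      - (void)keyboardDidHide:(NSNotification *)notif {
          if (!keyboardVisible) return;
          scrollView.frame = originalFrame;   // restore the frame, not the content size
          scrollView.contentOffset = offset;
          keyboardVisible = NO;
      }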

  • Cancel outlook meeting requests via MailMessage in C#

    - by BTmuney
    I'm creating an application using the ASP.NET MVC 1 framework in C#, where I have users that register for events. Upon registering, I create an outlook meeting request public string BuildMeetingRequest(DateTime start, DateTime end, string attendees, string organizer, string subject, string description, string UID, string location) { System.Text.StringBuilder sw = new System.Text.StringBuilder(); sw.AppendLine("BEGIN:VCALENDAR"); sw.AppendLine("VERSION:2.0"); sw.AppendLine("METHOD:REQUEST"); sw.AppendLine("BEGIN:VEVENT"); sw.AppendLine(attendees); sw.AppendLine("CLASS:PUBLIC"); sw.AppendLine(string.Format("CREATED:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow)); sw.AppendLine("DESCRIPTION:" + description); sw.AppendLine(string.Format("DTEND:{0:yyyyMMddTHHmmssZ}", end)); sw.AppendLine(string.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow)); sw.AppendLine(string.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", start)); sw.AppendLine("ORGANIZER;CN=\"NAME\":mailto:" + organizer); sw.AppendLine("SEQUENCE:0"); sw.AppendLine("UID:" + UID); sw.AppendLine("LOCATION:" + location); sw.AppendLine("SUMMARY;LANGUAGE=en-us:" + subject); sw.AppendLine("BEGIN:VALARM"); sw.AppendLine("TRIGGER:-PT720M"); sw.AppendLine("ACTION:DISPLAY"); sw.AppendLine("DESCRIPTION:Reminder"); sw.AppendLine("END:VALARM"); sw.AppendLine("END:VEVENT"); sw.AppendLine("END:VCALENDAR"); return sw.ToString(); } And once built, I use MailMessage, with an alternate view to send out the meeting request: meetingInfo = BuildMeetingRequest(start, end, attendees, organizer, subject, description, UID, location); System.Net.Mime.ContentType mimeType = new System.Net.Mime.ContentType("text/calendar; method=REQUEST"); AlternateView ICSview = AlternateView.CreateAlternateViewFromString(meetingInfo,mimeType); MailMessage message = new MailMessage(); message.To.Add(to); message.From = new MailAddress(from); message.AlternateViews.Add(ICSview); SmtpClient client = new SmtpClient(); client.Send(message); When users get the email in outlook, it shows up as a meeting request, as opposed to a normal email. This works well for sending out updates to the meeting request as well. The only problem that I am having is that I do not know the proper format for sending out a cancellation. I've attempted to examine some meeting request cancellations in text editors and can't seem to pinpoint the difference in the format between cancelling/creating. Any help on this is greatly appreciated.
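
    In iTIP (RFC 5546) terms, a cancellation is the same event sent again with METHOD:CANCEL, the same UID, a higher SEQUENCE, and STATUS:CANCELLED; the method must also change in the MIME content type. A hedged sketch of the deltas against the builder above:

      // In the iCalendar body:
      sw.AppendLine("METHOD:CANCEL");       // was METHOD:REQUEST
      sw.AppendLine("STATUS:CANCELLED");
      sw.AppendLine("SEQUENCE:1");          // any value above the original's 0
      sw.AppendLine("UID:" + UID);          // must match the original request

      // And in the MIME content type:
      var mimeType = new System.Net.Mime.ContentType("text/calendar; method=CANCEL");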

  • Share a "deep link" from a Windows 8/WinRT application

    - by Dave Parker
    I have searched using many different terms and phrases, and waded through many pages of results, but I have (remarkably) not seen anyone else addressing, or even asking about, this issue. So here goes...

    Ultimate Goal: Allow a user viewing a content-based page (may contain both text and images) within a Windows Store app to share that content with someone else.

    Description: I am working on taking a fair amount of content and making it available for browsing/navigating as a Windows 8/WinRT/Windows Store (we need a consistent name here) application. One of the desired features is to take advantage of the Share Charm, such that someone viewing a page could share that page with someone else. The ideal behavior is for the application to implement the Share Source contract, which would share an email message containing some explanatory text, a link to get the app from the Windows Store, and a "deep link" into the shared page in the application.

    Solutions Considered: We had originally looked at just generating a PDF representation of the page, but there are very few external libraries that would work under WinRT, and having to include externally licensed code would be problematic as well. Writing our own PDF generation code would be out of scope. We have also considered generating a Word document or PowerPoint slide using OpenXML, but again, we run up against the limitations of WinRT. In this case, it is highly unlikely the OpenXML SDK is usable in a WinRT application. Another thought was to pre-generate all of the pages as .pdf files, store them as resources, and when the Share Charm is invoked, share the .pdf file associated with the current page. The problem here is the application will have at least 150 content pages, and depending on how we break the content down, up to over 600. This would likely cause serious bloat.

    Where We Are At: Thus we have come to sharing URIs. From what I can tell, though, the "deep linking" feature is only intended for use on Secondary Tiles tied to your application. Another avenue I considered was registering a protocol like "my-special-app:" with the OS and having it fire up the application, but that would require HKCR registry access, which is outside the WinRT sandbox. If it matters, we are leaning towards an HTML/JS application, rather than XAML/C#, because the converted content will all be in HTML and the WebView control in WinRT is fairly limited. This decision is not yet final, though.

    Conclusion: So, is this possible, and if so, how would it be done, or where can I find documentation on it?

    Thanks,
    Dave Parker
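
    A hedged pointer worth verifying against the documentation: Windows Store apps can declare a custom URI scheme in the package manifest (a protocol extension) rather than writing to HKCR themselves; packaging handles the registration, and the app receives the URI through protocol activation. A sketch of the manifest fragment (scheme name illustrative):

      <Extensions>
          <Extension Category="windows.protocol">
              <Protocol Name="my-special-app" />
          </Extension>
      </Extensions>

    An HTML/JS app would then check eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.protocol in its onactivated handler and route to the deep-linked page.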

  • Application error with MyFaces 1.2: java.lang.IllegalStateException: No Factories configured for this Application.

    - by IgorB
    For my app I'm using Tomcat 6.0.x and the Mojarra 1.2_04 JSF implementation. It works fine, but now I would like to switch to the MyFaces 1.2_10 implementation of JSF. During the deployment of my app I get the following error:

      ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/myApp]] StandardWrapper.Throwable
      java.lang.IllegalStateException: No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions! If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml. A typical config looks like this;
      <listener>
        <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
      </listener>
      at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:106)
      at javax.faces.webapp.FacesServlet.init(FacesServlet.java:137)
      at org.apache.myfaces.webapp.MyFacesServlet.init(MyFacesServlet.java:113)
      at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1172)
      at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:992)
      at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4058)
      at org.apache.catalina.core.StandardContext.start(StandardContext.java:4371)
      ...

    Here is part of my web.xml configuration:

      <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <!-- <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> -->
        <servlet-class>org.apache.myfaces.webapp.MyFacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      ...
      <listener>
        <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
      </listener>

    Has anyone experienced a similar error, and what should I do in order to fix it? Thanx!
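
    For what it's worth, the two usual culprits behind this exact exception are (a) the startup listener not actually running before the servlet initializes, and (b) two JSF implementations (Mojarra and MyFaces) on the classpath at the same time. A hedged baseline to try: remove the Mojarra jars from WEB-INF/lib, keep the explicit listener, and use the standard servlet class rather than MyFacesServlet:

      <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>

      <listener>
        <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
      </listener>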

  • IOC/Autofac problem

    - by Krazzy
    I am currently using Autofac, but am open to commentary regarding other IOC containers as well. I would prefer a solution using Autofac if possible. I am also somewhat new to IOC, so I may be grossly misunderstanding what I should be using an IOC container for. Basically, the situation is as follows: I have a topmost IOC container for my app. I have a tree of child containers/scopes where I would like the same "service" (IWhatever) to resolve differently depending on which level in the tree it is resolved. Furthermore, if a service is not registered at some level in the tree, I would like the tree to be traversed upward until a suitable implementation is found. Furthermore, when constructing a given component, it is quite possible that I will need access to the parent container/scope. In many cases the component that I'm registering will have a dependency on the same or a different service in a parent scope. Is there any way to express this dependency with Autofac? Something like:

      builder.Register(c => {
          var parentComponent = ?.Resolve<ISomeService>();
          var childComponent = new ConcreteService(parentComponent, args...);
          return childComponent;
      }).As<ISomeService>();

    I can't get anything similar to the above pseudocode to work for several reasons:

    A) It seems that all levels in the scope tree share a common set of registrations. I can't seem to find a way to make a given registration confined to a certain "scope".

    B) I can't seem to find a way to get a hold of the parent scope of a given scope. I CAN resolve ILifetimeScope in the container and then cast it to a concrete LifetimeScope instance, which provides its parent scope, but I'm guessing it is probably not meant to be used this way. Is this safe?

    C) I'm not sure how to tell Autofac which container owns the resolved object. For many components I would like the component to be "owned" by the scope in which it is constructed. Could tagged contexts help me here? Would I have to tag every level of the tree with a unique tag? This would be difficult because the tree depth is determined at runtime.

    Sorry for the extremely lengthy question. In summary:

    1) Is there any way to do what I want to do using Autofac?
    2) Is there another container more suited to this kind of dependency structure?
    3) Is IOC the wrong tool for this altogether?
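
    In recent Autofac 2.x versions this maps fairly directly onto nested lifetime scopes: ILifetimeScope.BeginLifetimeScope has an overload taking an Action<ContainerBuilder>, registrations added there shadow the parent's, and anything not registered in the child resolves from the parent automatically. A hedged sketch (type names illustrative, not from the question):

      using Autofac;

      var builder = new ContainerBuilder();
      builder.RegisterType<RootService>().As<ISomeService>();
      var container = builder.Build();

      // Capture the parent's component first, then shadow the
      // registration inside a nested scope; anything NOT re-registered
      // in the child falls back to the parent automatically.
      var parentService = container.Resolve<ISomeService>();
      using (var child = container.BeginLifetimeScope(b =>
          b.Register(c => new ConcreteService(parentService))
           .As<ISomeService>()))
      {
          var svc = child.Resolve<ISomeService>(); // ConcreteService wrapping RootService
      } // components owned by the child scope are disposed here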

  • General workflow to allow multiple OpenIDs to be associated with one app account

    - by BobTodd
    I have a (typical?) scenario: my app's users can use multiple openids mapped to one app account (like stackoverflow). For me the unique thing on the account is the email address, so this binds openids to the profile. The question is how to allow a user to start using a second openid once one is set up. I am asking because I have read that it is a security hole to allow automatic account/openid syncing based simply on the provider-supplied email address, as someone could easily spoof someone's email address to create a spoof openid and falsely access the account (how, I am not sure) - although this seems to be exactly how Stack operates. See options a. and b. below. The problem for me with a. is: what happens if the original openid no longer works for whatever reason - how would you set up a new openid? Would b. be more acceptable if we used email verification? Does anyone have an article detailing a "standard" way (set of user stories) for this - it seems to be an increasingly popular way to authenticate. I have tried to detail this in a rough decision tree...

    1. My Site > authentication landing page - user chooses an openid (facebook, google, myopenid etc), redirection >
    2. Provider site returns with token (includes user registering a new openid, logging in, or already being logged in to the provider site)
    3. My Site > use token id to look up user
       3.1 Profile exists?
           Yes > authenticate. ends.
           No > 3.1.1 Was an email address supplied by the provider?
                Yes > look up user by email address
                      3.1.1.1 Profile exists?
                              Yes > a. error message - please log in with existing openid and associate this openid (from a special page), or
                                    b. associate this openid with the existing profile automatically. authenticate. ends.
                              No > register profile. With the registration email address follow 3.1.1, except this time, where the email is unique, we will associate the openid. ends.

  • register_globals error in PHP

    - by user145862
    I was stuck with the error "Directive 'register_globals' is no longer available in PHP in Unknown on line 0" when I tried to check the PHP version using "php -v" after enabling register_globals in the php.ini file. I do not get any PHP version info by doing so; instead it throws the above-mentioned error. After turning off this option, php -v works quite well. It is essential for me to have register_globals turned on. How can I have this corrected? My php.ini is as follows:

      ; Default Value: None
      ; Development Value: "GP"
      ; Production Value: "GP"
      ; http://php.net/request-order
      request_order = "GP"
      ; Whether or not to register the EGPCS variables as global variables. You may
      ; want to turn this off if you don't want to clutter your scripts' global scope
      ; with user data.
      ; You should do your best to write your scripts so that they do not require
      ; register_globals to be on; Using form variables as globals can easily lead
      ; to possible security problems, if the code is not very well thought of.
      register_globals = On
      ; Determines whether the deprecated long $HTTP_*_VARS type predefined variables
      ; are registered by PHP or not. As they are deprecated, we obviously don't
      ; recommend you use them. They are on by default for compatibility reasons but
      ; they are not recommended on production servers.
      ; Default Value: On
      ; Development Value: Off
      ; Production Value: Off
      register_long_arrays = Off
      ; This directive determines whether PHP registers $argv & $argc each time it
      ; runs. $argv contains an array of all the arguments passed to PHP when a script
      ; is invoked. $argc contains an integer representing the number of arguments
      ; that were passed when the script was invoked. These arrays are extremely
      ; useful when running scripts from the command line. When this directive is
      ; enabled, registering these variables consumes CPU cycles and memory each time
      ; a script is executed. For performance reasons, this feature should be disabled
      ; on production servers.
      ; Note: This directive is hardcoded to On for the CLI SAPI
      ; Default Value: On
      ; Development Value: Off
      ; Production Value: Off
      register_argc_argv = Off
      ; When enabled, the SERVER and ENV variables are created when they're first
      ; used (Just In Time) instead of when the script starts. If these variables
      ; are not used within a script, having this directive on will result in a
      ; performance gain. The PHP directives register_globals, register_long_arrays,
      ; and register_argc_argv must be disabled for this directive to have any affect.
      auto_globals_jit = On
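
    Note the wording of the error - "no longer available" - which matches register_globals having been removed outright in PHP 5.4; it cannot be re-enabled in any php.ini on that version. If legacy code genuinely depends on it, the usual stopgap (with the well-known security caveats) is to emulate it at the top of the entry script. A hedged sketch:

      <?php
      // Emulate register_globals on PHP 5.4+ for legacy code.
      // EXTR_SKIP refuses to overwrite existing variables, which
      // blunts (but does not remove) the injection risk.
      extract($_GET, EXTR_SKIP);
      extract($_POST, EXTR_SKIP);
      extract($_COOKIE, EXTR_SKIP);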

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has an AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay?

    Disable IPv6 on the client? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Too many clients to ensure this is set everywhere consistently. Will cause more problems later when we finally implement IPv6.

    Disable IPv6 on the server? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Requires an inconvenient registry hack to disable the entire IPv6 stack. Ensuring this is correctly set on all servers is inconvenient. Will cause more problems later when we finally implement IPv6.

    Mask IPv6 records on the user-facing DNS recursor? Nope, we're using NLNet Unbound and it doesn't support that.

    Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible.

    At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way.

    UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable.

      C:\Windows\system32>ping myhost.mydomain

      Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data:
      General failure.
      General failure.
      General failure.
      General failure.

      Ping statistics for 2002:1234:1234::1234:1234:
          Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it does a fallback to IPv4.

    UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC1918 addresses.

    UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly.

      netsh int ipv6 6to4 set state disabled

    But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
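
    One client-side knob that addresses exactly this behaviour without disabling IPv6: Windows chooses destination addresses per RFC 3484 prefix policies, and its preference for IPv6 can be flipped centrally. A hedged sketch of the usual options (the registry value is documented in Microsoft KB 929852, is deployable by GPO, and needs a reboot):

      rem Prefer IPv4 over IPv6 in address selection (DisabledComponents = 0x20).
      reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters ^
          /v DisabledComponents /t REG_DWORD /d 0x20 /f

      rem Per-host alternative already found to work above: turn off the 6to4 interface.
      netsh interface ipv6 6to4 set state disabled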

  • CentOS 7: FreeRADIUS fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB), and it seems the FreeRADIUS service can't start properly at boot. But it starts fine when run as root or in debug mode (radiusd -X), and then works just fine! Debug mode shows no errors. The systemctl command shows that radiusd.service has failed to start.

    /var/log/messages output:

      Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server...
      Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server...
      Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server....
      Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon.
      Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
      Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
      Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent.
      Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*.
      Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1
      Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server..
      Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state.
      Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel
      Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK]
      Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming.
      Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server.
      Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server.

    /var/log/radius/radius.log output:

      Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
      Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius"
      Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0)
      Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius
      Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'
      Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0)
      Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql"

    After seeing this I tried to replicate the problem: I killed mariadb.service and ran debug mode again, and it spits out the same errors as in radius.log. I tried disabling iptables and firewalld and rebooting, but no luck:

      systemctl disable iptables
      systemctl disable firewalld

    So maybe the problem is in the process startup order, or a delay of some kind. Maybe FreeRADIUS's SQL module can't connect to a not-yet-started MariaDB? If so, how can I fix this? In earlier versions of RHEL/CentOS you could easily see the service start order in rc.d and such; now IDK. I am new to this fancy "systemd", "systemctl", "firewalld" stuff CentOS 7 introduced, so sorry, I'm a little bit confused. Also this new FreeRADIUS 3 structure...

    PS. MariaDB is enabled on startup, and the credentials in the FreeRADIUS DB configuration are correct.

    A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output:

      [Unit]
      Description=FreeRADIUS high performance RADIUS server.
      After=syslog.target network.target

      [Service]
      Type=forking
      PIDFile=/var/run/radiusd/radiusd.pid
      ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
      ExecStartPre=/usr/sbin/radiusd -C
      ExecStart=/usr/sbin/radiusd -d /etc/raddb
      ExecReload=/usr/sbin/radiusd -C
      ExecReload=/bin/kill -HUP $MAINPID

      [Install]
      WantedBy=multi-user.target
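
    The unit file confirms the startup-order hunch: radiusd only orders itself after syslog.target and network.target, so systemd is free to start it in parallel with MariaDB, and ExecStartPre=/usr/sbin/radiusd -C fails while the MySQL socket doesn't exist yet. A hedged fix is a drop-in that adds an ordering dependency (file path per stock systemd conventions):

      # /etc/systemd/system/radiusd.service.d/override.conf
      [Unit]
      After=mariadb.service
      Wants=mariadb.service

    Then reload and retest:

      systemctl daemon-reload
      systemctl restart radiusd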

  • I need to understand why my server turned off

    - by Dema
    Our organization was robbed, and it was definitely an inside job. I was set up. I work as a manager and as system administrator in this organization, and everything goes against me. The only clue I have is that someone accidentally or intentionally turned off a server that is in the office, indicating that someone was inside at a time when no one should have been. This is the only evidence I have that can justify me. I looked at the log files and they show that the power button was pressed. Can you help me establish that this was not a bug or the system overheating? I will post the log files, and if you ask for more I will gladly provide the information.

    Messages:

      Dec 24 21:43:14 jamx shutdown[27883]: shutting down for system halt
      Dec 24 21:43:15 jamx init: Switching to runlevel: 0
      Dec 24 21:43:15 jamx smartd[3047]: smartd received signal 15: Terminated
      Dec 24 21:43:15 jamx smartd[3047]: smartd is exiting (exit status 0)
      Dec 24 21:43:15 jamx avahi-daemon[3015]: Got SIGTERM, quitting.
      Dec 24 21:43:15 jamx avahi-daemon[3015]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::221:85ff:fe11:8221.
      Dec 24 21:43:15 jamx avahi-daemon[3015]: Leaving mDNS multicast group on interface eth0.IPv4 with address 82.207.41.239.
      Dec 24 21:43:15 jamx shutdown[27962]: shutting down for system halt
      Dec 24 21:43:15 jamx saslauthd[2983]: server_exit     : master exited: 2983
      Dec 24 21:43:29 jamx nmbd[2921]: [2010/12/24 21:43:29, 0] nmbd/nmbd.c:terminate(58)
      Dec 24 21:43:29 jamx nmbd[2921]:   Got SIGTERM: going down...
      Dec 24 21:43:31 jamx clamd[2526]: Pid file removed.
      Dec 24 21:43:31 jamx clamd[2526]: --- Stopped at Fri Dec 24 21:43:31 2010
      Dec 24 21:43:31 jamx clamd[2526]: Socket file removed.
      Dec 24 21:43:31 jamx mydns[2645]: jamx.org.ua up 9h44m48s (35088s) 117 questions (0/s) NOERROR=117 SERVFAIL=0 NXDOMAIN=0 NOTIMP=0 REFUSED=0 (100% TCP, 117 queries)
      Dec 24 21:43:31 jamx mydns[2645]: terminated
      Dec 24 21:43:34 jamx ntpd[2512]: ntpd exiting on signal 15
      Dec 24 21:43:34 jamx hcid[2265]: Got disconnected from the system message bus
      Dec 24 21:43:35 jamx rpc.statd[2167]: Caught signal 15, un-registering and exiting.
      Dec 24 21:43:35 jamx portmap[28473]: connect from 127.0.0.1 to unset(status): request from unprivileged port
      Dec 24 21:43:35 jamx auditd[2021]: The audit daemon is exiting.
      Dec 24 21:43:35 jamx kernel: audit(1293219815.505:4044): audit_pid=0 old=2021 by auid=4294967295
      Dec 24 21:43:35 jamx pcscd: pcscdaemon.c:572:signal_trap() Preparing for suicide
      Dec 24 21:43:36 jamx pcscd: hotplug_libusb.c:376:HPRescanUsbBus() Hotplug stopped
      Dec 24 21:43:36 jamx pcscd: readerfactory.c:1379:RFCleanupReaders() entering cleaning function
      Dec 24 21:43:36 jamx pcscd: pcscdaemon.c:532:at_exit() cleaning /var/run
      Dec 24 21:43:36 jamx kernel: Kernel logging (proc) stopped.
      Dec 24 21:43:36 jamx kernel: Kernel log daemon terminating.
      Dec 24 21:43:37 jamx exiting on signal 15

    Acpid:

      [Fri Dec 24 21:43:14 2010] received event "button/power PWRF 00000080 00000001"
      [Fri Dec 24 21:43:14 2010] notifying client 2382[68:68]
      [Fri Dec 24 21:43:14 2010] executing action "/bin/ps awwux | /bin/grep gnome-power-manager | /bin/grep -qv grep || /sbin/shutdown -h now"
      [Fri Dec 24 21:43:14 2010] BEGIN HANDLER MESSAGES
      [Fri Dec 24 21:43:15 2010] END HANDLER MESSAGES
      [Fri Dec 24 21:43:15 2010] action exited with status 0
      [Fri Dec 24 21:43:15 2010] completed event "button/power PWRF 00000080 00000001"
      [Fri Dec 24 21:43:15 2010] received event "button/power PWRF 00000080 00000002"
      [Fri Dec 24 21:43:15 2010] notifying client 2382[68:68]
      [Fri Dec 24 21:43:15 2010] executing action "/bin/ps awwux | /bin/grep gnome-power-manager | /bin/grep -qv grep || /sbin/shutdown -h now"
      [Fri Dec 24 21:43:15 2010] BEGIN HANDLER MESSAGES
      [Fri Dec 24 21:43:15 2010] END HANDLER MESSAGES
      [Fri Dec 24 21:43:15 2010] action exited with status 0
      [Fri Dec 24 21:43:15 2010] completed event "button/power PWRF 00000080 00000002"
      [Fri Dec 24 21:43:34 2010] exiting

  • Oracle Virtual Server OEL vm fails to start - kernel panic on cpu identify

    - by Towndrunk
    I am in the process of following a guide to set up various Oracle VM templates. So far I have installed OVS 2.2 and got the OVM Manager working, imported the template for OEL5U5 and created a VM from it. The problem comes when starting that VM. The log in the OVMM console shows the following:

      Update VM Status - Running
      Configure CPU Cap
      Set CPU Cap: failed:<Exception: failed:<Exception: ['xm', 'sched-credit', '-d', '32_EM11g_OVM', '-c', '0']
      => Error: Domain '32_EM11g_OVM' does not exist.
      StackTrace:
        File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2531, in xen_set_cpu_cap
          run_cmd(args=['xm',
        File "/opt/ovs-agent-2.3/OVSCommons.py", line 92, in run_cmd
          raise Exception('%s => %s' % (args, err))

    The xend.log shows:

      [2012-11-12 16:42:01 7581] DEBUG (DevController:139) Waiting for devices vtpm
      [2012-11-12 16:42:01 7581] INFO (XendDomain:1180) Domain 32_EM11g_OVM (3) unpaused.
      [2012-11-12 16:42:03 7581] WARNING (XendDomainInfo:1907) Domain has crashed: name=32_EM11g_OVM id=3.
      [2012-11-12 16:42:03 7581] ERROR (XendDomainInfo:2041) VM 32_EM11g_OVM restarting too fast (Elapsed time: 11.377262 seconds). Refusing to restart to avoid loops.
      [2012-11-12 16:42:03 7581] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=3
      [2012-11-12 16:42:12 7581] DEBUG (XendDomainInfo:2230) Destroying device model
      [2012-11-12 16:42:12 7581] INFO (image:553) 32_EM11g_OVM device model terminated

    I have set on_crash="preserve" in the vm.cfg and have then run xm create -c to get the console screen while booting, and this is the log of what happens:

      Started domain 32_EM11g_OVM (id=4)
      Bootdata ok (command line is ro root=LABEL=/ )
      Linux version 2.6.18-194.0.0.0.3.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Mar 29 18:27:00 EDT 2010
      BIOS-provided physical RAM map:
      Xen: 0000000000000000 - 0000000180800000 (usable)
      No mptable found.
      Built 1 zonelists. Total pages: 1574912
      Kernel command line: ro root=LABEL=/
      Initializing CPU#0
      PID hash table entries: 4096 (order: 12, 32768 bytes)
      Xen reported: 1600.008 MHz processor.
      Console: colour dummy device 80x25
      Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
      Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
      Software IO TLB disabled
      Memory: 6155256k/6299648k available (2514k kernel code, 135548k reserved, 1394k data, 184k init)
      Calibrating delay using timer specific routine.. 4006.42 BogoMIPS (lpj=8012858)
      Security Framework v1.0.0 initialized
      SELinux: Initializing.
      selinux_register_security: Registering secondary module capability
      Capability LSM initialized as secondary
      Mount-cache hash table entries: 256
      CPU: L1 I Cache: 64K (64 bytes/line), D cache 16K (64 bytes/line)
      CPU: L2 Cache: 2048K (64 bytes/line)
      general protection fault: 0000 [1] SMP
      last sysfs file:
      CPU 0
      Modules linked in:
      Pid: 0, comm: swapper Not tainted 2.6.18-194.0.0.0.3.el5xen #1
      RIP: e030:[ffffffff80271280] [ffffffff80271280] identify_cpu+0x210/0x494
      RSP: e02b:ffffffff80643f70 EFLAGS: 00010212
      RAX: 0040401000810008 RBX: 0000000000000000 RCX: 00000000c001001f
      RDX: 0000000000404010 RSI: 0000000000000001 RDI: 0000000000000005
      RBP: ffffffff8063e980 R08: 0000000000000025 R09: ffff8800019d1000
      R10: 0000000000000026 R11: ffff88000102c400 R12: 0000000000000000
      R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
      FS: 0000000000000000(0000) GS:ffffffff805d2000(0000) knlGS:0000000000000000
      CS: e033 DS: 0000 ES: 0000
      Process swapper (pid: 0, threadinfo ffffffff80642000, task ffffffff804f4b80)
      Stack: 0000000000000000 ffffffff802d09bb ffffffff804f4b80 0000000000000000
             0000000021100800 0000000000000000 0000000000000000 ffffffff8064cb00
             0000000000000000 0000000000000000
      Call Trace:
        [ffffffff802d09bb] kmem_cache_zalloc+0x62/0x80
        [ffffffff8064cb00] start_kernel+0x210/0x224
        [ffffffff8064c1e5] _sinittext+0x1e5/0x1eb
      Code: 0f 30 b8 73 00 00 00 f0 0f ab 45 08 e9 f0 00 00 00 48 89 ef
      RIP [ffffffff80271280] identify_cpu+0x210/0x494
      RSP ffffffff80643f70
      Kernel panic - not syncing: Fatal exception

    Clear as mud to me. Are there any other logs that will help me? I have now deployed another VM from the same template and used the default VM settings rather than adding more memory etc. - I get exactly the same error.

  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
    Okay, so I've tried stopping and re-registering the w32time service on this Windows Server 2008 Enterprise computer:

      C:\Users\Administrator>net stop w32time
      The Windows Time service is stopping.
      The Windows Time service was stopped successfully.

      C:\Users\Administrator>w32tm /unregister
      The following error occurred: Access is denied. (0x80070005)

      C:\Users\Administrator>w32tm /unregister
      W32Time successfully unregistered.

      C:\Users\Administrator>w32tm /register
      W32Time successfully registered.

      C:\Users\Administrator>net start w32time
      The Windows Time service is starting.
      The Windows Time service was started successfully.

    (Source: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/)

    And I still get this error from the Date and Time, Internet Time tab (after also following the steps here). I've even tried the Atomic Time Clock Worldtimeserver and I get the error "The following error occurred: The specified module could not be found. (0x8007007E)". I've also disabled the Windows Firewall, which might have been blocking the synchronization. I've done a file scan with sfc /scannow, which came back with no errors:

      C:\Users\Administrator>sfc /scannow

      Beginning system scan. This process will take some time.

      Beginning verification phase of system scan.
      Verification 100% complete.
      Windows Resource Protection did not find any integrity violations.

    But I'm not having much luck. Is there any way to possibly solve this? Or are the time.windows.com servers unsupported because the software is from 2008? (I really don't know :/)

    My ping result to time.windows.com:

      C:\Users\Administrator>ping time.windows.com

      Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data:
      Request timed out.
      Request timed out.
      Request timed out.
      Request timed out.

      Ping statistics for 65.55.21.22:
          Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    And tracert result:

      C:\Users\Administrator>tracert time.windows.com

      Tracing route to time.microsoft.akadns.net [65.55.21.24]
      over a maximum of 30 hops:

        1     1 ms    <1 ms    <1 ms  192.168.1.1
        2    32 ms    31 ms    32 ms  be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113]
        3    31 ms    32 ms    31 ms  be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-addr.arpa [203.109.129.114]
        4    31 ms    31 ms    31 ms  gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in-addr.arpa [203.109.180.210]
        5    31 ms    31 ms    30 ms  gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.209]
        6   167 ms   166 ms   166 ms  ip-141.199.31.114.VOCUS.net.au [114.31.199.141]
        7   175 ms   175 ms   175 ms  microsoft.com.any2ix.coresite.com [206.223.143.143]
        8   177 ms   180 ms   176 ms  xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.176]
        9   205 ms   205 ms   204 ms  xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.31]
       10     *        *        *     Request timed out.
       11     *        *        *     Request timed out.
       12     *        *        *     Request timed out.
       13     *        *        *     Request timed out.
       14     *        *        *     Request timed out.
       15     *        *        *     Request timed out.
       16  ^C

    And nslookup:

      C:\Users\Administrator>nslookup time.windows.com
      Server:  UnKnown
      Address:  192.168.1.1

      Non-authoritative answer:
      Name:    time.microsoft.akadns.net
      Address:  65.55.21.22
      Aliases:  time.windows.com
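
    Given that ping and tracert both die well before reaching time.windows.com, the quickest test is pointing the Windows Time service at a different, reachable NTP source. These are standard w32tm commands; the peer names are just illustrative:

      net stop w32time
      w32tm /config /manualpeerlist:"pool.ntp.org time.nist.gov" /syncfromflags:manual /update
      net start w32time
      w32tm /resync /rediscover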

  • Bridging a non-persistent PPP connection to wireless (or wired) in Windows XP

    - by phooze
    I have a 3G modem-like device (eMobile's D01NX, PC card style, for any Japan nerds out there) that I use to connect my PC to the Internet. I'd like to bridge this connection with another computer, either via an ad-hoc wireless network or a simple cross-over cable (either is an option). However, when I open "Network Connections", I do not see the PPP connection (otherwise I could click both and bridge). I believe this is because there is software (provided by the vendor) that is handling the card directly and registering a PPP connection dynamically. When connected, an ipconfig at the command line yields:

      Ethernet adapter wireless:
        Connection-specific DNS Suffix  . :
        Autoconfiguration IP Address. . . : 169.254.5.169
        Subnet Mask . . . . . . . . . . . : 255.255.0.0
        Default Gateway . . . . . . . . . :

      Ethernet adapter lan:
        Media State . . . . . . . . . . . : Media disconnected

      PPP adapter {B59EEDDE-A22B-48DF-93E5-04842B641257}:
        Connection-specific DNS Suffix  . :
        IP Address. . . . . . . . . . . . : 114.xx.xxx.xx
        Subnet Mask . . . . . . . . . . . : 255.255.255.255
        Default Gateway . . . . . . . . . : 114.xx.xxx.xx

    (I've commented out my IP address for privacy reasons, but what does appear there is a functional Internet IP address.) When I disconnect the adapter with the vendor software, the PPP connection disappears completely from the ipconfig list. Any ideas on how to do this?

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
    OK, in an attempt to get some response, a TL;DR version. I know that the following command:

      iptables -A PREROUTING -t nat -i eth0 --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080

    ...will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if someone decides to enable SSH later, they can connect directly without worrying about iptables. What is needed to reliably forward all traffic from an external IP to an internal IP, and vice versa?

    Extended version: I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST), with several virtual machines (VM) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VMs run under VirtualBox, and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it is greatly lagging. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) are running Linux. What I did was the following:

    1. Configure the VM to have a static IP in the network of the vboxnet0 adapter.
    2. Add an IP alias to the host, registering to the dedicated (outside) IP.
    3. Setup iptables to allow traffic to come through (via sysctl).
    4. Configure iptables to DNAT and SNAT data from a given IP address to the internal address.

    iptables commands:

      sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      sudo iptables -A POSTROUTING -t nat -j MASQUERADE
      iptables -t nat -I PREROUTING -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
      iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

    Now the site works, but is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now. Some background info: before this, the site was working with basic port forwarding. E.g. port 80 was mapped to port 8080 using iptables. In VirtualBox (having the network adapter configured as NAT), a port forwarding the other way around made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc). Second, it only allowed one IP address to use port 80. To resolve things, multiple external IP addresses are used for different (sub)domains. Likewise, the "VirtualBox" network will contain the virtual machines:

      DNS            Ext. IP    Adapter    VM            "VirtualBox" IP
      ------------------------------------------------------------------
      a.example.com  1.1.1.1    eth0:1     vm_guest_1    192.168.56.1
      b.example.com  2.2.2.2    eth0:2     vm_guest_2    192.168.56.2
      c.example.com  3.3.3.3    eth0:3     vm_guest_3    192.168.56.3

    And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (or, put differently, from 1.1.1.1 to 192.168.56.1), and achieve this with an acceptable speed :).
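
    For reference, a consolidated per-IP forwarding recipe under the same assumptions (eth0 facing out, one alias per guest); the variables are the poster's own placeholders. Note that ip_forward must be on, and that the blanket MASQUERADE rule above can fight with the explicit SNAT rule, so it may be worth dropping it:

      # enable routing between interfaces
      sysctl -w net.ipv4.ip_forward=1

      # map everything arriving for the public alias to the guest, and back
      iptables -t nat -A PREROUTING  -d "$OUT_IP" -i eth0 -j DNAT --to-destination "$IN_IP"
      iptables -t nat -A POSTROUTING -s "$IN_IP"  -o eth0 -j SNAT --to-source "$OUT_IP"

      # let the forwarded traffic through the FORWARD chain
      iptables -A FORWARD -d "$IN_IP" -j ACCEPT
      iptables -A FORWARD -s "$IN_IP" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT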

  • NLog Exception Details Renderer

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/nlog-exception-details-renderer.aspx

    I recently switched from Microsoft's Enterprise Library Logging block to NLog. In my opinion, NLog offers a simpler and much cleaner configuration section with better use of placeholders, complemented by custom variables. Despite this, I found one deficiency in my migration: I had lost the ability to simply render all details of an exception into our logs and notification emails. This is easily remedied by implementing a custom layout renderer. Start by extending 'NLog.LayoutRenderers.LayoutRenderer' and overriding the 'Append' method.

      using System.Text;
      using NLog;
      using NLog.Config;
      using NLog.LayoutRenderers;

      [ThreadAgnostic]
      [LayoutRenderer(Name)]
      public class ExceptionDetailsRenderer : LayoutRenderer
      {
          public const string Name = "exceptiondetails";

          protected override void Append(StringBuilder builder, LogEventInfo logEvent)
          {
              // Todo: Append details to StringBuilder
          }
      }

    Now that we have a base layout renderer, we simply need to add the formatting logic to add exception details as well as inner exception details. This is done using reflection with some simple filtering for the properties that are already being rendered. I have added an additional 'Register' method, allowing the definition to be registered in code rather than in configuration files. This complements my 'LogWrapper' class, which standardizes writing log entries throughout my applications.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Reflection;
      using System.Text;
      using NLog;
      using NLog.Config;
      using NLog.LayoutRenderers;

      [ThreadAgnostic]
      [LayoutRenderer(Name)]
      public sealed class ExceptionDetailsRenderer : LayoutRenderer
      {
          public const string Name = "exceptiondetails";
          private const string _Spacer = "======================================";
          private List<string> _FilteredProperties;

          private List<string> FilteredProperties
          {
              get
              {
                  if (_FilteredProperties == null)
                  {
                      _FilteredProperties = new List<string>
                      {
                          "StackTrace",
                          "HResult",
                          "InnerException",
                          "Data"
                      };
                  }

                  return _FilteredProperties;
              }
          }

          public bool LogNulls { get; set; }

          protected override void Append(StringBuilder builder, LogEventInfo logEvent)
          {
              Append(builder, logEvent.Exception, false);
          }

          private void Append(StringBuilder builder, Exception exception, bool isInnerException)
          {
              if (exception == null)
              {
                  return;
              }

              builder.AppendLine();

              var type = exception.GetType();
              if (isInnerException)
              {
                  builder.Append("Inner ");
              }

              builder.AppendLine("Exception Details:")
                     .AppendLine(_Spacer)
                     .Append("Exception Type: ")
                     .AppendLine(type.ToString());

              var bindingFlags = BindingFlags.Instance | BindingFlags.Public;
              var properties = type.GetProperties(bindingFlags);
              foreach (var property in properties)
              {
                  var propertyName = property.Name;
                  var isFiltered = FilteredProperties.Any(filter => String.Equals(propertyName, filter, StringComparison.InvariantCultureIgnoreCase));
                  if (isFiltered)
                  {
                      continue;
                  }

                  var propertyValue = property.GetValue(exception, bindingFlags, null, null, null);
                  if (propertyValue == null && !LogNulls)
                  {
                      continue;
                  }

                  var valueText = propertyValue != null ? propertyValue.ToString() : "NULL";
                  builder.Append(propertyName)
                         .Append(": ")
                         .AppendLine(valueText);
              }

              AppendStackTrace(builder, exception.StackTrace, isInnerException);
              Append(builder, exception.InnerException, true);
          }

          private void AppendStackTrace(StringBuilder builder, string stackTrace, bool isInnerException)
          {
              if (String.IsNullOrEmpty(stackTrace))
              {
                  return;
              }

              builder.AppendLine();

              if (isInnerException)
              {
                  builder.Append("Inner ");
              }

              builder.AppendLine("Exception StackTrace:")
                     .AppendLine(_Spacer)
                     .AppendLine(stackTrace);
          }

          public static void Register()
          {
              Type definitionType;
              var layoutRenderers = ConfigurationItemFactory.Default.LayoutRenderers;
              if (layoutRenderers.TryGetDefinition(Name, out definitionType))
              {
                  return;
              }

              layoutRenderers.RegisterDefinition(Name, typeof(ExceptionDetailsRenderer));
              LogManager.ReconfigExistingLoggers();
          }
      }

    For brevity I have removed the Trace, Debug, Warn, and Fatal methods. They are modelled after the Info methods. As mentioned above, note how the log wrapper automatically registers our custom layout renderer, reducing the amount of application configuration required.

      using System;
      using NLog;

      public static class LogWrapper
      {
          static LogWrapper()
          {
              ExceptionDetailsRenderer.Register();
          }

          #region Log Methods

          public static void Info(object toLog)
          {
              Log(toLog, LogLevel.Info);
          }

          public static void Info(string messageFormat, params object[] parameters)
          {
              Log(messageFormat, parameters, LogLevel.Info);
          }

          public static void Error(object toLog)
          {
              Log(toLog, LogLevel.Error);
          }

          public static void Error(string message, Exception exception)
          {
              Log(message, exception, LogLevel.Error);
          }

          private static void Log(string messageFormat, object[] parameters, LogLevel logLevel)
          {
              string message = parameters.Length == 0 ? messageFormat : string.Format(messageFormat, parameters);
              Log(message, (Exception)null, logLevel);
          }

          private static void Log(object toLog, LogLevel logLevel, LogType logType = LogType.General)
          {
              if (toLog == null)
              {
                  throw new ArgumentNullException("toLog");
              }

              if (toLog is Exception)
              {
                  var exception = toLog as Exception;
                  Log(exception.Message, exception, logLevel, logType);
              }
              else
              {
                  var message = toLog.ToString();
                  Log(message, null, logLevel, logType);
              }
          }

          private static void Log(string message, Exception exception, LogLevel logLevel, LogType logType = LogType.General)
          {
              if (exception == null && String.IsNullOrEmpty(message))
              {
                  return;
              }

              var logger = GetLogger(logType);
              // Note: Using the default constructor doesn't set the current date/time
              var logInfo = new LogEventInfo(logLevel, logger.Name, message);
              logInfo.Exception = exception;
              logger.Log(logInfo);
          }

          private static Logger GetLogger(LogType logType)
          {
              var loggerName = logType.ToString();
              return LogManager.GetLogger(loggerName);
          }

          #endregion

          #region LogType

          private enum LogType
          {
              General
          }

          #endregion
      }

    The following configuration is similar to what is provided for each of my applications. The 'application' variable is all that differentiates the various applications in all of my environments; the rest has been standardized. Depending on your need to tweak this configuration while developing and debugging, this section could easily be pushed back into code, similar to the registering of our custom layout renderer.

      <?xml version="1.0"?>
      <configuration>
        <configSections>
          <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
        </configSections>
        <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <variable name="application" value="Example"/>
          <targets>
            <target type="EventLog"
                    name="EventLog"
                    source="${application}"
                    log="${application}"
                    layout="${message}${onexception: ${newline}${exceptiondetails}}"/>
            <target type="Mail"
                    name="Email"
                    smtpServer="smtp.example.local"
                    from="[email protected]"
                    to="[email protected]"
                    subject="(${machinename}) ${application}: ${level}"
                    body="Machine: ${machinename}${newline}Timestamp: ${longdate}${newline}Level: ${level}${newline}Message: ${message}${onexception: ${newline}${exceptiondetails}}"/>
          </targets>
          <rules>
            <logger name="*" minlevel="Debug" writeTo="EventLog" />
            <logger name="*" minlevel="Error" writeTo="Email" />
          </rules>
        </nlog>
      </configuration>

    Now go forward and create your custom exceptions without concern for including their custom properties in your exception logs and notifications.

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite includes numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values; by default, this is two decimal places. It is now also possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database, although it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour), and there is a performance improvement when querying the all-sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Fan running continuously on HP Pavilion G6 notebook with 12.04.1 LTS, help please?

    - by Ankit
    Fan is running continuously on my HP Pavilion G6 notebook with 12.04.1 LTS. My system specifications are: RAM: 6 GB; Graphics card: 1 GB (AMD Radeon 64XX); HDD: 540 GB. Please find a list of ACPI error logs from dmesg as follows: buffer@ankit:~$ dmesg | grep ACPI -i [ 0.000000] BIOS-e820: 000000009cebf000 - 000000009cfbf000 (ACPI NVS) [ 0.000000] BIOS-e820: 000000009cfbf000 - 000000009cfff000 (ACPI data) [ 0.000000] ACPI: RSDP 00000000000fe020 00024 (v02 HPQOEM) [ 0.000000] ACPI: XSDT 000000009cffe120 00084 (v01 HPQOEM SLIC-MPC 00000001 01000013) [ 0.000000] ACPI: FACP 000000009cffc000 000F4 (v04 HPQOEM SLIC-MPC 00000001 MSFT 01000013) [ 0.000000] ACPI: DSDT 000000009cfec000 0C132 (v01 HP 1670 00000000 MSFT 01000013) [ 0.000000] ACPI: FACS 000000009cf6c000 00040 [ 0.000000] ACPI: ASF! 000000009cffd000 000A5 (v32 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: HPET 000000009cffb000 00038 (v01 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: APIC 000000009cffa000 0008C (v02 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: MCFG 000000009cff9000 0003C (v01 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: SLIC 000000009cfeb000 00176 (v01 HPQOEM SLIC-MPC 00000001 MSFT 01000013) [ 0.000000] ACPI: SSDT 000000009cfea000 00D52 (v01 HP 1670 00001000 MSFT 01000013) [ 0.000000] ACPI: BOOT 000000009cfe8000 00028 (v01 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: ASPT 000000009cfe5000 00034 (v07 HP 1670 00000001 MSFT 01000013) [ 0.000000] ACPI: SSDT 000000009cfe4000 00780 (v01 HP 1670 00003000 INTL 20100121) [ 0.000000] ACPI: SSDT 000000009cfe3000 00996 (v01 HP 1670 00003000 INTL 20100121) [ 0.000000] ACPI: SSDT 000000009cfdd000 0219F (v01 HP 1670 00001000 INTL 20100121) [ 0.000000] ACPI: Local APIC address 0xfee00000 [ 0.000000] ACPI: PM-Timer IO Port: 0x408 [ 0.000000] ACPI: Local APIC address 0xfee00000 [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x00] disabled) [ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0]) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: IRQ0 used by override. [ 0.000000] ACPI: IRQ2 used by override. [ 0.000000] ACPI: IRQ9 used by override. 
[ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.005902] ACPI: Core revision 20110623 [ 0.536006] PM: Registering ACPI NVS region at 9cebf000 (1048576 bytes) [ 0.538423] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 0.538429] ACPI: bus type pci registered [ 0.656088] ACPI: Added _OSI(Module Device) [ 0.656094] ACPI: Added _OSI(Processor Device) [ 0.656098] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.656103] ACPI: Added _OSI(Processor Aggregator Device) [ 0.660335] ACPI: EC: Look up EC in DSDT [ 0.664416] ACPI: Executed 1 blocks of module-level executable AML code [ 0.728303] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 0.729536] ACPI: SSDT 000000009ce70798 00727 (v01 PmRef Cpu0Cst 00003001 INTL 20100121) [ 0.730622] ACPI: Dynamic OEM Table Load: [ 0.730630] ACPI: SSDT (null) 00727 (v01 PmRef Cpu0Cst 00003001 INTL 20100121) [ 0.760829] ACPI: SSDT 000000009ce71a98 00303 (v01 PmRef ApIst 00003000 INTL 20100121) [ 0.761992] ACPI: Dynamic OEM Table Load: [ 0.761998] ACPI: SSDT (null) 00303 (v01 PmRef ApIst 00003000 INTL 20100121) [ 0.792451] ACPI: SSDT 000000009ce6fd98 00119 (v01 PmRef ApCst 00003000 INTL 20100121) [ 0.793521] ACPI: Dynamic OEM Table Load: [ 0.793528] ACPI: SSDT (null) 00119 (v01 PmRef ApCst 00003000 INTL 20100121) [ 0.872981] ACPI: Interpreter enabled [ 0.872992] ACPI: (supports S0 S3 S4 S5) [ 0.873064] ACPI: Using IOAPIC for interrupt routing [ 0.882723] ACPI: EC: GPE = 0x16, I/O: command/status = 0x66, data = 0x62 [ 0.883072] ACPI: No dock devices found. [ 0.883084] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.883882] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) [ 0.924187] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] [ 0.924509] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP01._PRT] [ 0.924581] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP02._PRT] [ 0.924659] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP03._PRT] [ 0.924758] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEG0._PRT] [ 0.924973] pci0000:00: Requesting ACPI _OSC control (0x1d) [ 0.925064] pci0000:00: ACPI _OSC request failed (AE_ERROR), returned control mask: 0x1d [ 0.925069] ACPI _OSC control for PCIe not granted, disabling ASPM [ 0.930212] ACPI: PCI Interrupt Link [LNKA] (IRQs 1 3 4 5 6 10 *11 12 14 15) [ 0.930327] ACPI: PCI Interrupt Link [LNKB] (IRQs 1 3 4 5 6 10 *11 12 14 15) [ 0.930436] ACPI: PCI Interrupt Link [LNKC] (IRQs 1 3 4 5 6 10 *11 12 14 15) [ 0.930547] ACPI: PCI Interrupt Link [LNKD] (IRQs 1 3 4 5 6 *10 11 12 14 15) [ 0.930655] ACPI: PCI Interrupt Link [LNKE] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled. [ 0.930764] ACPI: PCI Interrupt Link [LNKF] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled. [ 0.930873] ACPI: PCI Interrupt Link [LNKG] (IRQs 1 3 4 5 6 10 *11 12 14 15) [ 0.930979] ACPI: PCI Interrupt Link [LNKH] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled. 
[ 0.932142] PCI: Using ACPI for IRQ routing [ 0.967119] pnp: PnP ACPI init [ 0.967151] ACPI: bus type pnp registered [ 0.968356] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) [ 0.968516] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active) [ 0.968586] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active) [ 0.968818] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active) [ 0.968915] pnp 00:04: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.969206] system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.969293] pnp 00:06: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.969418] pnp 00:07: Plug and Play ACPI device, IDs PNP0303 (active) [ 0.969528] pnp 00:08: Plug and Play ACPI device, IDs SYN1e3f SYN1e00 SYN0002 PNP0f13 (active) [ 0.969969] system 00:09: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.970574] system 00:0a: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.970617] pnp: PnP ACPI: found 11 devices [ 0.970622] ACPI: ACPI bus type pnp unregistered [ 1.138064] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 1.138331] ACPI: AC Adapter [ACAD] (off-line) [ 1.139068] ACPI: Lid Switch [LID0] [ 1.139176] ACPI: Power Button [PWRB] [ 1.139286] ACPI: Power Button [PWRF] [ 1.144637] ACPI: Thermal Zone [TZ01] (0 C) [ 1.144677] ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 1.144693] ACPI: Battery Slot [BAT0] (battery present) [ 1.206926] ACPI: Battery Slot [BAT0] (battery present) [ 13.176993] acpi device:1a: registered as cooling_device4 [ 13.179931] acpi device:1b: registered as cooling_device5 [ 13.180221] ACPI: Video Device [VGA] (multi-head: yes rom: no post: no) [ 13.219589] acpi device:20: registered as cooling_device6 [ 13.220851] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 1649.915134] i8042 aux 00:08: wake-up capability disabled by ACPI [ 1649.915147] i8042 kbd 00:07: wake-up capability enabled by ACPI [ 1650.931028] r8169 0000:03:00.0: wake-up capability enabled by ACPI [ 1650.954743] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI [ 1650.978733] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI [ 1651.010950] ACPI: Preparing to enter system sleep state S3 [ 1652.251505] ACPI: Low-level resume complete [ 1652.360953] ACPI: Waking up from system sleep state S3 [ 1652.427581] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI [ 1652.435579] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI [ 1652.437887] r8169 0000:03:00.0: wake-up capability disabled by ACPI [ 1652.506660] i8042 kbd 00:07: wake-up capability disabled by ACPI [ 1661.238234] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373) [ 1661.238253] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292) [ 1661.238268] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536) [ 3151.784288] i8042 aux 00:08: wake-up capability disabled by ACPI [ 3151.784301] i8042 kbd 00:07: wake-up capability enabled by ACPI [ 3152.797676] r8169 0000:03:00.0: wake-up capability enabled by ACPI [ 3152.821379] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI [ 3152.845367] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI [ 3152.877600] ACPI: Preparing to enter system sleep state S3 [ 3154.313213] ACPI: Low-level resume complete [ 3154.422297] ACPI: Waking up from system 
sleep state S3 [ 3154.489692] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI [ 3154.497667] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI [ 3154.505947] r8169 0000:03:00.0: wake-up capability disabled by ACPI [ 3154.568985] i8042 kbd 00:07: wake-up capability disabled by ACPI [ 3162.745149] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373) [ 3162.745168] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292) [ 3162.745183] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536) [ 6775.723501] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373) [ 6775.723519] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292) [ 6775.723535] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536) [10388.004760] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373) [10388.004778] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292) [10388.004801] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536) [10723.591930] i8042 aux 00:08: wake-up capability disabled by ACPI [10723.591942] i8042 kbd 00:07: wake-up capability enabled by ACPI [10724.607624] r8169 0000:03:00.0: wake-up capability enabled by ACPI [10724.631349] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI [10724.655339] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI [10724.687572] ACPI: Preparing to enter system sleep state S3 [10726.123176] ACPI: Low-level resume complete [10726.232181] ACPI: Waking up from system sleep state S3 [10726.303653] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI [10726.311648] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI [10726.315734] r8169 0000:03:00.0: wake-up capability disabled by ACPI [10726.379287] i8042 kbd 00:07: wake-up capability disabled by ACPI [10734.393523] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373) [10734.393542] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292) [10734.393557] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/ps Continuous sound from the fan is very annoying; any help would be highly appreciated.

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production-ready. Following is a small list I have compiled from experience, Sitecore documentation, communication with Sitecore engineers, etc. It is not meant to be technically complete and might not fit all environments.   Simple configurations that can make a difference: 1) Configure Sitecore Caches. This is the most straightforward and surest way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web]) should be configured as needed. You may start with smaller numbers and tune them over time. <cacheSizes hint="setting"> <data>300MB</data> <items>300MB</items> <paths>5MB</paths> <standardValues>5MB</standardValues> </cacheSizes> Tune the HTML, registry, and other cache sizes for your website.   <cacheSizes> <sites> <website> <html>300MB</html> <registry>1MB</registry> <viewState>10MB</viewState> <xsl>5MB</xsl> </website> </sites> </cacheSizes> Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config: <configuration> <cacheSize>300MB</cacheSize> <!--preload items that use this template--> <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template> <!--preload this item--> <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</item> <!--preload children of this item--> <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children> </configuration> Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf   2) Disable Analytics for the Shell Site <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />   3) Increase the check interval for the MemoryMonitorHook so it doesn't run every 5 seconds (the default). <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel"> <param desc="Threshold">800MB</param> <param desc="Check interval">00:05:00</param> <param desc="Minimum time between log entries">00:01:00</param> <ClearCaches>false</ClearCaches> <GarbageCollect>false</GarbageCollect> <AdjustLoadFactor>false</AdjustLoadFactor> </hook>   4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn't have access to the internet or you don't intend to use reverse DNS lookup. 
<setting name="Analytics.PerformLookup" value="false" />   5) Set the value of the "Media.MediaLinkPrefix" setting to "-/media": <setting name="Media.MediaLinkPrefix" value="-/media" /> Add the following line to the customHandlers section: <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers> Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/   6) Performance counters should be disabled in production if they are not being monitored: <setting name="Counters.Enabled" value="false" />   7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production. <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>-->   8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />   9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx   10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. make sure the ApplicationPool user is a member of the system "Performance Monitor Users" group on the server.   11) Disable WebDAV configurations on the CD server if they are not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html   12) Change Log4Net settings to only log errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>   13) Disable Analytics for any content item that doesn't add value, for example a page that redirects to another page.   14) When using Web User Controls, avoid registering them on the page the ASP.NET way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use the Sublayout web control instead – this way Sitecore caching can be leveraged: <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />   15) Avoid querying for all children recursively when all items are direct children. Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*"); //Use: Sitecore.Context.Database.GetItem("/sitecore/content/Home");   16) On IIS, enable static and dynamic content compression on CM and CD. More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx   17) Enable HTTP keep-alive and content expiration in IIS.   18) Use GUIDs when accessing items and fields instead of names or paths. It's faster and won't break your code when things get moved or renamed. 
Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}") Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"]; //is better than Context.Database.GetItem("/Home/MyItem") Context.Item.Fields["FieldName"]   Hope this helps.
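As a footnote to tip 18, here is a minimal sketch of how the pattern can be wrapped up (the GUIDs are the sample ones above; the ItemIds class is a hypothetical convention of mine, not part of the Sitecore API):

    using Sitecore.Data;
    using Sitecore.Data.Items;

    // Hypothetical convention: keep the GUIDs in one place as named constants,
    // so content moves and renames never break the calling code.
    public static class ItemIds
    {
        public static readonly ID MyItem = new ID("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}");
        public static readonly ID MyField = new ID("{89D38A8F-394E-45B0-826B-1A826CF4046D}");
    }

    public class GuidAccessExample
    {
        public string GetFieldValue()
        {
            // Resolve the item by ID rather than by path.
            Item item = Sitecore.Context.Database.GetItem(ItemIds.MyItem);
            if (item == null)
            {
                return string.Empty; // not present/published in this database
            }
            // Read the field by ID rather than by name.
            return item[ItemIds.MyField];
        }
    }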

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
    Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspx I've recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server's BAM event observation library.  In doing this, I noticed an issue with continuations.  Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly.  Careful inspection of my code confirmed that I was initializing and invoking the BAM interceptor correctly, so I was mystified.  However, I eventually found the problem.  It is a logical error in the BAM Interceptor code itself. The BAM Interceptor provides a useful mechanism for implementing dynamic tracking.  It supports configurable ‘track points’.  These are grouped into named ‘locations’.  BAM uses the term ‘step’ as a synonym for ‘location’.  Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc.  Each step defines a collection of track points. Understanding Steps The BAM Interceptor provides an abstract model for handling configuration of steps.  It doesn’t, however, define any specific configuration mechanism (e.g., config files, SSO, etc.).  It is up to the developer to decide how to store, manage and retrieve configuration data.  At run time, this configuration is used to register track points which then drive the BAM Interceptor. The full semantics of a step are not immediately clear from Microsoft’s documentation.  They represent a point in a business activity where BAM tracking occurs.  They are named locations in the code.  What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity.  The BAM Interceptor enforces this by throwing an error if no ‘start new’ or ‘continue’ track point is registered for a named location. This constraint implies that each step must be marked with an ‘end activity’ track point.  One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as ‘ended’ in order to ensure the right housekeeping is done in the database.  If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state.  A step, therefore, always represents an entire unit of work for a given activity or continuation ID.  For activities with continuation, each unit of work is termed a ‘fragment’. Instance and Fragment State Internally, the BAM Interceptor maintains state data at two levels.  First, it represents the overall state of the activity using a ‘trace instance’ token.  This token contains the name and ID of the activity together with a couple of state flags.  The second level of state represents a ‘trace fragment’.  As we have seen, a fragment of an activity corresponds directly to the notion of a ‘step’.  It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions. When handling continuations, the BAM Interceptor differentiates between ‘root’ fragments and other fragments.  Very simply, a root fragment represents the start of an activity.  Other fragments represent continuations.  This is where the logic breaks down.  
The BAM Interceptor loses state integrity for root fragments when continuations are defined. Initialization Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data.  The process starts by populating an Activity Interceptor Configuration object with an array of track points.  These can belong to different steps (aka ‘locations’) and can be registered in any order.  Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor.  The BAM Interceptor sets up a hash table of array lists.  Each step is represented by an array list, and each array list contains an ordered set of track points.  The BAM Interceptor represents track points as ‘executable’ components.  When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn.  Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.).  The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action. The Root of the Problem The logic breaks down in the Activity Interceptor Configuration.  Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token.  This provides the basic metadata for the activity as a whole.  It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed.  This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration. Each trace instance token is automatically initialised to represent a root fragment.  However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step.   If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.    This makes no sense.  The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration.  The Activity Interceptor Configuration is designed to hold track points for multiple steps.  The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed.  Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed.  This is clearly an error in logic. The problem causes havoc when the BAM Interceptor is used with continuation.  Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes!   This has nothing to do with the root and the continuation being in the same process.  It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed. The Workaround Fortunately, it is easy to work around the bug.  The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.  
This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group.  In my case, the first approach was required.  Here is what the amended code looks like: // Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used // for each location. The following code extracts only those track points for a given step name (location). var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints                       where (string)tp.Location == bamStepName                       select tp; var bamActivityInterceptorConfig =     new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName); foreach (var trackPoint in trackPointGroup) {     switch (trackPoint.Type)     {         case TrackPointType.Start:             bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);             break; etc… I’m using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class.  As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.
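For the second approach — grouping the configured track points by step and creating one configuration per group — the same LINQ query can be reshaped along these lines. This is only a sketch that assumes the same TrackPoint model as above; CreateConfigurationFor is a hypothetical helper wrapping the per-track-point registration switch shown earlier:

    // Build one ActivityInterceptorConfiguration per step (location), keyed by step
    // name, so each configuration carries its own trace instance token.
    var configsByStep = bamActivity.TrackPoints
        .Cast<ResolutionService.TrackPoint>()
        .GroupBy(tp => (string)tp.Location)
        .ToDictionary(
            step => step.Key,
            step => CreateConfigurationFor(activityName, step)); // hypothetical helper

    // At each named location, apply only that step's configuration to the interceptor.
    var interceptor = new Microsoft.BizTalk.Bam.EventObservation.BAMInterceptor();
    configsByStep[bamStepName].UpdateInterceptor(interceptor);

Either way, the essential point is the same: never let a root step and a continuation step share a single trace instance token.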

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome) but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior. The scenario for this is a bit bizarre: Running on Windows 8 Call Process.Start() (or ShellExecute() in Win32 API) with a URL or an HTML file Run 'As Administrator' (works fine under non-elevated user account!) or with UAC off A browser other than Internet Explorer is set as your Default Web Browser Talk about a weird scenario: Something that doesn't work when you run as an Administrator which is supposed to have rights to everything on the system! Instead running under an Admin account - either elevated with a User Account Control prompt or even when running as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution to this, I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights. Try it out Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser and even then it may not fail depending on how you installed your browser. To see if this is a problem create a small Console application and call Process.Start() with a URL in it: namespace Win8ShellBugConsole { class Program { static void Main(string[] args) { Console.WriteLine("Launching Url..."); Process.Start("http://microsoft.com"); Console.Write("Press any key to continue..."); Console.ReadKey(); Console.WriteLine("\r\n\r\nLaunching image..."); Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg")); Console.Write("Press any key to continue..."); Console.ReadKey(); } } } Compile this code. Then execute the code from Explorer (not from Visual Studio because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: Now when you run it you get a nice error when Process.Start() is fired: The same happens if you are running with User Account Control off altogether - ie. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type or even starting a new EXE locally (ie. Process.Start("c:\windows\notepad.exe")). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET but the same happens in Win32 Applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is No Shell Association found. What's the Deal? It turns out the problem has to do with the way browsers are registering themselves on Windows. 
Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser'. It still didn't work. Neither did using the Set Program Associations dialog which lets you assign what extensions are mapped to by a given application. Each application provides a set of extension/moniker mappings that it supports and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I did eventually manage to get FireFox to work, but not any of the others. What Works? Reinstall the Browser In the end I decided on the hard-core pull-the-plug solution: Totally uninstall and re-install Chrome in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. And even though the version of Chrome I was running before uninstalling and reinstalling is the same as I'm running now, after the reinstall it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (ie. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? It also doesn't quite explain why registering the extensions from Chrome while running as Administrator (using Set as Default Browser) didn't help. But in the end it works… Not so fast It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls both in regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for the two modes. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Also Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this. 
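If you can't control what's installed on your users' machines, a defensive fallback in application code can at least keep links working. This is only a sketch of a workaround, not a fix for the underlying registration problem - bouncing the URL through explorer.exe sidesteps the failing shell association lookup:

    using System;
    using System.Diagnostics;

    public static class ShellUtils
    {
        // Opens a URL in the default browser; if the direct ShellExecute-based
        // launch fails (as it does when elevated with a broken browser
        // registration), fall back to explorer.exe, which resolves the
        // default browser itself.
        public static void GoUrl(string url)
        {
            try
            {
                Process.Start(url);
            }
            catch (Exception)
            {
                Process.Start("explorer.exe", url);
            }
        }
    }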
Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game. Step 1: Buy a Mac I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There's a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least. Step 2: Register as an Apple Developer You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there's a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time. Step 3: Learn to love the square brackets! Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(…) turning into: NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]]; Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which coupled with a huge amount of content on Stack Overflow means you'll soon get to grips with the API. 
You'll need to get used to some terminology changes, though; here's an incomplete approximation: Coming from a .NET background, there's some luxuries you no longer have developing Objective C in XCode: Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now. ReSharper. Or, more generally, very much refactoring support. The not-many-keystrokes to rename a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode. Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective C gets you a long way without too much pain. Circular references are their usual problematic selves, though. Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil (see translation table above) back. Which brings me on to… Calling a method on a nil object isn't a failure - it just returns nil itself! There's many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp. Less specifically, I found that there's more chance of code failing at runtime rather than getting caught at compile-time: using the @selector(…) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try and call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing. Step 4: Submit to the App Store Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There's a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website. Step 5: Profit! Actually, that wasn't the idea here: Stacks & Heaps is free; there's no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment, to something completely different. We don't know whether in a year's time, the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling. 
What better way to do that than do it ourselves, and get to write a cool game in the process?

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and the business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline: • Taking the training wheels off: Accelerating the Business with Oracle IAM The explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs. • "Necessity is the mother of invention": Technical solutions developed in the field Well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations. • The Art & Science of Performance Tuning of Oracle IAM 11gR2 Real-world examples of performance tuning with Oracle IAM • Nowhere to go but up: Extending the benefits of accelerated IAM Anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM Let's get started … by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons, and to import or update user accounts and attributes. Further, IT organizations are operating as shared services with SLAs similar to telephone carrier levels expected by their "clients". Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Many of the expectations built around asynchronous systems and batched updates are no longer adequate, and the number and types of users are growing. When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work … who knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly. 
Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services to a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler … logging into custom desktop apps to request approvals, or going through email- or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application … i.e. to feel like the application they are using. Citizen- and customer-facing applications have evolved from everywhere, with custom applications, 3rd-party tools, and applications merged in from acquired entities or 3rd-party OEMs resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, etc. Often the designers/developers are no longer accessible and the documentation is limited. Bringing together underlying directories to scale for growth and improve user experience is critical for revenue … but also for operations. Job functions are more dynamic … take the Olympics for example. Endless organizations, from corporations broadcasting, endorsing, or marketing through the event … to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans. IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure. Then it needs to be disabled just as quickly as users go back to their previous responsibilities. On a more technical level … optimized system directories, tuning guidelines, and parameters are needed by businesses today. Businesses need to make the right choices (such as virtual directories) and weigh the considerations in choosing the correct architectural approach (virtual, direct, replicated, and tuned); the challenge is that businesses need to assess and choose the correct architectural patterns (centralized, virtualized, and distributed). Today's business organizations have very complex heterogeneous enterprises that contain diverse and multifaceted information. In today's ever-changing global landscape, the strategic end goal for business in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and the users granted access to them, to grow exponentially. Businesses need to deploy an IAM system that can account for the demands for authentication and authorization across these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services. 
Access management needs to handle traditional web-based access as well as new innovations around mobile, and to address insufficient governance processes which can lead to rogue identity accounts, which can then become a source of vulnerabilities within a business's identity platform. Risk-based decisions present challenges to business: an adaptive risk model must make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and all other organizational processes will languish. A single server location presents not only network concerns for a distributed user base, but also identity challenges. The network risks are centered on the latency of the long trip that the traffic has to take. Other risks concern performance and availability: if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management solutions.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating to ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers. Part 1: Exposing the on-premise service In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1. Configuring Azure Service Bus Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you’re in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service: Authenticating and authorizing to ACS When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group. 
Edit the rule group and click Add to add this new rule. The values to use are: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: serviceProvider Output claim type: net.windows.servicebus.action Output claim value: Listen When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.: <azure namespace="sixeyed-ipasbr"> <!-- ACS credentials for the listening service (Part1):--> <service identityName="serviceProvider" symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/> </azure> Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior: <behavior name="SharedSecret"> <transportClientEndpointBehavior credentialType="SharedSecret"> <clientCredentials> <sharedSecret issuerName="serviceProvider" issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/> </clientCredentials> </transportClientEndpointBehavior> </behavior> , and your service namespace in the Azure endpoint: <!-- Azure Service Bus endpoints --> <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net" binding="netTcpRelayBinding" contract="Sixeyed.Ipasbr.Services.IFormatService" behaviorConfiguration="SharedSecret"> </endpoint> The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure. Testing the service locally As well as an Azure endpoint, the service has a WebHttpBinding for local REST access: <!-- local REST endpoint for internal use --> <endpoint address="rest" binding="webHttpBinding" behaviorConfiguration="RESTBehavior" contract="Sixeyed.Ipasbr.Services.IFormatService" /> Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding. Setting up network access to Azure But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay. 
If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark: System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints). System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com mscrl.microsoft.com We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
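As a footnote, it can help to run the listener registration in isolation while you negotiate with the network team, rather than debugging through IIS activation. The following is only a sketch of a minimal self-hosted version - it assumes the same service contract, namespace, identity and key as above, and a Service Bus SDK recent enough to expose TokenProvider (older SDKs use the SharedSecret credential style shown in the config):

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;
    using Sixeyed.Ipasbr.Services;

    class RelayHost
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(FormatService));

            // Same relay address the Web.config declares:
            // sb://sixeyed-ipasbr.servicebus.windows.net/net
            var address = ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net");
            var endpoint = host.AddServiceEndpoint(
                typeof(IFormatService), new NetTcpRelayBinding(), address);

            // Authenticate as the serviceProvider identity set up in ACS.
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                    "serviceProvider", "nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4=")
            });

            host.Open();
            Console.WriteLine("Listening on {0} - press [Enter] to exit", address);
            Console.ReadLine();
            host.Close();
        }
    }

If this console host opens cleanly but the IIS-hosted service never registers, the problem is activation or the app pool identity rather than the network path.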

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28  | Next Page >