Search Results


  • Facebook connect displaying invite friends dialog and closing on completion

    - by Dougnukem
    I'm trying to create a Facebook Connect application that displays a friend invite dialog within the page using Facebook's JavaScript API (through an FBMLPopupDialog). The trouble is that to display a friend invite dialog you use a multi-friend form, which requires an action="url" attribute giving the URL to redirect your page to when the user completes or skips the form. The problem is that I want to just close the FBMLPopupDialog (the same behavior as if the user had hit the 'X' button on the popup dialog). The best I can do is redirect the user back to the page they were on (basically a reload), but then they lose all AJAX/Flash application state. I'm wondering if any Facebook Connect developers have run into this issue and have a good way to simply display a friend invite "lightbox" dialog within their website without having to "refresh" or "redirect" when the user finishes. The Facebook Connect JS API provides FB.Connect.inviteConnectUsers, which shows a nice dialog but only connects existing users of your application who also have a Facebook account and haven't connected yet. http://bugs.developers.facebook.com/show_bug.cgi?id=4916

        function fb_inviteFriends() {
            //Invite users
            log("Inviting users...");
            FB.Connect.requireSession(
                function() {
                    //Connect success
                    var uid = FB.Facebook.apiClient.get_session().uid;
                    log('FB CONNECT SUCCESS: ' + uid);
                    //Invite users
                    log("Inviting users...");
                    //Update server with connected account
                    updateAccountFacebookUID();
                    var fbml = fb_getInviteFBML();
                    var dialog = new FB.UI.FBMLPopupDialog("Weblings Invite", fbml);
                    //dialog.setFBMLContent(fbml);
                    dialog.setContentWidth(650);
                    dialog.setContentHeight(450);
                    dialog.show();
                },
                //Connect cancelled
                function() {
                    //User cancelled the connect
                    log("FB Connect cancelled:");
                }
            );
        }

        function fb_getInviteFBML() {
            var uid = FB.Facebook.apiClient.get_session().uid;
            var fbml = "";
            fbml = '<fb:fbml>\n' +
                '<fb:request-form\n' +
                //Redirect back to this page
                ' action="' + document.location + '"\n' +
                ' method="POST"\n' +
                ' invite="true"\n' +
                ' type="Weblings Invite"\n' +
                ' content="I need your help to discover all the Weblings and save the Internet! WebWars: Weblings is a cool new game where we can collect fantastic creatures while surfing our favorite websites. Come find the missing Weblings with me!' +
                //Callback the server with the appropriate Webwars Account URL
                ' <fb:req-choice url=\'' + WebwarsFB.WebwarsAccountServer + '/SplashPage.aspx?action=ref&reftype=Facebook\' label=\'Check out WebWars: Weblings\' />"\n' +
                '>\n' +
                ' <fb:multi-friend-selector\n' +
                ' rows="2"\n' +
                ' cols="4"\n' +
                ' bypass="Cancel"\n' +
                ' showborder="false"\n' +
                ' actiontext="Use this form to invite your friends to connect with WebWars: Weblings."/>\n' +
                ' </fb:request-form>' +
                ' </fb:fbml>';
            return fbml;
        }


  • Memory leak on CollectionView.View.Refresh

    - by Dabblernl
    I have defined my binding thus:

        <TreeView ItemsSource="{Binding UsersView.View}" ItemTemplate="{StaticResource MyDataTemplate}" />

    The CollectionViewSource is defined thus:

        private ObservableCollection<UserData> users;
        public CollectionViewSource UsersView { get; set; }

        UsersView = new CollectionViewSource { Source = users };
        UsersView.SortDescriptions.Add(new SortDescription("IsLoggedOn", ListSortDirection.Descending));
        UsersView.SortDescriptions.Add(new SortDescription("Username", ListSortDirection.Ascending));

    So far, so good; this works as expected: the view shows first the users that are logged on, in alphabetical order, then the ones that are not. However, the IsLoggedOn property of the UserData is updated every few seconds by a BackgroundWorker thread, and then the code calls UsersView.View.Refresh(); on the UI thread. Again this works as expected: users that log on are moved from the bottom of the view to the top and vice versa. However: every time I call the Refresh method on the view, the application hoards 3.5 MB of extra memory, which is only released after application shutdown (or after an OutOfMemoryException...). I did some research, and below is a list of fixes that did NOT work:
    - The UserData class implements INotifyPropertyChanged.
    - Changing the underlying users collection does not make any difference at all: any IEnumerable<UserData> as a source for the CollectionViewSource causes the problem.
    - Changing the CollectionViewSource to a List<UserData> (and refreshing the binding), or inheriting from ObservableCollection to get access to the underlying Items collection and sort that in place, does not work.
    I am out of ideas! Help?
    EDIT: I found it: the resource MyDataTemplate contains a Label that is bound to a UserData object to show one of its properties, the UserData objects being handed down by the TreeView's ItemsSource. The Label has a ContextMenu defined thus:

        <ContextMenu Background="Transparent" Width="325" Opacity=".8" HasDropShadow="True">
            <PrivateMessengerUI:MyUserData IsReadOnly="True">
                <PrivateMessengerUI:MyUserData.DataContext>
                    <Binding Path="."/>
                </PrivateMessengerUI:MyUserData.DataContext>
            </PrivateMessengerUI:MyUserData>
        </ContextMenu>

    The MyUserData object is a UserControl that shows all properties of the UserData object. In this way the user first sees only one piece of data for a user and, on a right click, sees all of it. When I remove the MyUserData UserControl from the DataTemplate the memory leak disappears! How can I still implement the behaviour specified above?


  • Single Sign On for a Web App

    - by Jeremy Goodell
    I have been trying to understand how this problem is solved for over a month now. I really need to come up with a general approach that works -- I'm basically the only resource who can do it. I have a theory, but I'm just not sure it's the easiest (or correct) approach, and I haven't been able to find any information to support my ideas. Here's the scenario:
    1) You have a complex web application that offers secure content on a subscription basis.
    2) Users are required to log in to your application with a user name and password.
    3) You sell to large corporations, which already have a corporate authentication technology (for example, Active Directory).
    4) You would like to integrate with the corporate authentication mechanism to allow their users to log onto your Web App without having to enter their user name and password.
    Now, any solution you come up with will have to provide a mechanism for:
    - adding new users
    - removing users
    - changing user information
    - allowing users to log in
    Ideally, all these would happen "automagically" when the corporate customer made the corresponding changes to their own authentication. Now, I have a theory that the way to do this (at least for Active Directory) would be for me to write a client-side app that integrates with the customer's Active Directory to track the targeted changes, and then communicate those changes to my Web App. I think that if this communication were done via Web Services offered by my web app, then it would maintain an unhackable level of security, which would obviously be a requirement for these corporate customers. I've found some information about a Microsoft product called Active Directory Federation Services (ADFS), which may or may not be the right approach for me. It seems to be a bit bulky and has some requirements that might not work for all customers. For other existing ID scenarios (like Athens and Shibboleth), I don't think a client application is necessary. It's probably just a matter of tying into the existing ID services. I would appreciate any advice anyone has on anything I've mentioned here. In particular, if you can tell me whether my theory is correct about providing a client-side app that communicates with server-side Web Services, or if I'm totally going in the wrong direction. Also, if you could point me at any web sites or articles that explain how to do this, I'd really appreciate it. My research has not turned up much so far. Finally, if you could let me know of any Web applications that currently offer this service (particularly as tied to a corporate Active Directory), I would be very grateful. I am wondering if other B2B Web apps, like salesforce.com or hoovers.com, offer a similar service for their corporate customers. I hate being in the dark and would greatly appreciate any light you can shed ... Jeremy
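    A minimal sketch of the theory described above (a small sync agent that watches the customer's Active Directory and pushes changes to the Web App's web service), written in Python with the ldap3 and requests libraries; every host name, credential, filter window, and endpoint here is a hypothetical placeholder, not a confirmed design:

        from ldap3 import Server, Connection, SUBTREE
        import requests

        # Hypothetical AD connection details for the sync agent.
        server = Server("ldap://corp-dc.example.com")
        conn = Connection(server, user="EXAMPLE\\syncagent", password="secret", auto_bind=True)

        # Pull accounts changed since the last run (timestamp is a placeholder).
        conn.search(
            search_base="OU=Staff,DC=example,DC=com",
            search_filter="(&(objectClass=user)(whenChanged>=20240101000000.0Z))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "mail", "displayName", "userAccountControl"],
        )

        for entry in conn.entries:
            payload = {
                "username": str(entry.sAMAccountName),
                "email": str(entry.mail),
                "name": str(entry.displayName),
                # Bit 0x2 of userAccountControl marks a disabled account.
                "disabled": bool(int(str(entry.userAccountControl)) & 0x2),
            }
            # Hypothetical web-service endpoint exposed by the subscription app.
            requests.post("https://webapp.example.com/api/user-sync", json=payload, timeout=30)

    Run from a scheduler on the customer's side, a loop like this covers the add/remove/change cases; the log-in case is the part that usually ends up with a federation protocol (ADFS/SAML/Shibboleth) rather than a sync agent.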


  • Copied the Reachability test from Apple, but the linker gives a failure

    - by nico
    i have tried to use the reachability-project published by apple to detect a reachability in an own example. i copied the most initialization, but i get this failure in the linker: Ld build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test normal armv6 cd /Users/uid04100/Documents/TEST setenv IPHONEOS_DEPLOYMENT_TARGET 3.1.3 setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk -L/Users/uid04100/Documents/TEST/build/Debug-iphoneos -F/Users/uid04100/Documents/TEST/build/Debug-iphoneos -filelist /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test.LinkFileList -dead_strip -miphoneos-version-min=3.1.3 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test Undefined symbols: "_SCNetworkReachabilitySetCallback", referenced from: -[Reachability startNotifer] in Reachability.o "_SCNetworkReachabilityCreateWithAddress", referenced from: +[Reachability reachabilityWithAddress:] in Reachability.o "_SCNetworkReachabilityScheduleWithRunLoop", referenced from: -[Reachability startNotifer] in Reachability.o "_SCNetworkReachabilityGetFlags", referenced from: -[Reachability connectionRequired] in Reachability.o -[Reachability currentReachabilityStatus] in Reachability.o "_SCNetworkReachabilityUnscheduleFromRunLoop", referenced from: -[Reachability stopNotifer] in Reachability.o "_SCNetworkReachabilityCreateWithName", referenced from: +[Reachability reachabilityWithHostName:] in Reachability.o ld: symbol(s) not found collect2: ld returned 1 exit status my delegate.h: import @class Reachability; @interface testAppDelegate : NSObject { UIWindow *window; UINavigationController *navigationController; Reachability* hostReach; Reachability* internetReach; Reachability* wifiReach; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) IBOutlet UINavigationController *navigationController; @end my delegate.m: import "testAppDelegate.h" import "SecondViewController.h" import "Reachability.h" @implementation testAppDelegate @synthesize window; @synthesize navigationController; (void) updateInterfaceWithReachability: (Reachability*) curReach { if(curReach == hostReach) { BOOL connectionRequired= [curReach connectionRequired]; if(connectionRequired) { //in these brackets schould be some code with sense, if i´m getting it to run } else { } } if(curReach == internetReach) { } if(curReach == wifiReach) { } } //Called by Reachability whenever status changes. - (void) reachabilityChanged: (NSNotification* )note { Reachability* curReach = [note object]; NSParameterAssert([curReach isKindOfClass: [Reachability class]]); [self updateInterfaceWithReachability: curReach]; } (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after application launch // Observe the kNetworkReachabilityChangedNotification. When that notification is posted, the // method "reachabilityChanged" will be called. 
// [[NSNotificationCenter defaultCenter] addObserver: self selector: @selector(reachabilityChanged:) name: kReachabilityChangedNotification object: nil]; //Change the host name here to change the server your monitoring hostReach = [[Reachability reachabilityWithHostName: @"www.apple.com"] retain]; [hostReach startNotifer]; [self updateInterfaceWithReachability: hostReach]; internetReach = [[Reachability reachabilityForInternetConnection] retain]; [internetReach startNotifer]; [self updateInterfaceWithReachability: internetReach]; wifiReach = [[Reachability reachabilityForLocalWiFi] retain]; [wifiReach startNotifer]; [self updateInterfaceWithReachability:wifiReach]; [window addSubview:[navigationController view]]; [window makeKeyAndVisible]; } (void)dealloc { [navigationController release]; [window release]; [super dealloc]; } @end


  • Custom property editors do not work for request parameters in Spring MVC?

    - by dvd
    Hello, I'm trying to create a multi-action web controller using Spring annotations. This controller will be responsible for adding and removing user profiles and preparing reference data for the JSP page. @Controller public class ManageProfilesController { @InitBinder public void initBinder(WebDataBinder binder) { binder.registerCustomEditor(UserAccount.class,"account", new UserAccountPropertyEditor(userManager)); binder.registerCustomEditor(Profile.class, "profile", new ProfilePropertyEditor(profileManager)); logger.info("Editors registered"); } @RequestMapping("remove") public void up( @RequestParam("account") UserAccount account, @RequestParam("profile") Profile profile) { ... } @RequestMapping("") public ModelAndView defaultView(@RequestParam("account") UserAccount account) { logger.info("Default view handling"); ModelAndView mav = new ModelAndView(); logger.info(account.getLogin()); mav.addObject("account", account); mav.addObject("profiles", profileManager.getProfiles()); mav.setViewName(view); return mav; } ... } Here is the relevant part of my webContext.xml file: <context:component-scan base-package="ru.mirea.rea.webapp.controllers" /> <context:annotation-config/> <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="mappings"> <value> ... /home/users/manageProfiles=users.manageProfilesController </value> </property> </bean> <bean id="users.manageProfilesController" class="ru.mirea.rea.webapp.controllers.users.ManageProfilesController"> <property name="view" value="home\users\manageProfiles"/> </bean> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" /> However, when I open the mapped URL, I get an exception: java.lang.IllegalArgumentException: Cannot convert value of type [java.lang.String] to required type [ru.mirea.rea.model.UserAccount]: no matching editors or conversion strategy found I use Spring 2.5.6 and plan to move to Spring 3.0 in the not-too-distant future. However, according to this JIRA https://jira.springsource.org/browse/SPR-4182 it should already be possible in Spring 2.5.1. Debugging shows that the @InitBinder method is called correctly. What am I doing wrong?


  • PHP/Java Bridge java.lang.NoSuchMethodException

    - by m1sk
    I have setup PHP/Java Bridge with working examples in netbeans tomcat directory. What doesnt work is using custom JAR Here is my code: package com.micha; public class Hello1Bean { public Hello1Bean() {} String hi() {return "This is my hello message";} String hello(String name) {return "Hello" + name;} } And the php code <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title></title> </head> <body> <?php require_once ("java/Java.inc"); $world = new Java("com.micha.Hello1Bean"); echo java_values($world->hi()); echo "Hello Working Thingy\n\n"; ?> </body> </html> When I check http://localhost:8084/JavaBridge/mytest.php: javax.servlet.ServletException: java.lang.RuntimeException: PHP Fatal error: Uncaught [[o:Exception]:"java.lang.Exception: Invoke failed: [[o:Hello1Bean]]->hi. Cause: java.lang.NoSuchMethodException: hi(). Candidates: [] VM: 1.6.0_25@http://java.sun.com/" at: #-6 php.java.bridge.JavaBridge.checkM(JavaBridge.java:1085) #-5 php.java.bridge.JavaBridge.Invoke(JavaBridge.java:1024) #-4 php.java.bridge.Request.handleRequest(Request.java:417) #-3 php.java.bridge.Request.handleRequests(Request.java:500) #-2 php.java.bridge.http.ContextRunner.run(ContextRunner.java:145) #-1 php.java.bridge.ThreadPool$Delegate.run(ThreadPool.java:60) #0 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(232): java_ThrowExceptionProxyFactory->getProxy(2, 'com.micha.Hello...', 'T', true) #1 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(360): java_Arg->getResult(true) #2 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(366): java_Client->getWrappedResult(true) #3 C:\Users\Micha\.netbean in C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc on line 195 php.java.servlet.fastcgi.FastCGIServlet.handle(FastCGIServlet.java:499) php.java.servlet.fastcgi.FastCGIServlet.doGet(FastCGIServlet.java:521) javax.servlet.http.HttpServlet.service(HttpServlet.java:621) javax.servlet.http.HttpServlet.service(HttpServlet.java:722) org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:393) php.java.servlet.PhpCGIFilter.doFilter(PhpCGIFilter.java:126) Before I had a ClassNotFoundException so it knows there is a a class but for some odd reason I cant call the function. ( if I replace "com.micha.Hello1Bean" with some class that doesnt exist then I get that exception)


  • SELECT SQL statement problem when getting info from an accdb in VB.Net

    - by Shane Fagan
    Hi again all, im getting the error below for this SQL statement in VB.Net SQLString = "SELECT AllPropertyDetails.PropertyID, Street, Town, County, Acres, Quotas, ResidenceDetails, Status, HighestBid, AskingPrice FROM AllPropertyDetails " SQLString += "INNER JOIN Land ON AllPropertyDetails.PropertyID = Land.PropertyID " SQLString += "WHERE Deleted = False " If PriceRadioButton.Checked = True Then SQLString += "ORDER BY AskingPrice ASC" ElseIf AcresRadioButton.Checked = True Then SQLString += "ORDER BY Acres ASC" End If Any ideas why its not working? The fields in the DB and the table names seem ok but its not working :/ System.InvalidOperationException was unhandled Message="An error occurred creating the form. See Exception.InnerException for details. The error is: No value given for one or more required parameters." Source="AuctioneerProject" StackTrace: at AuctioneerProject.My.MyProject.MyForms.Create__Instance__[T](T Instance) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 190 at AuctioneerProject.My.MyProject.MyForms.get_LandReport() at AuctioneerProject.ReportsMenu.LandButton_Click(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\ReportsMenu.vb:line 4 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at AuctioneerProject.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 81 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.OleDb.OleDbException ErrorCode=-2147217904 Message="No value given for one or more required parameters." 
Source="Microsoft Office Access Database Engine" StackTrace: at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OleDb.OleDbCommand.ExecuteReader() at AuctioneerProject.LandReport.load_Land() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 37 at AuctioneerProject.LandReport.PriceRadioButton_CheckedChanged(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 79 at System.Windows.Forms.RadioButton.OnCheckedChanged(EventArgs e) at System.Windows.Forms.RadioButton.set_Checked(Boolean value) at AuctioneerProject.LandReport.InitializeComponent() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.designer.vb:line 40 at AuctioneerProject.LandReport..ctor() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 5 InnerException:


  • Am I using EJBs properly?

    - by kgrad
    I am using a Java EE 6 stack including JPA 2.0, JSF 2.0, EJB 3.1, etc. The way my architecture is set up is the following: I have JPA-annotated DAOs using Hibernate as my JPA provider. I have JSF managed beans which correspond to my Facelets/XHTML pages. I have EJBs that handle all of my database requests. My XHTML pages have JSF EL which makes calls to my managed beans. My managed beans contain references to my DAO entities, which are managed by EJBs. For example, I have a User entity which is mapped to a DB table. I have a user EJB which handles all CRUD operations that return Users. I have a page that edits a user. The high-level workflow would be: navigate to the user edit page; EL calls a method located in the managed bean that loads a user; the method calls userEJB.loadUser(user) from the EJB to get the user from the database. The user is edited and submitted; a function is called in the managed bean which calls a function in the EJB to save the user, etc. I am running into issues accessing my data within my JSF pages using EJBs. I am having a lot of problems with lazy initialization errors, which I believe is due to how I have set things up. For example, I have a Client entity that has a List of users which is lazily loaded. In order to get a client I call a method in my EJB which goes to the database, finds a client and returns it. Later on I wish to access this client's list of users; in order to do so I have to go back to the EJB by calling some sort of method to load those users (since they are lazily loaded). This means that I have to create a method such as

        public List<User> getUserListByClient(Client c) {
            c = em.merge(c);
            return c.getUserList();
        }

    The only purpose of this method is to load the users (and I'm not even positive this approach is good or works). If I were doing session management myself, I would just leave the session open for the entire request and access the property directly; this would be fine as the session would be open anyway. There seems to be this one extra layer of indirection in EJBs which is making things difficult for me. I do like EJBs, as I like the fact that they are controlled by the container, pooled, offer transaction management for free, etc. However, I get the feeling that I am using them incorrectly, or I have set up my JSF app incorrectly. Any feedback would be greatly appreciated. Thanks,


  • What is a good dumbed-down, safe template system for PHP?

    - by Wilhelm
    (Summary: My users need to be able to edit the structure of their dynamically generated web pages without being able to do any damage.) Greetings, ladies and gentlemen. I am currently working on a service where customers from a specific demographic can create a specific type of web site and fill it with their own content. The system is written in PHP. Many of the users of this system wish to edit how their particular web site looks, or, more commonly, have a designer do it for them. Editing the CSS is fine and dandy, but sometimes that's not enough. Sometimes they want to shuffle the entire page structure around by editing the raw HTML of the dynamically created web pages. The templating system used by WordPress is, as far as I can see, perfect for my use, except for one thing which is critically important. In addition to being able to edit how comments are displayed or where the menu goes, someone editing a template can have that template execute arbitrary PHP code. As the same codebase runs all these different sites, with all content in the same database, allowing my users to run arbitrary code is clearly out of the question. So what I need is a dumbed-down, idiot-proof templating system where my users can edit most of the page structure on their own, pulling in the dynamic sections wherever, without being able to even echo 1+1;. Observe the following pseudocode:

        <!DOCTYPE html>
        <title><!-- $title --></title>
        <!-- header() -->
        <!-- menu() -->
        <div>Some random custom crap added by the user.</div>
        <!-- page_content() -->

    That's the degree of power I'd like to grant my users. They don't need to do their own loops or calculations or anything. Just include my variables and functions and leave the rest to me. I'm sure I'm not the only person on the planet that needs something like this. Do you know of any ready-made templating systems I could use? Thanks in advance for your reply.
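    A minimal sketch of the whitelist-substitution idea such a dumbed-down template system implies, shown in Python for brevity (the same pattern maps directly onto PHP's preg_replace_callback); the placeholder names and handlers are hypothetical, and nothing in the user-supplied template is ever executed, only searched for known tokens:

        import re

        # Whitelisted placeholders: the template may reference these and nothing else.
        ALLOWED = {
            "title": lambda ctx: ctx["title"],
            "header()": lambda ctx: "<header>%s</header>" % ctx["site_name"],
            "menu()": lambda ctx: "<nav>...</nav>",
            "page_content()": lambda ctx: ctx["content"],
        }

        PLACEHOLDER = re.compile(r"<!--\s*\$?([\w()]+)\s*-->")

        def render(template: str, ctx: dict) -> str:
            def substitute(match: re.Match) -> str:
                name = match.group(1)
                handler = ALLOWED.get(name)
                # Unknown placeholders are left untouched (or could raise an error).
                return handler(ctx) if handler else match.group(0)
            return PLACEHOLDER.sub(substitute, template)

        template = """<!DOCTYPE html>
        <title><!-- $title --></title>
        <!-- header() -->
        <!-- menu() -->
        <div>Some random custom crap added by the user.</div>
        <!-- page_content() -->"""

        print(render(template, {"title": "My shop", "site_name": "Example", "content": "<p>Hello</p>"}))

    Because the template text is only ever scanned for whitelisted tokens, a user typing echo, loops, or any other code into it gets plain markup back, never execution.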


  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlasshFish V3 server. I'm working under Windows Vista 32 Bits. First, it won't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it: 127.0.0.1 localhost And when I run the GlasshFish V3 Server, no error is showing but only "INFOs" are displayed: 3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main INFO: Launching GlassFish on Felix platform Welcome to Felix ================ INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010 INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010 INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127 INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms) INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116 INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155 INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160 INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159 INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4 INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 INFO: Binding RMI port to *:35165 INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159 INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null} INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null} INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160 INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null} And there is no message such as "Glassfish started"! So, when I try to access to the admin web interface: localhost:4848 or localhost:8080 or localhost:8181 , It doesn't work. What should I do? Thank you very much, Regards.


  • "Error #1006: getAttributeByQName is not a function." Flex 2.0.1 hotfix 2

    - by Deveti Putnik
    Hi, guys! I am working on some old Flex project (Flex 2.0.1 hotfix 2) and I am rookie in Flex programming. So, I wrote code for accessing some ASP.NET web service: <?xml version="1.0" encoding="utf-8"?> [Bindable] public var users:ArrayOfUser; private function buttonClicked():void { mx.controls.Alert.show(dataService.wsdl); dataService.UserGetAll.send();/ } public function dataHandler(event:ResultEvent):void { Alert.show("alo"); var response:ResponseUsers = event.result as ResponseUsers; if (response.responseCode != ResponseCodes.SUCCESS) { mx.controls.Alert.show("Error: " + response.responseCode.toString()); return; } users = response.users; } ]]> <mx:Button label="Click me!" click="buttonClicked()"/> And this is what I get from debugger: WSDL loaded Invoking SOAP operation UserGetAll Encoding SOAP request envelope Encoding SOAP request body 'A97A2DC1-AEDA-C594-45D2-1BA2B0F3B223' producer sending message '10681130-43E7-3DA7-34DD-1BA2B85545E3' 'direct_http_channel' channel sending message: (mx.messaging.messages::SOAPMessage)#0 body = "<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <SOAP-ENV:Body> <tns:UserGetAll xmlns:tns="http://tempuri.org/"/> </SOAP-ENV:Body> </SOAP-ENV:Envelope>" clientId = "DirectHTTPChannel0" contentType = "text/xml; charset=utf-8" destination = "DefaultHTTP" headers = (Object)#1 httpHeaders = (Object)#2 SOAPAction = ""http://tempuri.org/UserGetAll"" messageId = "10681130-43E7-3DA7-34DD-1BA2B85545E3" method = "POST" recordHeaders = false timestamp = 0 timeToLive = 0 url = "http://192.168.0.201:8123/Service.asmx" 'A97A2DC1-AEDA-C594-45D2-1BA2B0F3B223' producer acknowledge of '10681130-43E7-3DA7-34DD-1BA2B85545E3'. Decoding SOAP response Encoded SOAP response <?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><UserGetAllResponse xmlns="http://tempuri.org/"><UserGetAllResult><ResponseCode>Success</ResponseCode><Users><User><Id>1</Id><Name>test</Name><Key>testKey</Key><IsActive>true</IsActive><Name>Petar i Sofija</Name><Key>123789</Key><IsActive>true</IsActive></User></Users></UserGetAllResult></UserGetAllResponse></soap:Body></soap:Envelope> Decoding SOAP response envelope Decoding SOAP response body And finally I get this error "Error #1006: getAttributeByQName is not a function.". As you can see, I get correct response from web service, but dataHandler function is never got called. Can anyone please help me out? Thanks, Deveti Putnik


  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.


  • ASP.NET MVC & EF4 Entity Framework - Are there any performance concerns in using the entities vs retrieving only the fields I need?

    - by Ant
    Lets say we have 3 tables, Users, Products, Purchases. There is a view that needs to display the purchases made by a user. I could lookup the data required by doing: from p in DBSet<Purchases>.Include("User").Include("Product") select p; However, I am concern that this may have a performance impact because it will retrieve the full objects. Alternatively, I could select only the fields i need: from p in DBSet<Purchases>.Include("User").Include("Product") select new SimplePurchaseInfo() { UserName = p.User.name, Userid = p.User.Id, ProductName = p.Product.Name ... etc }; So my question is: Whats the best practice in doing this? == EDIT Thanks for all the replies. [QUESTION 1]: I want to know whether all views should work with flat ViewModels with very specific data for that view, or should the ViewModels contain the entity objects. Real example: User reviews Products var query = from dr in productRepository.FindAllReviews() where dr.User.UserId = 'userid' select dr; string sql = ((ObjectQuery)query).ToTraceString(); SELECT [Extent1].[ProductId] AS [ProductId], [Extent1].[Comment] AS [Comment], [Extent1].[CreatedTime] AS [CreatedTime], [Extent1].[Id] AS [Id], [Extent1].[Rating] AS [Rating], [Extent1].[UserId] AS [UserId], [Extent3].[CreatedTime] AS [CreatedTime1], [Extent3].[CreatorId] AS [CreatorId], [Extent3].[Description] AS [Description], [Extent3].[Id] AS [Id1], [Extent3].[Name] AS [Name], [Extent3].[Price] AS [Price], [Extent3].[Rating] AS [Rating1], [Extent3].[ShopId] AS [ShopId], [Extent3].[Thumbnail] AS [Thumbnail], [Extent3].[Creator_UserId] AS [Creator_UserId], [Extent4].[Comment] AS [Comment1], [Extent4].[DateCreated] AS [DateCreated], [Extent4].[DateLastActivity] AS [DateLastActivity], [Extent4].[DateLastLogin] AS [DateLastLogin], [Extent4].[DateLastPasswordChange] AS [DateLastPasswordChange], [Extent4].[Email] AS [Email], [Extent4].[Enabled] AS [Enabled], [Extent4].[PasswordHash] AS [PasswordHash], [Extent4].[PasswordSalt] AS [PasswordSalt], [Extent4].[ScreenName] AS [ScreenName], [Extent4].[Thumbnail] AS [Thumbnail1], [Extent4].[UserId] AS [UserId1], [Extent4].[UserName] AS [UserName] FROM [ProductReviews] AS [Extent1] INNER JOIN [Users] AS [Extent2] ON [Extent1].[UserId] = [Extent2].[UserId] LEFT OUTER JOIN [Products] AS [Extent3] ON [Extent1].[ProductId] = [Extent3].[Id] LEFT OUTER JOIN [Users] AS [Extent4] ON [Extent1].[UserId] = [Extent4].[UserId] WHERE N'615005822' = [Extent2].[UserId] or from d in productRepository.FindAllProducts() from dr in d.ProductReviews where dr.User.UserId == 'userid' orderby dr.CreatedTime select new ProductReviewInfo() { product = new SimpleProductInfo() { Id = d.Id, Name = d.Name, Thumbnail = d.Thumbnail, Rating = d.Rating }, Rating = dr.Rating, Comment = dr.Comment, UserId = dr.UserId, UserScreenName = dr.User.ScreenName, UserThumbnail = dr.User.Thumbnail, CreateTime = dr.CreatedTime }; SELECT [Extent1].[Id] AS [Id], [Extent1].[Name] AS [Name], [Extent1].[Thumbnail] AS [Thumbnail], [Extent1].[Rating] AS [Rating], [Extent2].[Rating] AS [Rating1], [Extent2].[Comment] AS [Comment], [Extent2].[UserId] AS [UserId], [Extent4].[ScreenName] AS [ScreenName], [Extent4].[Thumbnail] AS [Thumbnail1], [Extent2].[CreatedTime] AS [CreatedTime] FROM [Products] AS [Extent1] INNER JOIN [ProductReviews] AS [Extent2] ON [Extent1].[Id] = [Extent2].[ProductId] INNER JOIN [Users] AS [Extent3] ON [Extent2].[UserId] = [Extent3].[UserId] LEFT OUTER JOIN [Users] AS [Extent4] ON [Extent2].[UserId] = [Extent4].[UserId] WHERE N'userid' = [Extent3].[UserId] ORDER BY 
[Extent2].[CreatedTime] ASC [QUESTION 2]: Whats with the ugly outer joins?
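    A hedged illustration of the same whole-entity vs. projection trade-off in a different ORM (SQLAlchemy in Python, with made-up User/Product/Purchase models, not the poster's schema); the second query asks the database for only the listed columns instead of hydrating full objects:

        from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, joinedload, relationship, sessionmaker

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        class Product(Base):
            __tablename__ = "products"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        class Purchase(Base):
            __tablename__ = "purchases"
            id = Column(Integer, primary_key=True)
            user_id = Column(Integer, ForeignKey("users.id"))
            product_id = Column(Integer, ForeignKey("products.id"))
            user = relationship(User)
            product = relationship(Product)

        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        # Full entities, related rows eagerly loaded (like .Include("User").Include("Product")).
        purchases = (
            session.query(Purchase)
            .options(joinedload(Purchase.user), joinedload(Purchase.product))
            .all()
        )

        # Projection: only the columns the view needs are selected and transferred.
        rows = (
            session.query(Purchase)
            .join(Purchase.user)
            .join(Purchase.product)
            .with_entities(User.name, User.id, Product.name)
            .all()
        )

    The projection wins when the entities are wide or the result set is large; for a handful of rows the difference is usually negligible, which is why many teams keep flat view models only for list/report screens.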


  • Properly binding data to a ComboBox and handling its events

    - by Wodzu
    Hi guys. I have a table in SQL Server which looks like this:

        ID | Code | Name    | Surname
        1  | MS   | Mike    | Smith
        2  | JD   | John    | Doe
        3  | UP   | Unknown | Person

    and so on... Now I want to bind the data from this table to the ComboBox in such a way that the ComboBox displays the value from the Code column. I am doing the binding this way:

        SqlDataAdapter sqlAdapter = new SqlDataAdapter("SELECT * FROM dbo.Users ORDER BY Code", MainConnection);
        sqlAdapter.Fill(dsUsers, "Users");
        cbxUsers.DataSource = dsUsers.Tables["Users"];
        cmUsers = (CurrencyManager)cbxUsers.BindingContext[dsUsers.Tables["Users"]];
        cbxUsers.DisplayMember = "Code";

    And this code seems to work. I can scroll through the list of Codes. I can also start to type a code by hand and the ComboBox will autocomplete it for me. However, I wanted to put a label at the top of the ComboBox to display the Name and Surname of the currently selected user code. My line of thought was: "So, I need to find an event which will fire after the change of code in the ComboBox, and in that event I will get the current DataRow..." I was browsing through the events of the ComboBox and tried many of them, but without success. For example:

        private void cbxUsers_SelectionChangeCommitted(object sender, EventArgs e)
        {
            if (cmUsers != null)
            {
                DataRowView drvCurrentRowView = (DataRowView)cmUsers.Current;
                DataRow drCurrentRow = drvCurrentRowView.Row;
                lblNameSurname.Text = Convert.ToString(drCurrentRow["Name"]) + " " + Convert.ToString(drCurrentRow["Surname"]);
            }
        }

    This gives me strange results. Firstly, when I scroll via the mouse wheel it doesn't return the row I am expecting to obtain. For example, on JD it shows me "Mike Smith", on MS it shows me "John Doe" and on UP it shows me "Mike Smith" again! The other problem is that when I start to type in the ComboBox and press Enter, it doesn't trigger the event. However, everything works as expected when I bind data to lblNameSurname.Text in this way:

        lblNameSurname.DataBindings.Add("Text", dsUsers.Tables["Users"], "Name");

    The problem here is that I can bind only one column and I want to have two. I don't want to use two labels for it (one to display the name and the other to display the surname). So, what is the solution to my problem? Also, I have one question related to data selection in the ComboBox. Right now, when I type something in the ComboBox it allows me to type letters that do not exist in the list. For example, I start to type "J" and, instead of finishing with "D" so I would have "JD", I type "Jsomerandomtexthere". The ComboBox will allow that, but such an item does not exist in the list. In other words, I want the ComboBox to prevent the user from typing a code which is not in the list of codes. Thanks in advance for your time.


  • Custom dynamic error pages in Ruby on Rails not working

    - by PlanetMaster
    Hi, I'm trying to implement custom dynamic error pages following this post: http://www.perfectline.co.uk/blog/custom-dynamic-error-pages-in-ruby-on-rails I did exactly what the blog post says. I included config.action_controller.consider_all_requests_local = false in my environment.rb. But is not working. My browser shows: Routing Error No route matches "/555" with {:method=>:get} So, it looks like the rescues are not fired. I get the following in my log file: ActionController::RoutingError (No route matches "/555" with {:method=>:get}): Rendering rescues/layout (not_found) Is there some routing interfering with the code? I'm not sure what to look for. I'm running rails 2.3.5. Here is the routes.rb file: ActionController::Routing::Routes.draw do |map| # routing van property-url map.connect 'buy/:property_type_plural/:province/:city/:address/:house_number', :controller => 'properties' , :action => 'show', :id => 'whatever' map.myimmonatie 'myimmonatie' , :controller => 'myimmonatie/properties', :action => 'index' map.login "login", :controller => "user_sessions", :action => "create", :conditions => {:method => :post} map.login "login", :controller => "user_sessions", :action => "new" map.logout "logout", :controller => "user_sessions", :action => "destroy" map.buy "buy", :controller => 'buy' map.sell "sell", :controller => 'sell' map.home "home", :controller => 'home' map.disclaimer "disclaimer", :controller => 'disclaimer' map.sign_up "sign_up", :controller => 'users', :action => :new map.contact "contact", :controller => 'contact' map.resources :user_sessions map.resources :contact map.resources :password_resets map.resources :messages map.resources :users, :only => [:index,:new,:create,:activate,:edit,:profile,:password] map.resources :images map.resources :activation , :only => [:new,:resend] map.resources :email map.resources :properties, :except => [:index,:destroy] map.namespace :admin do |admin| admin.resources :users admin.resources :properties admin.resources :order_items, :as => :orders admin.resources :blog_posts, :as => :blog end map.connect 'myimmonatie/:action' , :controller => 'users', :id => 'current', :requirements => {:action => /(profile)|(password)|(email)/} map.namespace :myimmonatie do |myimmonatie| myimmonatie.resources :messages, :controller => 'messages' myimmonatie.resources :password, :as => "password", :controller => 'users', :action => 'password' myimmonatie.resources :properties , :controller => 'properties' myimmonatie.resources :orders , :only => [:index,:show,:create,:new] end map.root :controller => "home" map.connect ':controller/:action' map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' end ActionController::Routing::Translator.translate_from_file('config','i18n-routes.yml')


  • Cannot execute "LOAD DATA LOCAL INFILE" Mysql query in Rails after a connection reconnection

    - by Ngan
    On Rails 2.3.8 (but I think Rails 3 might have this issue as well, not sure): I get an error when trying to execute a LOAD DATA LOCAL INFILE query after reconnecting to a database. I have a process that parses a file that can potentially take a bit of time. During the parsing, MySQL closes the connection due to a timeout. This is fine; I do an ActiveRecord::Base.verify_active_connections! and I get the connection back (I do this in several places throughout my app). However, running a LOAD DATA LOCAL INFILE statement, I get this error: Mysql::Error: The used command is not allowed with this MySQL version. It's not a permission issue, I know that for sure. Check out my test in the console:

        > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users")
        [Sat Jan 08 00:09:29 2011] (9990) SQL (1.7ms) LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
        => nil
        > ActiveRecord::Base.connection.disconnect!
        => #<Mysql:0x104c6f890>
        > ActiveRecord::Base.verify_active_connections!
        [Sat Jan 08 00:09:58 2011] (9990) SQL (0.2ms) SET SQL_AUTO_IS_NULL=0
        => {...connection stuff...}
        > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users")
        [Sat Jan 08 00:10:00 2011] (9990) SQL (0.0ms) Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
        ActiveRecord::StatementInvalid: Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
            from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log'
            from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
            from (irb):6

    I am able to do other queries like SELECT and whatnot, and I will get the correct result. It's just this one that gives me the error. I even tested this with a fresh Rails app. You'll notice that I am able to do the exact same query before the disconnect. Thanks for the help!
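    For comparison, this error message is commonly the symptom of a connection that was opened without the client-side LOCAL capability; a minimal Python sketch with the MySQLdb (mysqlclient) driver, using placeholder credentials, shows the flag being requested explicitly at connect time, which is the detail a bare reconnect can drop:

        import MySQLdb  # mysqlclient / MySQL-python driver

        # Hypothetical credentials; the key detail is local_infile=1, which asks the
        # server for the LOCAL capability when the connection is (re)established.
        conn = MySQLdb.connect(
            host="localhost",
            user="app",
            passwd="secret",
            db="app_development",
            local_infile=1,
        )
        cur = conn.cursor()
        cur.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users")
        conn.commit()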


  • Is it possible to create a service like Feed My Inbox on my own server?

    - by Mark Bowen
    I was just wondering if it's at all possible to create a service like Feed My Inbox on my own server using PHP? Basically I have a site which has RSS feeds which are dynamic in nature and can search across thousands of posts based on many different criteria. I have the RSS feed working fine and bringing back data dynamically for whatever criteria I want, so that bit's fine. I am using the ExpressionEngine CMS to handle the site and there will be thousands of users on the site (currently there are around 2,0000), but that number is growing exponentially every single day. What I want to be able to do is allow the users to choose from certain criteria which will then build a dynamic RSS URL which will then be stored in a database table (one row for each user). This bit I will be able to do myself, but then I want to be able to send out new RSS feed items via e-mail to each user. This is the part I'm a little stuck on. I'm guessing I would somehow need to run a cron job to hit a page which would check each user's RSS feed and then, if there are new items, send them to the user via e-mail. That's where I am totally stuck, though, and I'm just wondering what the best way to go about it would be. That, or any software in PHP that already does this sort of thing, would be great. I tried out phpList but it has severe problems working with RSS; I only ever got it to work once and never again, and I've read that lots of people have had this same problem, so unfortunately it's not just me :-( I know there are services such as Feed My Inbox which I could easily set up so that users click a link and their RSS feed URL is added to that service, but I want to keep users from seeing the dynamic nature of the feed or they will easily be able to modify it to get at other items in the feed. I need this so that I can charge for access to the feeds, but if people can see the URL of the feed then I will completely come unstuck, as they will be able to get at whatever they want very easily. Therefore I'd like to be able to send the items out to them. Would really love to hear if anyone knows if this kind of thing is possible at all and what would be involved.
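    The question asks about PHP, but the cron-driven polling loop described above is language-agnostic; here is a minimal Python sketch (the subscriber list, seen-items store, and SMTP setup are all hypothetical placeholders): fetch each user's private feed, diff against the entry IDs already mailed, and send anything new.

        import json
        import smtplib
        import feedparser
        from email.mime.text import MIMEText

        # Hypothetical data: one (email, private_feed_url) pair per subscriber, plus a
        # JSON file remembering which entry IDs have already been mailed out.
        SUBSCRIBERS = [("alice@example.com", "https://example.com/feeds/private/abc123")]
        SEEN_FILE = "seen_entries.json"

        def load_seen():
            try:
                with open(SEEN_FILE) as f:
                    return json.load(f)
            except FileNotFoundError:
                return {}

        def main():
            seen = load_seen()
            with smtplib.SMTP("localhost") as smtp:
                for email, feed_url in SUBSCRIBERS:
                    parsed = feedparser.parse(feed_url)
                    already = set(seen.get(email, []))
                    new_entries = [e for e in parsed.entries if e.get("id", e.link) not in already]
                    for entry in new_entries:
                        msg = MIMEText(f"{entry.title}\n\n{entry.get('summary', '')}")
                        msg["Subject"] = f"New item: {entry.title}"
                        msg["From"] = "feeds@example.com"
                        msg["To"] = email
                        smtp.send_message(msg)
                        already.add(entry.get("id", entry.link))
                    seen[email] = sorted(already)
            with open(SEEN_FILE, "w") as f:
                json.dump(seen, f)

        if __name__ == "__main__":
            main()  # run this from cron, e.g. every 15 minutes

    In production the seen-items store would live in the database next to each user's feed URL rather than in a flat file, and the sends would be batched or queued once the user count grows.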


  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:
    * All JavaScript files are compressed and loaded at the bottom of the page.
    * All JavaScript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time.
    * Pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which will not be required.
    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?


  • Approaches for Content-based Item Recommendations

    - by PartlyCloudy
    Hello, I'm currently developing an application where I want to group similar items. Items (like videos) can be created by users, and their attributes can also be altered or extended later (like new tags). Instead of relying on users' preferences as most collaborative filtering mechanisms do, I want to compare item similarity based on the items' attributes (like similar length, similar colors, a similar set of tags, etc.). The computation is necessary for two main purposes: suggesting x similar items for a given item, and clustering into groups of similar items. My application so far follows an asynchronous design, and I want to decouple this clustering component as far as possible. The creation of new items or the addition of new attributes for an existing item will be advertised by publishing events which the component can then consume. Computations can be provided best-effort and "snapshotted", which means that I'm okay with the best result possible at a given point in time, although result quality will eventually increase. So I am now searching for appropriate algorithms to compute both similar items and clusters. An important constraint is scalability. Initially the application has to handle a few thousand items, but later millions of items might be possible as well. Of course, computations will then be executed on additional nodes, but the algorithm itself should scale. It would also be nice if the algorithm supported some kind of incremental mode on partial changes of the data. My initial thought of comparing each item with each other and storing the numerical similarity sounds a little bit crude. Also, it requires n*(n-1)/2 entries for storing all similarities, and any change or new item will eventually cause n similarity computations. Thanks in advance!
    UPDATE tl;dr To clarify what I want, here is my targeted scenario:
    * Users generate entries (think of documents)
    * Users edit entry metadata (think of tags)
    And here is what my system should provide:
    * A list of similar entries to a given item, as a recommendation
    * Clusters of similar entries
    Both calculations should be based on:
    * The metadata/attributes of entries (i.e. usage of similar tags)
    * Thus, the distance of two entries using appropriate metrics
    * NOT user votings, preferences or actions (unlike collaborative filtering)
    Although users may create entries and change attributes, the computation should only take into account the items and their attributes, and not the users associated with them (just like a system where only items and no users exist).
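    As one concrete illustration of the attribute-based similarity described above, here is a minimal Python sketch with made-up item attributes: pairs are scored by Jaccard overlap of their tag sets plus a normalized difference of a numeric attribute (length), and the top-k neighbours of an item are returned. The weights and attributes are arbitrary placeholders.

        from dataclasses import dataclass, field

        @dataclass
        class Item:
            item_id: str
            tags: set[str] = field(default_factory=set)
            length: float = 0.0  # e.g. video length in seconds

        def similarity(a: Item, b: Item, max_length: float) -> float:
            # Jaccard overlap of tag sets: 1.0 when identical, 0.0 when disjoint.
            union = a.tags | b.tags
            tag_sim = len(a.tags & b.tags) / len(union) if union else 0.0
            # Numeric attributes compared by normalized absolute difference.
            length_sim = 1.0 - abs(a.length - b.length) / max_length if max_length else 0.0
            # Weighted combination; the weights would be tuned for the real data.
            return 0.7 * tag_sim + 0.3 * length_sim

        def top_k_similar(target: Item, items: list[Item], k: int = 5) -> list[tuple[str, float]]:
            max_length = max((i.length for i in items), default=0.0) or 1.0
            scored = [(i.item_id, similarity(target, i, max_length))
                      for i in items if i.item_id != target.item_id]
            return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

        # Example usage with made-up data:
        catalog = [
            Item("v1", {"cats", "funny"}, 120),
            Item("v2", {"cats", "cute"}, 130),
            Item("v3", {"news", "politics"}, 600),
        ]
        print(top_k_similar(catalog[0], catalog, k=2))

    At larger scale the same idea is usually combined with an inverted index over tags (or MinHash/LSH) so that only candidate pairs sharing at least one attribute are scored, rather than all n*(n-1)/2 pairs; that also gives a natural incremental mode, since an item edit only re-scores the candidates it shares attributes with.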


  • GROUP BY and SUM distinct date across 2 tables

    - by kenitech
    I'm not sure if this is possible in one MySQL query, so I might just combine the results via PHP. I have 2 tables: 'users' and 'billing'. I'm trying to group summed activity for every date that is available in these two tables. 'users' is not historical data, but 'billing' contains a record for each transaction. In this example I am showing a user's status, which I'd like to sum by created date, and deposit amounts that I would also like to sum by created date. I realize there is a bit of a disconnect between the data, but I'd like to sum all of it together and display it as seen below. This will show me an overview of all of the users by when they were created and what the current statuses are, next to total transactions. I've tried UNION as well as LEFT JOIN but I can't seem to get either to work. The UNION example is pretty close but doesn't combine the dates into one row.

        ( SELECT created, SUM(status) as totalActive, NULL as totalDeposit FROM users GROUP BY created )
        UNION
        ( SELECT created, NULL as totalActive, SUM(transactionAmount) as totalDeposit FROM billing GROUP BY created )

    I've also tried using a date lookup table and joining on the dates, but the SUM values are being added multiple times. Note: I don't care about the userIds at all but have them in here for the example.

    users table (where a status of '1' denotes "active"; one record for each user):

        created    | userId | status
        2010-03-01 | 10     | 0
        2010-03-01 | 11     | 1
        2010-03-01 | 12     | 1
        2010-03-10 | 13     | 0
        2010-03-12 | 14     | 1
        2010-03-12 | 15     | 1
        2010-03-13 | 16     | 0
        2010-03-15 | 17     | 1

    billing table (a record is created for every billing "transaction"):

        created    | userId | transactionAmount
        2010-03-01 | 10     | 50
        2010-03-01 | 18     | 50
        2010-03-01 | 19     | 100
        2010-03-10 | 89     | 55
        2010-03-15 | 16     | 50
        2010-03-15 | 12     | 90
        2010-03-22 | 99     | 150

    desired result:

        created    | sumStatusActive | sumStatusInactive | sumTransactions
        2010-03-01 | 2               | 1                 | 200
        2010-03-10 | 0               | 1                 | 55
        2010-03-12 | 2               | 0                 | 0
        2010-03-13 | 0               | 0                 | 0
        2010-03-15 | 1               | 0                 | 140
        2010-03-22 | 0               | 0                 | 150

    Table dump:

        CREATE TABLE IF NOT EXISTS `users` (
          `created` date NOT NULL,
          `userId` int(11) NOT NULL,
          `status` smallint(6) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO `users` (`created`, `userId`, `status`) VALUES
        ('2010-03-01', 10, 0), ('2010-03-01', 11, 1), ('2010-03-01', 12, 1), ('2010-03-10', 13, 0),
        ('2010-03-12', 14, 1), ('2010-03-12', 15, 1), ('2010-03-13', 16, 0), ('2010-03-15', 17, 1);

        CREATE TABLE IF NOT EXISTS `billing` (
          `created` date NOT NULL,
          `userId` int(11) NOT NULL,
          `transactionAmount` int(11) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO `billing` (`created`, `userId`, `transactionAmount`) VALUES
        ('2010-03-01', 10, 50), ('2010-03-01', 18, 50), ('2010-03-01', 19, 100), ('2010-03-10', 89, 55),
        ('2010-03-15', 16, 50), ('2010-03-15', 12, 90), ('2010-03-22', 99, 150);
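    One hedged way to get the desired shape is to aggregate each table separately and join both aggregates to the union of their dates; the sketch below runs that query through Python's mysql.connector with placeholder connection details. (With the sample data it returns 1 in sumStatusInactive for 2010-03-13, since user 16 was created that day with status 0.)

        import mysql.connector  # assumed driver; credentials below are placeholders

        QUERY = """
        SELECT d.created,
               COALESCE(u.active, 0)   AS sumStatusActive,
               COALESCE(u.inactive, 0) AS sumStatusInactive,
               COALESCE(b.total, 0)    AS sumTransactions
        FROM (SELECT created FROM users
              UNION
              SELECT created FROM billing) AS d
        LEFT JOIN (SELECT created,
                          SUM(status = 1) AS active,
                          SUM(status = 0) AS inactive
                   FROM users
                   GROUP BY created) AS u ON u.created = d.created
        LEFT JOIN (SELECT created,
                          SUM(transactionAmount) AS total
                   FROM billing
                   GROUP BY created) AS b ON b.created = d.created
        ORDER BY d.created
        """

        conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="app")
        cur = conn.cursor()
        cur.execute(QUERY)
        for created, active, inactive, transactions in cur.fetchall():
            print(created, active, inactive, transactions)
        conn.close()

    Aggregating in subqueries before joining is what avoids the double-counted SUMs seen with the date lookup table, because each date appears exactly once on each side of the join.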

    Read the article

  • E-Business Suite Technology Sessions at OpenWorld 2012

    - by Max Arderius
    Oracle OpenWorld 2012 is almost here! We're looking forward to updating you on our products, strategy, and roadmaps. This year, the E-Business Suite Applications Technology Group (ATG) will participate in 25 speaker sessions, two Meet the Experts round-table discussions, five demoground booths and seven Special Interest Group meetings as guest speakers. We hope to see you at our sessions.  Please join us to hear the latest news and connect with senior ATG development staff. Here's a downloadable listing of all Applications Technology Group-related sessions with times and locations: FOCUS ON Oracle E-Business Suite - Applications Tools and Technology (PDF) General Sessions GEN8474 - Oracle E-Business Suite - Strategy, Update, and RoadmapCliff Godwin, SVP, Oracle Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West 2002/2004 In this session, hear Oracle E-Business Suite General Manager Cliff Godwin deliver an update on the Oracle E-Business Suite product line. This session covers the value delivered by the current release of Oracle E-Business Suite, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You’ll come away with an understanding of the value Oracle E-Business Suite applications deliver now and will deliver in the future. GEN9173 - Optimize and Extend Oracle Applications - The Path to Oracle Fusion ApplicationsNadia Bendjedou, Oracle; Corre Curtice, Bhavish Madurai (CSC) Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 3002/3004 One of the main objectives of this session is to help organizations build their IT roadmap for the next five years and be aligned with the Oracle Applications strategy in general and the Oracle Fusion Applications strategy in particular. Come hear about some of the common sense, practical steps you can take to optimize the performance of your Oracle Applications today and prepare your path to Oracle Fusion Applications for when your organization is ready to embrace them. Each step you take in adopting Oracle Fusion technology gets you partway to Oracle Fusion Applications. Conference Sessions CON9024 - Oracle E-Business Suite Technology: Latest Features and Roadmap Lisa Parekh, Oracle Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West 2016 This Oracle development session provides a comprehensive overview of Oracle’s product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for the Oracle E-Business Suite technology stack. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications. CON9021 - Oracle E-Business Suite Future Directions: Deployment and System AdministrationMax Arderius, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2016  What’s coming in the next major version of Oracle E-Business Suite 12? This Oracle Development session covers the latest technology stack, including the use of Oracle WebLogic Server (Oracle Fusion Middleware 11g) and Oracle Database 11g Release 2 (11.2). Topics include an architectural overview of the latest updates, installation and upgrade options, new configuration options, and new tools for hot cloning and automated “lights-out” cloning. 
Come learn how online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature) will reduce your database patching downtimes to however long it takes to bounce your database server. CON9017 - Desktop Integration in Oracle E-Business Suite 12.1 Padmaprabodh Ambale, Gustavo Jimenez, Oracle Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West 2016 This presentation covers the latest functional enhancements in Oracle Web Applications Desktop Integrator and Oracle Report Manager, enhanced Microsoft Office support, and greater support for building custom desktop integration solutions. The session also presents tips and tricks for upgrading from Oracle Applications Desktop Integrator to Oracle Web Applications Desktop Integrator and Oracle Report Manager. CON9023 - Oracle E-Business Suite Technology Certification Primer and Roadmap Steven Chan, Oracle Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 2016  Is your Oracle E-Business Suite technology stack up to date? Are you taking advantage of all the latest options and capabilities? This Oracle development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including elements such as database releases and options, Java, Oracle Forms, Oracle Containers for J2EE, desktop operating systems, browsers, JRE releases, development and Web authoring tools, user authentication and management, business intelligence, Oracle Application Management Packs, security options, clouds, Oracle VM, and virtualization. The session also covers the most commonly asked questions about tech stack component support dates and upgrade implications. CON9028 - Minimizing Oracle E-Business Suite Maintenance DowntimesSantiago Bastidas, Elke Phelps, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2016 This Oracle development session features a survey of the best techniques sysadmins can use to minimize patching downtimes. It starts with an architectural-level review of Oracle E-Business Suite fundamentals and then moves to a practical view of the various tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in noninteractive mode, staged APPL_TOPs, shared file systems, deferring systemwide database tasks, avoiding resource bottlenecks, and more. An added bonus: hear about the upcoming Oracle E-Business Suite 12 online patching capabilities based on the groundbreaking Oracle Database 11g Release 2 Edition-Based Redefinition feature. CON9116 - Extending the Use of Oracle E-Business Suite with the Oracle Endeca PlatformOsama Elkady, Muhannad Obeidat, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2018 The Oracle Endeca platform includes a leading unstructured data correlation and analytics engine, together with a best-in class catalog search and guided navigation solution, to improve the productivity of all types of users in your enterprise. This development session focuses on the details behind the Oracle Endeca platform’s integration into Oracle E-Business Suite. It demonstrates how easily you can extend the use of the Oracle Endeca platform into other areas of Oracle E-Business Suite and how you can bring in your own data and build new Oracle Endeca applications for Oracle E-Business Suite. 
CON9005 - Oracle E-Business Suite Integration Best PracticesVeshaal Singh, Oracle, Jeffrey Hand, Zebra Technologies Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2018 Oracle is investing across applications and technologies to make the application integration experience easier for customers. Today Oracle has certified Oracle E-Business Suite on Oracle Fusion Middleware 11g and provides a comprehensive set of integration technologies. Learn about Oracle’s integration offering across data- and process-centric integrations. These technologies can be used to address various application integration challenges and styles. In this session, you will get an understanding of how, when, and where you can leverage Oracle’s integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio.  CON9026 - Latest Oracle E-Business Suite 12.1 User Interface and Usability EnhancementsPadmaprabodh Ambale, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2016 This Oracle development session details the latest UI enhancements to Oracle Application Framework in Oracle E-Business Suite 12.1. Developers will get a detailed look at new features to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. Topics include new rich UI capabilities such as new home page features, Navigator and Favorites pull-down menus, REST interface, embedded widgets for analytics content, Oracle Application Development Framework (Oracle ADF) task flows, third-party widgets, a look-ahead list of values, inline attachments, pop-ups, personalization and extensibility enhancements, business layer extensions, Oracle ADF integration, and mobile devices. CON8805 - Planning Your Oracle E-Business Suite Upgrade from 11i to Release 12.1 and BeyondAnne Carlson, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 3002/3004 Attend this session to hear the latest Oracle E-Business Suite 12.1 upgrade planning tips from Oracle’s support, consulting, development, and IT organizations. You’ll get specific cross-product advice on how to understand the factors that affect your project’s duration, decide on your project’s scope, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project. CON9053 - Advanced Management of Oracle E-Business Suite with Oracle Enterprise ManagerAngelo Rosado, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 2016 The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. Oracle Enterprise Manager is the only product on the market that is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including end user, midtier, configuration, host, and database management—to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility and diagnostic capability as well as administrator productivity. The purpose of this session is to highlight the key features and benefits of Oracle Enterprise Manager and Oracle Application Management Suite for Oracle E-Business Suite. 
CON8809 - Oracle E-Business Suite 12.1 Upgrade Best Practices: Technical InsightIsam Alyousfi, Udayan Parvate, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3011 This session is ideal for organizations thinking about upgrading to Oracle E-Business Suite 12.1. It covers the fundamentals of upgrading to Release 12.1, including the technology stack components and supported upgrade paths. Hear from Oracle Development about the set of best practices for patching in general and executing the Release 12.1 technical upgrade, with special considerations for minimizing your downtime. Also get to know about relatively recent upgrade resources. CON9032 - Upgrading Your Customizations of Oracle E-Business Suite 12.1Sara Woodhull, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2016 Have you personalized Oracle Forms or Oracle Application Framework screens in Oracle E-Business Suite? Have you used mod_plsql in Release 11i? Have you extended or customized your Release 11i environment with other tools? The technical options for upgrading these customizations as part of your Oracle E-Business Suite Release 12.1 upgrade can be bewildering. Come to this Oracle development session to learn about selecting the best upgrade approach for your existing customizations. The session will help you understand customization scenarios and use cases, tools, and technologies to ensure that your Oracle E-Business Suite Release 12.1 environment fits your users’ needs closely and that any future customizations will be easy to upgrade. CON9259 - Oracle E-Business Suite Internationalization and Multilingual FeaturesMaher Al-Nubani, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2018 Oracle E-Business Suite supports more countries, languages, and regions than ever. Come to this Oracle development session to get an overview of internationalization features and capabilities and see new Release 12 features such as calendar support for Hijra and Thai, new group separators, lightweight multilingual support (MLS) setup, new character sets such as AL32UTF, newly supported languages, Mac certifications, Oracle iSetup support for moving MLS setups, new file export options for Unicode, new MLS number spelling options, and more. CON7188 - Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA SuiteSrikant Subramaniam, Joe Huang, Veshaal Singh, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3001 Follow your mobile customers, employees, and partners with Oracle Fusion Middleware. See how native iPhone and iPad applications can easily be built for Oracle E-Business Suite with the new Oracle ADF Mobile and Oracle SOA Suite. Using Oracle ADF Mobile, developers can quickly develop native applications for Apple iOS and other mobile platforms. The Oracle SOA Suite/Oracle ADF Mobile combination can execute business transactions on Oracle E-Business Suite. This session includes a demo in which a mobile user approves a business transaction in Oracle E-Business Suite and a demo of the tools used to build a native on-device solution. 
These concepts for mobile applications also apply to other Oracle applications.CON9029 - Oracle E-Business Suite Directions: Slashing Downtimes with Online PatchingKevin Hudson, Oracle Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West 2016 Oracle E-Business Suite will soon include online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature), which will reduce your database patching downtimes to however long it takes to bounce your database server. This Oracle development session details how online patching works, with special attention to what’s happening at a database object level when database patches are applied to an Oracle E-Business Suite environment that’s still running. Come learn about the operational and system management implications for minimizing maintenance downtimes when applying database patches with this new technology and the related impact on customizations you might have built on top of Oracle E-Business Suite. CON8806 - Upgrading to Oracle E-Business Suite 12.1: Technical and Functional PanelAndrew Katz, Komori America Corporation; Sandra Vucinic, VLAD Group, Inc. ;Srini Chavali, Cummins Inc.; Amrita Mehrok, Nadia Bendjedou, Anne Carlson Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2018 In this panel discussion, Oracle experts, customers, and partners share their experiences in upgrading to the latest release of Oracle E-Business Suite, Release 12.1. The panelists cover aspects of a typical Release 12 upgrade, technical (upgrading the technical infrastructure) as well as functional (upgrading to the new financial infrastructure). Hear directly from the experts who either develop the product or support, implement, or upgrade it, and find out how to apply their lessons learned to your organization. CON9027 - Personalize and Extend Oracle E-Business Suite Applications with Rich MashupsGustavo Jimenez, Padmaprabodh Ambale, Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2016 This session covers the use of several Oracle Fusion Middleware technologies to personalize and extend your existing Oracle E-Business Suite applications. The Oracle Fusion Middleware technologies covered include Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, Oracle Endeca applications, and Oracle Business Intelligence Enterprise Edition with Oracle E-Business Suite Oracle Application Framework applications. CON9036 - Advanced Oracle E-Business Suite Architectures: Maximum Availability, Security, and MoreElke Phelps, Oracle Wednesday, Oct 3, 3:30 PM - 4:30 PM - Moscone West 2016 This session includes architecture diagrams and configuration instructions for building a maximum availability architecture (MAA) that will help you design a disaster recovery solution that fits the needs of your business. Database and application high-availability features it describes include Oracle Data Guard, Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, load-balancing Web and forms services, parallel concurrent processing, and the use of Oracle Exalogic and Oracle Exadata to provide a highly available environment. The session also covers the latest updates to systems management tools, AutoConfig, cloud computing, virtualization, and Oracle WebLogic Server and provides sneak previews of upcoming functionality. 
CON9047 - Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle ExalogicIsam Alyousfi, Nishit Rao, Oracle Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West 2016 Oracle Exadata and Oracle Exalogic are designed from the ground up with optimizations in software and hardware to deliver superfast performance for mission-critical applications such as Oracle E-Business Suite. Oracle E-Business Suite applications run three to eight times as fast on the Oracle Exadata/Oracle Exalogic platform in standard benchmark tests. Besides performance, customers benefit from simplified support, enhanced manageability, and the ability to consolidate multiple Oracle E-Business Suite instances. Attend this session to understand best practices for Oracle E-Business Suite deployment on Oracle Exalogic and Oracle Exadata through customer case studies. Learn how adopting the Exa* platform increases efficiency, simplifies scaling, and boosts performance for peak loads. CON8716 - Web Services and SOA Integration Options for Oracle E-Business SuiteRekha Ayothi, Veshaal Singh, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2016 This Oracle development session provides a deep dive into a subset of the Web services and SOA-related integration options available to Oracle E-Business Suite systems integrators. It offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other Web services options for integrating Oracle E-Business Suite with other applications. Systems integrators and developers will get an overview of the latest integration capabilities and technologies available out of the box with Oracle E-Business Suite and possibly a sneak preview of upcoming functionality and features. CON9030 - Recommendations for Oracle E-Business Suite Performance TuningIsam Alyousfi, Samer Barakat, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2018 Need to squeeze more performance out of your existing servers? This packed Oracle development session summarizes practical tips and lessons learned from performance-tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Apps sysadmins will learn concrete tips and techniques for identifying and resolving performance bottlenecks on all layers, with special attention to application- and database-tier servers. Learn about tuning Oracle Forms, Oracle Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues at the Java and JVM layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules. CON3429 - Using Oracle ADF with Oracle E-Business Suite: The Full Integration ViewSiva Puthurkattil, Lake County; Juan Camilo Ruiz, Sara Woodhull, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 3003 Oracle E-Business Suite delivers functionality for handling the core business of your organization. However, user requirements and new technologies are driving an emerging need to implement new types of user interfaces for these applications. This session provides an overview of how to use Oracle Application Development Framework (Oracle ADF) to deliver cutting-edge Web 2.0 and mobile rich user interfaces that front existing Oracle E-Business Suite processes, and it also explores all the existing types of integration between the two worlds. 
CON9020 - Integrating Oracle E-Business Suite with Oracle Identity Management SolutionsSunil Ghosh, Elke Phelps, Oracle Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West 2016 Need to integrate Oracle E-Business Suite with Microsoft Windows Kerberos, Active Directory, CA Netegrity SiteMinder, or other third-party authentication systems? Want to understand your options when Oracle Premier Support for Oracle Single Sign-On ends in December 2011? This Oracle Development session covers the latest certified integrations with Oracle Access Manager 11g and Oracle Internet Directory 11g, which can be used individually or as bridges for integrating with third-party authentication solutions. The session presents an architectural overview of how Oracle Access Manager, its WebGate and AccessGate components, and Oracle Internet Directory work together, with implications for Oracle Discoverer, Oracle Portal, and other Oracle Fusion identity management products. CON9019 - Troubleshooting, Diagnosing, and Optimizing Oracle E-Business Suite TechnologyGustavo Jimenez, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2016 This session covers how you can proactively diagnose Oracle E-Business Suite applications, including extensions built with Oracle Fusion Middleware technologies such as Oracle Application Development Framework (Oracle ADF) and Oracle WebCenter to catch potential issues in the middle tier before they become more serious. Topics include debugging, logging infrastructure, warning signs, performance tuning, information required when logging service requests, general JVM optimization, and an overall picture of all the moving parts that make it possible for Oracle E-Business Suite to isolate and fix problems. Also learn how Oracle Diagnostics Framework will help prevent downtime caused by failures. CON9031 - The Top 10 Things You Can Do to Secure Your Oracle E-Business Suite InstanceEric Bing, Erik Graversen, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2018 Learn the top 10 things you can do to secure your applications and your sensitive data. This Oracle development session for system administrators and security professionals explores some of the most important and overlooked things you can do to secure your Oracle E-Business Suite instance. It also covers data masking and other mechanisms for protecting sensitive data. Special Interest Groups (SIG) Some of our most senior staff have been invited to participate on the following SIG meetings as guest speakers: SIG10525 - OAUG - Archive & Purge SIGBrian Bent - Pre-Sales Engineer, TierData, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3011 The Archive and Purge SIG is an organization in which users can share their experiences and solicit functional and technical advice on archiving and purging data in Oracle E-Business Suite. This session provides an opportunity for users to network and share best practices, tips, and tricks. Guest: Oracle E-Business Suite Database Performance, Archive & Purging - Q&A SessionIsam Alyousfi, Senior Director, Applications Performance, Oracle SIG10547 - OAUG - Oracle E-Business (EBS) Applications Technology SIGSrini Chavali - IT Director, Cummins Inc Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3018 The general purpose of the EBS Applications Technology SIG is to inform and educate its members about current and future components of the tech stack as they relate to Oracle E-Business Suite. Attend this meeting for networking and education and to share best practices. 
Guest: Oracle E-Business Suite Technology Certification Roadmap - Presentation and Q&ASteven Chan, Sr. Director, Applications Technology Group, Oracle SIG10559 - OAUG - User Management SIGSusan Behn - VP of Oracle Delivery, Infosemantics, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3024 The E-Business Suite User Management SIG focuses on the components of user management that enable Oracle E-Business Suite users to define administrative functions and manage users’ access to functions and data based on roles within an organization—rather than the user’s individual identity—which is referred to as role-based access control (RBAC). This meeting includes an introduction to Oracle User Management that covers the Oracle User Management building blocks and presents an example of creating a security policy.Guest: Security and User Management - Q&A SessionEric Bing, Sr. Director, EBS Security, OracleSara Woodhull, Principal Product Manager, Applications Technology Group, Oracle SIG10515 - OAUG – Upgrade SIGBarbara Matthews - Consultant, On Call DBASandra Vucinic, VLAD Group, Inc. Sunday, Sep 30, 12:00 PM - 2:00 PM - Moscone West 3009 This Upgrade SIG session starts with a business meeting and then features a Q&A panel discussion on Oracle E-Business Suite upgrade topics. The session• Reviews Upgrade SIG goals and objectives• Provides answers, during the Q&A session, to questions related to Oracle E-Business Suite upgrades• Shares “real world” experiences, tips, and techniques for Oracle E-Business Suite upgrades to Release 12.1. Guest: Oracle E-Business Suite Upgrade - Q&A SessionAnne Carlson - Sr. Director, Oracle E-Business Suite Product Strategy, OracleUdayan Parvate - Director, EBS Release Engineering, OracleSuzana Ferrari, Sr. Principal Consultant, OracleIsam Alyousfi, Sr. Director, Applications Performance, Oracle SIG10552 - OAUG - Oracle E-Business Suite SIGDonna Rosentrater - Manager, Global Sourcing & Procurement Systems, TJX Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3020 The E-Business Suite SIG, affiliated with OAUG, supports Oracle E-Business Suite users through networking, education, and sharing of best practices. This SIG meeting will feature a general discussion of Oracle E-Business Suite product strategies in Release 12 and migration to Oracle Fusion Applications. Guest: Oracle E-Business Suite - Q&A SessionJeanne Lowell, Vice President, EBS Product Strategy, OracleNadia Bendjedou, Sr. Director, Product Strategy, Oracle SIG10556 - OAUG - SysAdmin SIGRandy Giefer - Sr Systems and Security Architect, Solution Beacon, LLC Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3022 The SysAdmin SIG provides a forum in which OAUG members and participants can share updates, tips, and successful practices relating to system administration in an Oracle applications environment. The SysAdmin SIG strives to enable system administrators to become more effective and efficient in their jobs by providing them with access to people and information that can increase their system administration knowledge and experience. Attend this meeting to network, share best practices, and benefit from educational content. Guest: Oracle E-Business Suite 12.2 Online Patching- Presentation and Q&AKevin Hudson, Sr. 
Director, Applications Technology Group, Oracle SIG10553 - OAUG - Database SIGMichael Brown - Senior DBA, COLIBRI LTD LC Sunday, Sep 30, 2:00 PM - 3:15 PM - Moscone West 3020 The OAUG Database SIG provides an opportunity for applications database administrators to learn from and share their experiences with supporting the various Oracle applications environments. This session will include a brief business meeting followed by a short presentation. It will end with an open discussion among the attendees about items of interest to those present. Guest: Oracle E-Business Suite Database Performance - Presentation and Q&AIsam Alyousfi, Sr. Director, Applications Performance, Oracle Meet the Experts We're planning two round-table discussions where you can review your questions with senior E-Business Suite ATG staff: MTE9648 - Meet the Experts for Oracle E-Business Suite: Planning Your Upgrade Jeanne Lowell - VP, EBS Product Strategy, Oracle John Abraham - Sr. Principal Product Manager, Oracle Nadia Bendjedou - Sr. Director - Product Strategy, Oracle Anne Carlson - Sr. Director, Applications Technology Group, Oracle Udayan Parvate - Director, EBS Release Engineering, Oracle Isam Alyousfi, Sr. Director, Applications Performance, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite upgrade best practices. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. MTE9649 - Meet the Oracle E-Business Suite Tools and Technology Experts Lisa Parekh - Vice President, Technology Integration, Oracle Steven Chan - Sr. Director, Oracle Elke Phelps - Sr. Principal Product Manager, Applications Technology Group, Oracle Max Arderius - Manager, Applications Technology Group, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite technology. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. Demos We have five booths in the exhibition demogrounds this year, where you can try ATG technologies firsthand and get your questions answered. Please stop by and meet our staff at the following locations: Advanced Architecture and Technology Stack for Oracle E-Business Suite (W-067) New User Productivity Capabilities in Oracle E-Business Suite (W-065) End-to-End Management of Oracle E-Business Suite (W-063) Oracle E-Business Suite 12.1 Technical Upgrade Best Practices (W-066) SOA-Based Integration for Oracle E-Business Suite (W-064)

    Read the article

  • Issues with signal handling [closed]

    - by user34790
    I am trying to actually study the signal handling behavior in multiprocess system. I have a system where there are three signal generating processes generating signals of type SIGUSR1 and SIGUSR1. I have two handler processes that handle a particular type of signal. I have another monitoring process that also receives the signals and then does its work. I have a certain issue. Whenever my signal handling processes generate a signal of a particular type, it is sent to the process group so it is received by the signal handling processes as well as the monitoring processes. Whenever the signal handlers of monitoring and signal handling processes are called, I have printed to indicate the signal handling. I was expecting a uniform series of calls for the signal handlers of the monitoring and handling processes. However, looking at the output I could see like at the beginning the monitoring and signal handling processes's signal handlers are called uniformly. However, after I could see like signal handler processes handlers being called in a burst followed by the signal handler of monitoring process being called in a burst. Here is my code and output #include <iostream> #include <sys/types.h> #include <sys/wait.h> #include <sys/time.h> #include <signal.h> #include <cstdio> #include <stdlib.h> #include <sys/ipc.h> #include <sys/shm.h> #define NUM_SENDER_PROCESSES 3 #define NUM_HANDLER_PROCESSES 4 #define NUM_SIGNAL_REPORT 10 #define MAX_SIGNAL_COUNT 100000 using namespace std; volatile int *usrsig1_handler_count; volatile int *usrsig2_handler_count; volatile int *usrsig1_sender_count; volatile int *usrsig2_sender_count; volatile int *lock_1; volatile int *lock_2; volatile int *lock_3; volatile int *lock_4; volatile int *lock_5; volatile int *lock_6; //Used only by the monitoring process volatile int monitor_count; volatile int usrsig1_monitor_count; volatile int usrsig2_monitor_count; double time_1[NUM_SIGNAL_REPORT]; double time_2[NUM_SIGNAL_REPORT]; //Used only by the main process int total_signal_count; //For shared memory int shmid; const int shareSize = sizeof(int) * (10); double timestamp() { struct timeval tp; gettimeofday(&tp, NULL); return (double)tp.tv_sec + tp.tv_usec / 1000000.; } pid_t senders[NUM_SENDER_PROCESSES]; pid_t handlers[NUM_HANDLER_PROCESSES]; pid_t reporter; void signal_catcher_1(int); void signal_catcher_2(int); void signal_catcher_int(int); void signal_catcher_monitor(int); void signal_catcher_main(int); void terminate_processes() { //Kill the child processes int status; cout << "Time up terminating the child processes" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); //Wait for the child processes to finish for(int i=0; i<NUM_SENDER_PROCESSES; i++) { waitpid(senders[i], &status, 0); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { waitpid(handlers[i], &status, 0); } waitpid(reporter, &status, 0); } int main(int argc, char *argv[]) { if(argc != 2) { cout << "Required parameters missing. 
" << endl; cout << "Option 1 = 1 which means run for 30 seconds" << endl; cout << "Option 2 = 2 which means run until 100000 signals" << endl; exit(0); } int option = atoi(argv[1]); pid_t pid; if(option == 2) { if(signal(SIGUSR1, signal_catcher_main) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, signal_catcher_main) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } if(signal(SIGINT, signal_catcher_int) == SIG_ERR) { perror("3"); exit(1); } /////////////////////////////////////////////////////////////////////////////////////// ////////////////////// Initializing the shared memory ///////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////// cout << "Initializing the shared memory" << endl; if ((shmid=shmget(IPC_PRIVATE,shareSize,IPC_CREAT|0660))< 0) { perror("shmget fail"); exit(1); } usrsig1_handler_count = (int *) shmat(shmid, NULL, 0); usrsig2_handler_count = usrsig1_handler_count + 1; usrsig1_sender_count = usrsig2_handler_count + 1; usrsig2_sender_count = usrsig1_sender_count + 1; lock_1 = usrsig2_sender_count + 1; lock_2 = lock_1 + 1; lock_3 = lock_2 + 1; lock_4 = lock_3 + 1; lock_5 = lock_4 + 1; lock_6 = lock_5 + 1; //Initialize them to be zero *usrsig1_handler_count = 0; *usrsig2_handler_count = 0; *usrsig1_sender_count = 0; *usrsig2_sender_count = 0; *lock_1 = 0; *lock_2 = 0; *lock_3 = 0; *lock_4 = 0; *lock_5 = 0; *lock_6 = 0; cout << "End of initializing the shared memory" << endl; ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////// End of initializing the shared memory /////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////Registering the signal handlers/////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal handlers" << endl; for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { if((pid = fork()) == 0) { if(i%2 == 0) { struct sigaction action; action.sa_handler = signal_catcher_1; sigset_t block_mask; action.sa_flags = 0; sigaction(SIGUSR1,&action,NULL); if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1 ,SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } struct sigaction action; action.sa_handler = signal_catcher_2; action.sa_flags = 0; sigaction(SIGUSR2,&action,NULL); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { //cout << "Registerd the handler " << pid << endl; handlers[i] = pid; } } cout << "End of registering the signal handlers" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////End of registering the signal handlers ////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////Registering the monitoring process ////////////////////////////////////// 
//////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the monitoring process" << endl; if((pid = fork()) == 0) { struct sigaction action; action.sa_handler = signal_catcher_monitor; sigemptyset(&action.sa_mask); sigset_t block_mask; sigemptyset(&block_mask); sigaddset(&block_mask,SIGUSR1); sigaddset(&block_mask,SIGUSR2); action.sa_flags = 0; action.sa_mask = block_mask; sigaction(SIGUSR1,&action,NULL); sigaction(SIGUSR2,&action,NULL); if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { cout << "Monitor's pid is " << pid << endl; reporter = pid; } cout << "End of registering the monitoring process" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////End of registering the monitoring process//////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Sleep to make sure that the monitor and handler processes are well initialized and ready to handle signals sleep(5); ////////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////Registering the signal generators/////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal generators" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { if((pid = fork()) == 0) { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } srand(i); while(true) { int signal_id = rand()%2 + 1; if(signal_id == 1) { killpg(getpgid(getpid()), SIGUSR1); while(__sync_lock_test_and_set(lock_4,1) != 0) { } (*usrsig1_sender_count)++; *lock_4 = 0; } else { killpg(getpgid(getpid()), SIGUSR2); while(__sync_lock_test_and_set(lock_5,1) != 0) { } (*usrsig2_sender_count)++; *lock_5=0; } int r = rand()%10 + 1; double s = (double)r/100; sleep(s); } exit(0); } else { //cout << "Registered the sender " << pid << endl; senders[i] = pid; } } //cout << "End of registering the signal generators" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////End of registering the signal generators/////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Either sleep for 30 seconds and terminate the program or if the number of signals generated reaches 10000, terminate the program if(option = 1) { sleep(90); terminate_processes(); } else { while(true) { if(total_signal_count >= MAX_SIGNAL_COUNT) { terminate_processes(); } else { sleep(0.001); } } } } void signal_catcher_1(int the_sig) { while(__sync_lock_test_and_set(lock_1,1) != 0) { } (*usrsig1_handler_count) = (*usrsig1_handler_count) + 1; cout << "Signal Handler 1 " << *usrsig1_handler_count << endl; __sync_lock_release(lock_1); } void signal_catcher_2(int the_sig) { while(__sync_lock_test_and_set(lock_2,1) != 0) { } (*usrsig2_handler_count) = (*usrsig2_handler_count) + 1; __sync_lock_release(lock_2); } void signal_catcher_main(int the_sig) { while(__sync_lock_test_and_set(lock_6,1) != 0) { } total_signal_count++; *lock_6 = 0; } void signal_catcher_int(int the_sig) { for(int i=0; 
i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); exit(3); } void signal_catcher_monitor(int the_sig) { cout << "Monitoring process " << *usrsig1_handler_count << endl; } Here is the initial segment of output Monitoring process 0 Monitoring process 0 Monitoring process 0 Monitoring process 0 Signal Handler 1 1 Monitoring process 2 Signal Handler 1 2 Signal Handler 1 3 Signal Handler 1 4 Monitoring process 4 Monitoring process Signal Handler 1 6 Signal Handler 1 7 Monitoring process 7 Monitoring process 8 Monitoring process 8 Signal Handler 1 9 Monitoring process 9 Monitoring process 9 Monitoring process 10 Signal Handler 1 11 Monitoring process 11 Monitoring process 12 Signal Handler 1 13 Signal Handler 1 14 Signal Handler 1 15 Signal Handler 1 16 Signal Handler 1 17 Signal Handler 1 18 Monitoring process 19 Signal Handler 1 20 Monitoring process 20 Signal Handler 1 21 Monitoring process 21 Monitoring process 21 Monitoring process 22 Monitoring process 22 Monitoring process 23 Signal Handler 1 24 Signal Handler 1 25 Monitoring process 25 Signal Handler 1 27 Signal Handler 1 28 Signal Handler 1 29 Here is the segment when the signal handler processes signal handlers are called in a burst Signal Handler 1 456 Signal Handler 1 457 Signal Handler 1 458 Signal Handler 1 459 Signal Handler 1 460 Signal Handler 1 461 Signal Handler 1 462 Signal Handler 1 463 Signal Handler 1 464 Signal Handler 1 465 Signal Handler 1 466 Signal Handler 1 467 Signal Handler 1 468 Signal Handler 1 469 Signal Handler 1 470 Signal Handler 1 471 Signal Handler 1 472 Signal Handler 1 473 Signal Handler 1 474 Signal Handler 1 475 Signal Handler 1 476 Signal Handler 1 477 Signal Handler 1 478 Signal Handler 1 479 Signal Handler 1 480 Signal Handler 1 481 Signal Handler 1 482 Signal Handler 1 483 Signal Handler 1 484 Signal Handler 1 485 Signal Handler 1 486 Signal Handler 1 487 Signal Handler 1 488 Signal Handler 1 489 Signal Handler 1 490 Signal Handler 1 491 Signal Handler 1 492 Signal Handler 1 493 Signal Handler 1 494 Signal Handler 1 495 Signal Handler 1 496 Signal Handler 1 497 Signal Handler 1 498 Signal Handler 1 499 Signal Handler 1 500 Signal Handler 1 501 Signal Handler 1 502 Signal Handler 1 503 Signal Handler 1 504 Signal Handler 1 505 Signal Handler 1 506 Here is the segment when the monitoring processes signal handlers are called in a burst Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 
Monitoring process 140 Why isn't it uniform afterwards? Why are the handlers called in bursts?

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
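(An aside before building the rig: if you just want the raw counters for a single table, a condensed ad-hoc join of the two DMVs - essentially a sketch of what the ShowStats procedure below wraps up, with dbo.Example standing in for whatever table interests you - looks like this.)

SELECT index_name = ISNULL(I.name, I.type_desc),
       qe_seeks   = IUS.user_seeks,        -- Query Executor view
       qe_scans   = IUS.user_scans,
       qe_lookups = IUS.user_lookups,
       se_range_scans       = IOS.range_scan_count,        -- Storage Engine view
       se_singleton_lookups = IOS.singleton_lookup_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL) AS IOS
JOIN sys.indexes AS I
  ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id
LEFT JOIN sys.dm_db_index_usage_stats AS IUS       -- LEFT JOIN: no row until the index has been used
  ON IUS.database_id = DB_ID()
 AND IUS.object_id = IOS.object_id
 AND IUS.index_id  = IOS.index_id;

On a partitioned table the operational stats function returns one row per partition, so you would SUM those columns per index, exactly as the ShowStats procedure does.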
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Server 2003 R2 doesn't allow logon after a few days of uptime

    - by Bryan
    We have a Server 2003 R2 Standard machine (which I'll refer to as SRV01) that's knocking on a bit now, but it still acts as a file, print and SQL server on our company's network. SRV01 hosts user profiles, home directories and pretty much all our business data. Note that our AD is currently at the 2008 R2 functional level. This server is due to be upgraded in the next 12 months, but I've no budget to spend on it just yet.

    A bit of history of this server follows. When SRV01 was first commissioned, it acted as a domain controller (with the same 2003 R2 install it has today), paired with another server that ran Server 2003 R2 SBS. A few years ago we purchased a pair of dedicated DCs (2008 R2), at which point we decommissioned the 2003 SBS server and DCPROMOed SRV01 out of the AD. Up until very recently, SRV01 also ran Exchange 2003; however, we recently purchased a dedicated server for Exchange 2010 and upgraded (following the Microsoft recommended upgrade path). Exchange 2003 was then uninstalled, cleanly to the best of my knowledge.

    Ever since Exchange was removed from SRV01, I'm finding that after a few days of uptime, when I attempt to log on, pressing CTRL-ALT-DEL just hides the 'Welcome to Windows Server 2003' banner and never presents the logon dialog. All I see is a movable mouse pointer and a blank background. It's a similar story with an admin TS session: the RDP client connects and gives me a blank background, but no logon dialog is presented, and the session hangs indefinitely until I give up and close it. The only way I've been able to regain access to the server is to pull the plug on it. While the server does have a battery-backed RAID 5 controller, I'm unhappy about having to do this, so as a temporary measure I've created a scheduled job to reboot SRV01 each night.

    Not only do I dislike the idea of scheduling a reboot of a server like this, it is also causing problems for users who leave desktop PCs logged on overnight. Users complain of 'Delayed Write Failure' errors, a number of users have started to report account lockout problems, and some users cannot connect to shares on SRV01 until they reboot their desktop PCs.

    I've examined the event logs on SRV01 and on the DCs looking for clues, but nothing untoward is being logged. How can I begin to investigate this problem when nothing of any relevance is being logged? Is there some additional logging that can be enabled that might give some clues as to what is causing this? Could Performance Monitor help me out here, and if so, which counters would you consider monitoring?

    It's worth mentioning that while the server is unresponsive via the console and TS, it still responds to clients connecting to shares without problems for several days; after about a week I start to hear sporadic reports of users having trouble accessing shares. I've also tried leaving the console logged on (and locked): when I notice I can no longer log on via TS, I can unlock the server console without a problem, but the server then refuses to reboot or shut down, subsequent attempts to reboot report that a system shutdown is already in progress, and the system then hangs completely. I've tried playing the waiting game for several hours, thinking a timeout might allow the shutdown to continue, but to no avail.
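    As a first data-gathering step on the Performance Monitor question, one option is a counter log created with the built-in logman tool and left running between the nightly reboots, so the capture can be lined up against the time the logon screen stops responding. The sketch below is only illustrative: the collection name, output path and counter selection are my own assumptions, not a known diagnosis.

    REM Create a counter log sampling every 5 minutes (name, path and counters are illustrative)
    logman create counter SRV01_Baseline -c "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" "\Memory\Available MBytes" "\Process(_Total)\Handle Count" -si 00:05:00 -o C:\Logs\SRV01_Baseline

    REM Start collecting (the collection stops at the next reboot)
    logman start SRV01_Baseline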

    Read the article

  • GRUB doesn't recognize partitions on one hard disk

    - by knizz
    I have a dual-boot computer with Windows Vista (on hd0) and Ubuntu 9.10. The bootloader is GRUB, and the Windows bootloader lets me choose between Vista and the Ubuntu installation (a broken Wubi). But now (I don't know why this changed) I can't start the Windows bootloader anymore. I tried "ls" at the GRUB prompt and it gave me a list like: (hd0) (hd1) (hd1,0) (hd1,1) (hd1,2) ... (fd0). It recognizes all partitions of hd1 (the Ubuntu hard disk) but none of hd0 (the Windows disk). Why? Here is the output of the "boot info script" for the technical details:

    Boot Info Script 0.55 dated February 15th, 2010

    ============================= Boot Info Summary: ==============================

    => Grub 2 is installed in the MBR of /dev/sda and looks for
       (UUID=a7c510e3-2399-437b-ab92-fa609e48d63f)/boot/grub.
    => No boot loader is installed in the MBR of /dev/sdb

    sda1: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System: Windows Vista
    Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe /wubildr.mbr /wubildr

    sda2: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files/dirs:

    sdb1: _________________________________________________________________________
    File system:
    Boot sector type: Unknown
    Boot sector info: Mounting failed: mount: unknown filesystem type ''

    sdb2: _________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files/dirs:

    sdb3: _________________________________________________________________________
    File system: Bios Boot Partition
    Boot sector type: -
    Boot sector info:

    sdb4: _________________________________________________________________________
    File system: ext4
    Boot sector type: -
    Boot sector info:
    Operating System: Ubuntu 9.10
    Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

    sdb5: _________________________________________________________________________
    File system: swap
    Boot sector type: -
    Boot sector info:

    =========================== Drive/Partition Info: =============================

    Drive: sda _____________________________________________________________________
    Disk /dev/sda: 640.1 GB, 640135028736 bytes
    255 heads, 63 sectors/track, 77825 cylinders, 1250263728 sectors total
    Units = sectors of 1 × 512 = 512 bytes
    Disk identifier: 0x52554d66

    Partition    Boot    Start          End              Size             Id  System
    /dev/sda1    *       2,048          307,202,047      307,200,000      7   HPFS/NTFS
    /dev/sda2            307,202,048    1,250,258,943    943,056,896      7   HPFS/NTFS

    Drive: sdb _____________________________________________________________________
    Disk /dev/sdb: 640.1 GB, 640135028736 bytes
    255 heads, 63 sectors/track, 77825 cylinders, 1250263728 sectors total
    Units = sectors of 1 × 512 = 512 bytes
    Disk identifier: 0x00000000

    Partition    Boot    Start          End              Size             Id  System
    /dev/sdb1            1              1,250,263,727    1,250,263,727    ee  GPT

    GUID Partition Table detected.
Partition Start End Size System /dev/sdb1 34 262,177 262,144 Microsoft Windows /dev/sdb2 262,178 1,131,253,933 1,130,991,756 Linux or Data /dev/sdb3 1,131,253,934 1,131,255,887 1,954 Bios Boot Partition /dev/sdb4 1,131,255,888 1,245,312,528 114,056,641 Linux or Data /dev/sdb5 1,245,312,529 1,250,263,694 4,951,166 Linux Swap blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 AE1440441440122F ntfs /dev/sda2 3AE66E4DE66E0A09 ntfs data /dev/sdb2 5419D16119DAA4DE ntfs LaufwerkD /dev/sdb4 a7c510e3-2399-437b-ab92-fa609e48d63f ext4 /dev/sdb5 60a0143a-e01b-450a-bbd1-f22059e47b65 swap ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sdb4 / ext4 (rw,errors=remount-ro) =========================== sdb4/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s /boot/grub/grubenv ]; then have_grubenv=true load_env fi set default="0" if [ ${prev_saved_entry} ]; then saved_entry=${prev_saved_entry} save_env saved_entry prev_saved_entry= save_env prev_saved_entry fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/white ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry "Ubuntu, Linux 2.6.31-20-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-20-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-14-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-14-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { linux16 
/boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows Vista (loader) (on /dev/sda1)" { insmod ntfs set root=(hd0,1) search --no-floppy --fs-uuid --set ae1440441440122f chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### =============================== sdb4/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc defaults 0 0 # / was on /dev/sdb4 during installation UUID=a7c510e3-2399-437b-ab92-fa609e48d63f / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb5 during installation UUID=60a0143a-e01b-450a-bbd1-f22059e47b65 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 =================== sdb4: Location of files loaded by Grub: =================== 583.8GB: boot/grub/core.img 583.8GB: boot/grub/grub.cfg 579.7GB: boot/initrd.img-2.6.31-14-generic 580.0GB: boot/initrd.img-2.6.31-20-generic 579.7GB: boot/vmlinuz-2.6.31-14-generic 579.8GB: boot/vmlinuz-2.6.31-20-generic 580.0GB: initrd.img 579.7GB: initrd.img.old 579.8GB: vmlinuz 579.7GB: vmlinuz.old =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on sdb1 00000000 54 34 dc 3b 8b ff 6c fa 3e 59 3d 24 25 af 5f 9b |T4.;..l.>Y=$%._.| 00000010 72 f8 36 3d 56 30 22 fd c6 08 5e 39 7f dc 29 48 |r.6=V0"...^9..)H| 00000020 48 e5 24 52 77 b0 fc 64 b6 ce 48 c3 07 ce b5 81 |H.$Rw..d..H.....| 00000030 06 68 60 4f 6e fb 83 92 df 3a 54 b9 df 21 2a cd |.h`On....:T..!*.| 00000040 1e 2f e2 49 fe cf 81 2d 52 17 1a 4e 66 b4 f3 f0 |./.I...-R..Nf...| 00000050 41 25 e3 96 26 28 fe 19 61 72 75 f8 40 a3 b7 ef |A%..&(..aru.@...| 00000060 5f 79 dc cb 28 44 44 7c 9b 9a 7b 6c 4b 4b 60 0f |_y..(DD|..{lKK`.| 00000070 a9 97 87 bc 85 9f db bb d2 1a 88 9f aa 49 18 d5 |.............I..| 00000080 92 2d db 7e fe f7 8d 7a 18 c0 33 c5 bd 7a 46 07 |.-.~...z..3..zF.| 00000090 c8 27 13 66 94 49 62 9f bc 99 56 55 25 fb 94 a9 |.'.f.Ib...VU%...| 000000a0 3f b2 a7 0a 87 d0 a4 4e 51 f1 09 02 c4 29 eb ff |?......NQ....)..| 000000b0 26 3b 51 3e 5a 0c db ee a6 57 a7 c3 ba a1 74 90 |&;Q>Z....W....t.| 000000c0 ee 70 08 18 cc b8 d0 22 ce 96 c7 cb 68 40 98 20 |.p....."....h@. 
| 000000d0 49 3d 07 ec df d1 8d cf 19 bc 42 90 70 24 01 b4 |I=........B.p$..| 000000e0 28 cf c6 50 d3 95 5a 1b 18 15 33 c7 b2 a8 95 92 |(..P..Z...3.....| 000000f0 bb 93 fe 18 2b 81 c1 6b 9c 30 f1 65 50 6a 80 3d |....+..k.0.ePj.=| 00000100 74 37 a8 59 a6 51 8a 63 b6 d8 16 9f a9 47 2a 7c |t7.Y.Q.c.....G*|| 00000110 04 a7 fe 69 47 02 bf e9 b7 1b 7a ea 60 5c 3c 53 |...iG.....z.`\<S| 00000120 5b 10 78 dc 4d d2 a8 22 30 45 37 fb 56 06 9f 06 |[.x.M.."0E7.V...| 00000130 aa df cf 87 3a 3e cf 72 f2 e5 a6 c6 aa e2 7c 1c |....:>.r......|.| 00000140 64 c2 fc 80 ce 02 fc 7f 0f c6 60 81 bf cd 3b 5a |d.........`...;Z| 00000150 37 a5 38 1b 0c 1b 39 2e d6 f6 3d a2 36 e5 87 c3 |7.8...9...=.6...| 00000160 17 b5 fd ee 33 c7 ce a3 d9 c2 57 dc ee 85 48 9d |....3.....W...H.| 00000170 33 60 02 cd c5 83 44 44 ea b6 07 25 0a 4b a6 6e |3`....DD...%.K.n| 00000180 fc 51 42 cd 84 0b 65 b6 19 a1 e5 b2 eb 14 0c fa |.QB...e.........| 00000190 24 77 f5 44 6e 5d 39 dd b6 8e cc f8 30 fe 21 46 |$w.Dn]9.....0.!F| 000001a0 9c ff 95 c6 c7 b5 0a df 54 ca d2 ac bc 64 d0 97 |........T....d..| 000001b0 94 54 d9 29 0f 91 60 20 c3 e4 53 c2 b0 e4 40 72 |.T.)..` ..S...@r| 000001c0 7e 25 bc 81 06 ad 05 46 14 a7 e6 71 6b 5c db 9c |~%.....F...qk\..| 000001d0 0a 5e 76 23 ae 06 01 36 98 21 65 2c 90 e7 4b 1a |.^v#...6.!e,..K.| 000001e0 2a 2d 80 a5 48 db 9e 14 e0 9f e9 aa 00 e3 77 32 |*-..H.........w2| 000001f0 0f fd 94 db 55 a6 64 46 be ae ca de da ee 89 68 |....U.dF.......h| 00000200 =======Devices which don't seem to have a corresponding hard drive============== sdc sdd sde
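    One stop-gap worth trying from that grub prompt, assuming the Vista loader on (hd0,1) is still intact (the sda1 section above lists bootmgr and the BCD, and the Windows Vista menuentry in the grub.cfg shown above uses exactly these steps), is to chainload it by hand:

    grub> insmod ntfs
    grub> set root=(hd0,1)
    grub> chainloader +1
    grub> boot

    If GRUB still cannot see any partitions on (hd0), the set root step will fail straight away, which at least narrows the problem down to GRUB reading sda's partition table rather than to the menu entry itself.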

    Read the article
