Search Results

Search found 41235 results on 1650 pages for 'source control bindings'.

  • How to specify this 'symbolic link' for the Jungo WinDriver?

    - by user252098
    Just now, I tried to install the Jungo WinDriver on Ubuntu 13.10, but I am puzzled by its manual:

        4.2.3. Linux WinDriver Installation Instructions
        4.2.3.1. Preparing the System for Installation

        In Linux, kernel modules must be compiled with the same header files that the kernel itself was compiled with. Since WinDriver installs kernel modules, it must compile with the header files of the Linux kernel during the installation process. Therefore, before you install WinDriver for Linux, verify that the Linux source code and the file version.h are installed on your machine.

        Install the Linux kernel source code: If you have yet to install Linux, install it, including the kernel source code, by following the instructions for your Linux distribution. If Linux is already installed on your machine, check whether the Linux source code was installed. You can do this by looking for 'linux' in the /usr/src directory. If the source code is not installed, either install it, or reinstall Linux with the source code, by following the instructions for your Linux distribution.

        Install version.h: The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps: become super user ($ su), change directory to the Linux source directory (cd /usr/src/linux), type 'make xconfig', save the configuration by choosing Save and Exit, type 'make dep', then exit super user mode (exit).

        To run GUI WinDriver applications (e.g., DriverWizard [5]; Debug Monitor [7.2]) you must also have version 5.0 of the libstdc++ library, libstdc++.so.5. If you do not have this file, install it from the relevant RPM in your Linux distribution (e.g., compat-libstdc++).

        Before proceeding with the installation, you must also make sure that you have a linux symbolic link. If you do not, create one by typing:

            /usr/src$ ln -s 'target kernel'/linux

        For example, for the Linux 2.4 kernel type:

            /usr/src$ ln -s linux-2.4/ linux

    I can't understand how to specify these two parameters (the 'target kernel' directory and the linux link) on my Ubuntu system.

  • SSH stops at "using username" with IPTables in effect

    - by Rautamiekka
    We used UFW but couldn't get the Source Dedicated ports open, which was weird, so we purged UFW and switched to IPTables, using Webmin to configure. If the inbound chain policy is DENY and the SSH port is open (judging from Webmin), PuTTY says 'using username "root"' and stops there instead of asking for the public key password. With the inbound chain on ACCEPT, the password prompt appears. This problem didn't happen with UFW.

    Picture of the IPTables configuration in Webmin: http://s284544448.onlinehome.us/public/PlusLINE%20Dedicated%20Server,%20Webmin,%20IPTables,%200.jpg (the address is the previous rautamiekka.org).

    iptables-save when on INPUT DENY:

        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *mangle
        :PREROUTING ACCEPT [1430:156843]
        :INPUT ACCEPT [1430:156843]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1415:781598]
        :POSTROUTING ACCEPT [1415:781598]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *nat
        :PREROUTING ACCEPT [2:104]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *filter
        :INPUT DROP [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1247:708906]
        -A INPUT -i lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Services - TCP" -m tcp -m multiport --dports 22,80,443,10000,20,21 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Minecraft - TCP" -m tcp --dport 25565 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Minecraft - UDP" -m udp --dport 25565 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Source Dedicated - TCP" -m tcp --dport 27015 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Source Dedicated - UDP" -m udp -m multiport --dports 4380,27000:27030 -j ACCEPT
        -A INPUT -p udp -m comment --comment "TS3 - UDP - main port" -m udp --dport 9987 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "TS3 - TCP - ServerQuery" -m tcp --dport 10011 -j ACCEPT
        -A OUTPUT -o lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012

    iptables --list when on INPUT DENY:

        Chain INPUT (policy DROP)
        target  prot  opt  source    destination
        ACCEPT  all   --   anywhere  anywhere  /* Machine-within traffic - always allowed */
        ACCEPT  tcp   --   anywhere  anywhere  /* Services - TCP */ tcp multiport dports ssh,www,https,webmin,ftp-data,ftp state NEW,ESTABLISHED
        ACCEPT  tcp   --   anywhere  anywhere  /* Minecraft - TCP */ tcp dpt:25565
        ACCEPT  udp   --   anywhere  anywhere  /* Minecraft - UDP */ udp dpt:25565
        ACCEPT  tcp   --   anywhere  anywhere  /* Source Dedicated - TCP */ tcp dpt:27015
        ACCEPT  udp   --   anywhere  anywhere  /* Source Dedicated - UDP */ udp multiport dports 4380,27000:27030
        ACCEPT  udp   --   anywhere  anywhere  /* TS3 - UDP - main port */ udp dpt:9987
        ACCEPT  tcp   --   anywhere  anywhere  /* TS3 - TCP - ServerQuery */ tcp dpt:10011

        Chain FORWARD (policy ACCEPT)
        target  prot  opt  source    destination

        Chain OUTPUT (policy ACCEPT)
        target  prot  opt  source    destination
        ACCEPT  all   --   anywhere  anywhere  /* Machine-within traffic - always allowed */

    The UFW rules prior to purging, on INPUT DENY (To / Action / From):

        127.0.0.1          ALLOW IN  127.0.0.1
        3306               DENY IN   Anywhere
        20,21/tcp          ALLOW IN  Anywhere
        22/tcp (OpenSSH)   ALLOW IN  Anywhere
        80/tcp             ALLOW IN  Anywhere
        443/tcp            ALLOW IN  Anywhere
        989                ALLOW IN  Anywhere
        990                ALLOW IN  Anywhere
        8075/tcp           ALLOW IN  Anywhere
        9987/udp           ALLOW IN  Anywhere
        10000/tcp          ALLOW IN  Anywhere
        10011/tcp          ALLOW IN  Anywhere
        25565/tcp          ALLOW IN  Anywhere
        27000:27030/tcp    ALLOW IN  Anywhere
        4380/udp           ALLOW IN  Anywhere
        27014:27050/tcp    ALLOW IN  Anywhere
        30033/tcp          ALLOW IN  Anywhere

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: We need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story follows: We have multiple sources to pull from. One set of directories is on a Windows machine (that does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines and deletes the files from the source. However, we're finding any number of bugs and poorly handled error checking. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: We are concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems; sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.

  • How can I use WCF with only basichttpbinding, SSL and Basic Authentication in IIS?

    - by Tim
    Hello, is it possible to set up a WCF service with SSL and Basic Authentication in IIS using only the BasicHttpBinding binding? (I can't use the wsHttpBinding binding.) The site is hosted on IIS 7, with the following authentication set up:

        - Anonymous access: off
        - Basic authentication: on
        - Integrated Windows authentication: off !!

    Service config:

        <services>
          <service name="NameSpace.SomeService">
            <host>
              <baseAddresses>
                <add baseAddress="https://hostname/SomeService/" />
              </baseAddresses>
            </host>
            <!-- Service Endpoints -->
            <endpoint address="" binding="basicHttpBinding"
                      bindingNamespace="http://hostname/SomeMethodName/1"
                      contract="NameSpace.ISomeInterfaceService" name="Default" />
            <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
              <serviceMetadata httpsGetEnabled="true"/>
              <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
              <serviceDebug includeExceptionDetailInFaults="false"/>
              <exceptionShielding/>
            </behavior>
          </serviceBehaviors>
        </behaviors>

    I tried two types of bindings, with two different errors.

    1 - IIS error: 'Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https].'

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="TransportCredentialOnly">
                <transport clientCredentialType="Basic"/>
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    2 - IIS error: 'Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service.'

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="Transport">
                <transport clientCredentialType="Basic"/>
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    Does somebody know how to configure this correctly (if it is possible)?
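    For reference, the transport-security setup from attempt #2 can also be expressed in code. The following is a minimal self-hosted sketch, not the poster's configuration; the service type, contract, and base address are illustrative:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface ISomeInterfaceService
        {
            [OperationContract]
            string Ping();
        }

        public class SomeService : ISomeInterfaceService
        {
            public string Ping() { return "pong"; }
        }

        class Program
        {
            static void Main()
            {
                // BasicHttpBinding over HTTPS, with HTTP Basic authentication
                // handled at the transport level, matching attempt #2 above.
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

                using (var host = new ServiceHost(typeof(SomeService),
                                                  new Uri("https://hostname/SomeService/")))
                {
                    host.AddServiceEndpoint(typeof(ISomeInterfaceService), binding, "");
                    host.Open();
                    Console.WriteLine("Listening; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }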

  • Acrobat Reader ActiveX in WebBrowser stealing focus [C#]

    - by Maciej
    I'm using webBrowser.Navigate(url) control to display page. I noticed this action steals focus from current control (grid) and than I have problem to focus grid back (tired myGrid.Focus, .Select etc...) This is really annoying behaviour of browser... Does anyone knows how to prevent focus stealing by Browser or (if not) hot to force to focus control back ? EDIT: I've also tried webBrowser.DocumentCompleted event to focus back to grid EDIT 2 Good case to test this is openning PDF files webBrowser.Navigate(@"C:\TEMP\test.pdf") I believe this is ActiveX issue. On first glance it looks that this is not problem with focusing control but loosing entire Form focus... EDIT 3 I tried another approach: Form keyPress event: I thought I can capture form's keyPress and move focus from WebBrowser / AdobeReader ActiveX to my control. But surprisingly event is not fired! Looks Reader taken all control and there is no way to do anything programically until you do mouse click on (at least) form's caption Any advice (s) ?
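    One workaround commonly tried for this symptom is to defer the refocus until after the ActiveX host has finished grabbing focus, by posting it to the end of the message queue. A minimal sketch follows; the control names mirror the poster's, and the timing trick is an assumption rather than a guaranteed fix:

        using System;
        using System.Windows.Forms;

        public class PdfForm : Form
        {
            private readonly WebBrowser webBrowser = new WebBrowser { Dock = DockStyle.Fill };
            private readonly DataGridView myGrid = new DataGridView { Dock = DockStyle.Left };

            public PdfForm()
            {
                Controls.Add(webBrowser);
                Controls.Add(myGrid);

                // Post the refocus via BeginInvoke so it runs after the
                // browser/Acrobat host has finished stealing focus.
                webBrowser.DocumentCompleted += (s, e) => BeginInvoke((Action)(() =>
                {
                    Activate();     // re-activate the form itself first
                    myGrid.Focus(); // then move focus back to the grid
                }));
            }
        }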

  • How to diagnose cause, fix, or work around Adobe ActiveX / COM related error 0x80004005 programmatically

    - by Streamline
    I've built a C# .NET app that uses the Adobe ActiveX control to display a PDF. It relies on a couple of DLLs that are shipped with the application. These DLLs interact with the locally installed Adobe Acrobat or Adobe Acrobat Reader on the machine. This app is already being used by some customers and works great for nearly all users (I check that the local machine is running at least version 9 of either Acrobat or Reader). I've found 3 cases where the app returns the error message "Error HRESULT E_FAIL has been returned from a call to a COM component" when loading (when the ActiveX control is loading). I've checked one of these users' machines: he has Acrobat 9 installed and is using it frequently with no problems. It does appear that Acrobat 7 and 8 were installed at one time, since there are entries for them in the registry along with Acrobat 9. I can't reproduce this problem locally, so I am not sure exactly which direction to go.

    The error at the top of the stack trace is:

        System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.

    Some research into this error indicates it is a registry problem. Does anyone have a clue as to how to fix or work around this problem, or how to get to the root of it? The full content of the error message is this:

        System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.
            at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)
            at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)
            at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)
            at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)
            at System.Windows.Forms.AxHost.CreateInstance()
            at System.Windows.Forms.AxHost.GetOcxCreate()
            at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)
            at System.Windows.Forms.AxHost.CreateHandle()
            at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
            at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
            at System.Windows.Forms.AxHost.EndInit()
            at AcrobatChecker.Viewer.InitializeComponent()
            at AcrobatChecker.Viewer..ctor()
            at AcrobatChecker.Form1.btnViewer_Click(Object sender, EventArgs e)
            at System.Windows.Forms.Control.OnClick(EventArgs e)
            at System.Windows.Forms.Button.OnClick(EventArgs e)
            at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
            at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
            at System.Windows.Forms.Control.WndProc(Message& m)
            at System.Windows.Forms.ButtonBase.WndProc(Message& m)
            at System.Windows.Forms.Button.WndProc(Message& m)
            at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
            at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
            at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
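    Since the failure happens inside CoCreateInstance, one diagnostic step is to create the raw COM object outside the AxHost and log the HRESULT and registration state. A sketch follows; "AcroPDF.PDF" is the ProgID the Reader ActiveX control is normally registered under, but treat that as an assumption and verify it on a known-good machine:

        using System;
        using System.Runtime.InteropServices;

        class AcroPdfProbe
        {
            static void Main()
            {
                // Look up the registered type behind the (assumed) ProgID.
                Type t = Type.GetTypeFromProgID("AcroPDF.PDF");
                if (t == null)
                {
                    Console.WriteLine("ProgID not registered - the Reader ActiveX control is missing or broken.");
                    return;
                }

                try
                {
                    // Creating the COM object directly isolates whether 0x80004005
                    // comes from COM registration or from the WinForms AxHost.
                    object pdf = Activator.CreateInstance(t);
                    Console.WriteLine("COM object created OK: " + pdf.GetType().FullName);
                    Marshal.ReleaseComObject(pdf);
                }
                catch (COMException ex)
                {
                    Console.WriteLine("CoCreateInstance failed: 0x" + ex.ErrorCode.ToString("X8"));
                }
            }
        }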

  • Using multiple distinct TCP security binding configurations in a single IIS-hosted WCF service application

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.

        WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this:

        <bindings>
          <netTcpBinding>
            <binding>
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client:

        The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).

    In the server trace log, I find the following exception:

        Protocol Type application/negotiate was sent to a service that does not support that type of upgrade.

    Am I looking in the right direction, or is there a better way to solve this?
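    When the two configurations have to coexist, each endpoint must be tied to its own binding configuration explicitly (via bindingConfiguration in config, or in code). A minimal self-hosted sketch of that wiring, with hypothetical contract names; note that a real TransportWithMessageCredential setup also needs a service certificate, omitted here:

        using System;
        using System.ServiceModel;

        [ServiceContract] public interface ISecureService { [OperationContract] string Echo(string s); }
        [ServiceContract] public interface IPublicService { [OperationContract] string Hello(); }

        public class ComboService : ISecureService, IPublicService
        {
            public string Echo(string s) { return s; }
            public string Hello() { return "hi"; }
        }

        class HostProgram
        {
            static void Main()
            {
                // Username-authenticated binding for the normal endpoints.
                var userBinding = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
                userBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

                // Windows-authenticated binding for the pre-logon endpoint.
                var winBinding = new NetTcpBinding(SecurityMode.Transport);
                winBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

                using (var host = new ServiceHost(typeof(ComboService),
                                                  new Uri("net.tcp://localhost:8081/Combo")))
                {
                    // Each endpoint gets its own explicitly configured binding,
                    // so the security upgrade negotiation can differ per address.
                    host.AddServiceEndpoint(typeof(ISecureService), userBinding, "secure");
                    host.AddServiceEndpoint(typeof(IPublicService), winBinding, "public");
                    host.Open();
                    Console.WriteLine("Up; press Enter to quit.");
                    Console.ReadLine();
                }
            }
        }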

  • Need additional help with binding multiple CommandParameters using MultiBinding

    - by Dave
    I need to have a command handler for a ToggleButton that can take multiple parameters, namely the IsChecked property of said ToggleButton, along with a constant value, which could be a string, byte, int... doesn't matter. I found this great question on SO and followed the answer's link, and read up on MultiBinding and IMultiValueConverter. It went really smoothly until I had to write the MultiBinding, when I realized that I also need to pass a constant value and couldn't do something like:

        <Binding Value="1" />

    I then came across another similar question that Kent Boogaart answered, and then I started to think about ways that I could get around this. One possible way is to not use MVVM and simply add the Tag property to my ToggleButton, in which case my MultiBinding would look like this:

        <MultiBinding Converter="{StaticResource MyConverter}">
            <MultiBinding.Bindings>
                <Binding Path="IsChecked" />
                <Binding Path="Tag" />
            </MultiBinding.Bindings>
        </MultiBinding>

    Kent had made a comment along the lines of, "if you're using MVVM you should be able to get around this issue". However, I'm not sure that's an option for me, even though I have adopted MVVM as my WPF pattern of choice. The reason why I say this is that I have wayyyy more than one ToggleButton in the UserControl, and each of the ToggleButtons' Commands needs to call the same function. But since they are ToggleButtons, I can't use the property bound to IsChecked in the ViewModel, because I don't know which one was last clicked. I suppose I could add another private property to keep track of this, but it seems a little silly. As far as the constant goes, I could probably get rid of it if I did the tracking idea, but I'm not sure of any other way around it. Does anyone have good suggestions for me here? :)

    EDIT -- ok, so I needed to update my bindings, which still didn't work quite right:

        <ToggleButton Tag="1" Command="{Binding MyCommand}" Style="{StaticResource PassFailToggleButtonStyle}"
                      HorizontalContentAlignment="Center" Background="Transparent"
                      BorderBrush="Transparent" BorderThickness="0">
            <ToggleButton.CommandParameter>
                <MultiBinding Converter="{StaticResource MyConverter}">
                    <MultiBinding.Bindings>
                        <Binding Path="IsChecked" RelativeSource="{RelativeSource Mode=Self}" />
                        <Binding Path="Tag" RelativeSource="{RelativeSource Mode=Self}" />
                    </MultiBinding.Bindings>
                </MultiBinding>
            </ToggleButton.CommandParameter>
        </ToggleButton>

    IsChecked was working, but not Tag. I just realized that Tag is a string... duh. It's working now! The key was to use a RelativeSource of Self.
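    For completeness, the converter side of this setup looks something like the following sketch; the packaging of IsChecked plus Tag into a single parameter object is illustrative, not the only option:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Combines the ToggleButton's IsChecked and Tag values into one
        // CommandParameter, in the order the MultiBinding lists them.
        public class ToggleParameterConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType,
                                  object parameter, CultureInfo culture)
            {
                bool isChecked = values[0] as bool? ?? false; // IsChecked is bool?
                string tag = values[1] as string;             // Tag="1" arrives as string

                return Tuple.Create(isChecked, tag);
            }

            public object[] ConvertBack(object value, Type[] targetTypes,
                                        object parameter, CultureInfo culture)
            {
                // One-way use only: the command never writes back.
                throw new NotSupportedException();
            }
        }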

  • UserControl Focus Issue - Focus() sometimes returns false

    - by Craigger
    I have a user control that behaves similarly to a tab control. The tab headers are UserControls that override Paint events to make them look custom. In order to leverage the Validating events on various controls on our tab pages, when the user clicks on the tab headers we set the focus to the TabHeader user control. I've noticed that Control.Focus() sometimes returns false, but the documentation does not say why Control.Focus() would ever return false, other than that the control can't receive focus. But I don't know why. Here's what I see:

        - If my TabHeader UserControl does not contain any subcontrols, and I call myControl.Focus() from the MouseClick event, Focus returns true.
        - If my TabHeader UserControl contains a subcontrol, and I call myControl.Focus() from the MouseClick event, Focus returns false.
        - If my TabHeader UserControl contains a subcontrol, and I call myControl.subControl.Focus() from the myControl.MouseClick event, Focus returns true.

    Can someone explain this?
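    One thing worth checking: a UserControl is a container control, so focus tends to be forwarded to a selectable child rather than kept by the control itself. A sketch of forcing the header control to be selectable in its own right; whether this interacts cleanly with the poster's designer-generated styles is an assumption:

        using System.Windows.Forms;

        public class TabHeaderControl : UserControl
        {
            public TabHeaderControl()
            {
                // By default a UserControl acts as a container and hands focus to
                // its first selectable child; opting out lets the header itself
                // take focus, so Validating fires on the previously focused control.
                SetStyle(ControlStyles.ContainerControl, false);
                SetStyle(ControlStyles.Selectable, true);
                TabStop = true;
            }
        }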

  • Custom form designer, move/resize controls using WinAPI

    - by jonny
    I have to fix some problems in, and enhance, a form designer written long ago for a database project. In the Design Panel class code I encountered these lines:

        private void DesignPanel_MouseDown(object sender, MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Left)
            {
                (sender as Control).Capture = false;
                switch (FMousePosition)
                {
                    case MousePosition.mpNone:
                        // 0xF000 is the documented SC_SIZE system command; the low
                        // nibble appears to be an undocumented direction code
                        // (1-8 for the edges/corners, 9 apparently starting a move drag).
                        SendMessage((sender as Control).Handle, WM_SYSCOMMAND, 0xF009, 0); break; // Move
                    case MousePosition.mpRightBottom:
                        SendMessage((sender as Control).Handle, WM_SYSCOMMAND, 0xF008, 0); break; // RB
                    case MousePosition.mpLeftBottom:
                        SendMessage((sender as Control).Handle, WM_SYSCOMMAND, 0xF007, 0); break; // LB
                    // ... here are similar cases ...
                    case MousePosition.mpLeft:
                        SendMessage((sender as Control).Handle, WM_SYSCOMMAND, 0xF001, 0); break; // L
                }
            }
        }

    FMousePosition indicates whether the mouse was over any edge of the selected control. What is confusing me is these Windows messages: it seems there is no documentation on WM_SYSCOMMAND with parameters 0xF001-0xF009 (maybe it starts some kind of 'drag/resize sequence'). Any ideas? If my suggestion is right, then how can I cancel these sequences?

  • Running Jetty 7 server in eclipse?

    - by jh314159
    I'm trying to set up Eclipse to run and deploy my projects to a Jetty 7 server (the oldest version available from http://download.eclipse.org/jetty/). I've downloaded Jetty 7 and unpacked it, and I've installed the Jetty plugin from the available server adapters list, but when I try to configure a new Jetty server, the server type list only contains "Jetty 6". If I use this and point it at my server runtime, when I try to start it I get the following error:

        java.lang.NoClassDefFoundError: org/mortbay/start/Main
        Caused by: java.lang.ClassNotFoundException: org.mortbay.start.Main
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
        Exception in thread "main"

    I'm guessing I need a different adapter to start Jetty 7, but I have no idea where to find it.

  • Error when using TransformToAncestor: "The specified Visual is not an ancestor of this Visual."

    - by Brian Sullivan
    I'm trying to get the offset of a control relative to the top of its window, but I'm running into trouble when using the TransformToAncestor method of the control. Note: this code is in a value converter which converts from a control to its relative Y position in relation to the window.

        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            var ctrl = (Control)value;
            var win = Window.GetWindow(ctrl);
            var transform = ctrl.TransformToAncestor(win); // Exception thrown here.
            var pt = transform.Transform(new Point(0, 0));
            return pt.Y;
        }

    The call to Window.GetWindow works just fine and returns the correct window object inside which the control resides. Am I misunderstanding what WPF considers an "ancestor"? I would think that, given the result of GetWindow, that window would be an ancestor of the control. Are there certain nesting patterns that would cause the line of ancestry to be cut off at a certain point?
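    A defensive variant is to verify the ancestry before transforming, since TransformToAncestor throws when the window is not actually a visual ancestor of the control (for example before the control is loaded into the visual tree, or across a popup boundary). A sketch, assuming the same converter shape:

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public class TopOffsetConverter : IValueConverter
        {
            public object Convert(object value, Type targetType,
                                  object parameter, CultureInfo culture)
            {
                var ctrl = (Control)value;
                var win = Window.GetWindow(ctrl);

                // GetWindow answers "which Window owns this element's tree",
                // which is weaker than "is a visual ancestor"; check explicitly.
                if (win == null || !win.IsAncestorOf(ctrl))
                    return 0.0;

                return ctrl.TransformToAncestor(win).Transform(new Point(0, 0)).Y;
            }

            public object ConvertBack(object value, Type targetType,
                                      object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }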

  • Winform problem with autoscrolling of the ScrollableControl

    - by BobLim
    Hi guys, I have a problem with autoscrolling of the .NET ScrollableControl. I am using a TabPage, which inherits from ScrollableControl in the class hierarchy. Every TabPage object contains only one UserControl-derived control, which draws the landscape; there is no other control on the tab page. The way my application is used, the user drags a file from Windows Explorer and drops it onto the TabPage. As more files are dragged and dropped, the UserControl-derived control expands to accommodate the drawing of the files, and auto-scrolling is enabled.

    The problem I have is that when I mouse-click on the UserControl, the vertical and horizontal scrollbars scroll back to the (0,0) position. I want the vertical and horizontal scrollbars to remain at their original scrolled position whatever happens. I believe that when I mouse-click on the UserControl, it comes into focus, and that triggers the auto-scrolling to the (0,0) position. Please help. Thanks in advance!
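    The scroll-to-(0,0) on focus usually comes from ScrollableControl scrolling the newly focused child into view, and since .NET 2.0 that behavior can be overridden per container. A sketch follows, applied to a custom panel standing in for the tab page's scrolling surface (whether it fits the poster's TabPage layout is an assumption):

        using System.Drawing;
        using System.Windows.Forms;

        public class StickyScrollPanel : Panel
        {
            public StickyScrollPanel()
            {
                AutoScroll = true;
            }

            // Called whenever a child control receives focus; the base class
            // returns a position that scrolls the child into view. Returning
            // the current display origin leaves the scrollbars where they are.
            protected override Point ScrollToControl(Control activeControl)
            {
                return DisplayRectangle.Location;
            }
        }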

  • jquery dynamic form plugin: adding nested field support

    - by goliatone
    Hi, I'm using the jQuery dynamicForm plugin, but I need support for nested field duplication. I would like some advice on how to modify the plugin to add such functionality. I'm not a JavaScript/jQuery developer, so any advice on which route to take will be much appreciated. Here is the plugin's code:

        /**
         * @author Stephane Roucheray
         * @extends jQuery
         */
        jQuery.fn.dynamicForm = function (plusElmnt, minusElmnt, options) {
            var source = jQuery(this),
                minus = jQuery(minusElmnt),
                plus = jQuery(plusElmnt),
                template = source.clone(true),
                fieldId = 0,
                formFields = "input, checkbox, select, textarea",
                insertBefore = source.next(),
                clones = [],
                defaults = { duration: 1000 };

            // Extend default options with those provided
            options = $.extend(defaults, options);

            isPlusDescendentOfTemplate = source.find("*").filter(function () {
                return this == plus.get(0);
            });
            isPlusDescendentOfTemplate = isPlusDescendentOfTemplate.length > 0 ? true : false;

            function normalizeElmnt(elmnt) {
                elmnt.find(formFields).each(function () {
                    var nameAttr = jQuery(this).attr("name"),
                        idAttr = jQuery(this).attr("id");
                    /* Normalize field name attributes */
                    if (!nameAttr) {
                        jQuery(this).attr("name", "field" + fieldId + "[]");
                    }
                    if (!/\[\]$/.exec(nameAttr)) {
                        jQuery(this).attr("name", nameAttr + "[]");
                    }
                    /* Normalize field id attributes */
                    if (idAttr) {
                        /* Normalize attached label */
                        jQuery("label[for='" + idAttr + "']").each(function () {
                            jQuery(this).attr("for", idAttr + fieldId);
                        });
                        jQuery(this).attr("id", idAttr + fieldId);
                    }
                    fieldId++;
                });
            };

            /* Hide minus element */
            minus.hide();

            /* If plus element is within the template */
            if (isPlusDescendentOfTemplate) {
                function clickOnPlus(event) {
                    var clone, currentClone = clones[clones.length - 1] || source;
                    event.preventDefault();
                    /* On first add, normalize source */
                    if (clones.length == 0) {
                        normalizeElmnt(source);
                        currentClone.find(minusElmnt).hide();
                        currentClone.find(plusElmnt).hide();
                    } else {
                        currentClone.find(plusElmnt).hide();
                    }
                    /* Clone template and normalize it */
                    clone = template.clone(true).insertAfter(clones[clones.length - 1] || source);
                    normalizeElmnt(clone);
                    /* Normalize template id attribute */
                    if (clone.attr("id")) {
                        clone.attr("id", clone.attr("id") + clones.length);
                    }
                    plus = clone.find(plusElmnt);
                    minus = clone.find(minusElmnt);
                    minus.get(0).removableClone = clone;
                    minus.click(clickOnMinus);
                    plus.click(clickOnPlus);
                    if (options.limit && (options.limit - 2) > clones.length) {
                        plus.show();
                    } else {
                        plus.hide();
                    }
                    clones.push(clone);
                }

                function clickOnMinus(event) {
                    event.preventDefault();
                    if (this.removableClone.effect && options.removeColor) {
                        that = this;
                        this.removableClone.effect("highlight", { color: options.removeColor },
                            options.duration, function () { that.removableClone.remove(); });
                    } else {
                        this.removableClone.remove();
                    }
                    clones.splice(clones.indexOf(this.removableClone), 1);
                    if (clones.length == 0) {
                        source.find(plusElmnt).show();
                    } else {
                        clones[clones.length - 1].find(plusElmnt).show();
                    }
                }

                /* Handle click on plus */
                plus.click(clickOnPlus);
                /* Handle click on minus */
                minus.click(function (event) { });
            } else {
                /* If plus element is out of the template */
                /* Handle click on plus */
                plus.click(function (event) {
                    var clone;
                    event.preventDefault();
                    /* On first add, normalize source */
                    if (clones.length == 0) {
                        normalizeElmnt(source);
                        jQuery(minusElmnt).show();
                    }
                    /* Clone template and normalize it */
                    clone = template.clone(true).insertAfter(clones[clones.length - 1] || source);
                    if (clone.effect && options.createColor) {
                        clone.effect("highlight", { color: options.createColor }, options.duration);
                    }
                    normalizeElmnt(clone);
                    /* Normalize template id attribute */
                    if (clone.attr("id")) {
                        clone.attr("id", clone.attr("id") + clones.length);
                    }
                    if (options.limit && (options.limit - 3) < clones.length) {
                        plus.hide();
                    }
                    clones.push(clone);
                });
                /* Handle click on minus */
                minus.click(function (event) {
                    event.preventDefault();
                    var clone = clones.pop();
                    if (clones.length >= 0) {
                        if (clone.effect && options.removeColor) {
                            that = this;
                            clone.effect("highlight", { color: options.removeColor, mode: "hide" },
                                options.duration, function () { clone.remove(); });
                        } else {
                            clone.remove();
                        }
                    }
                    if (clones.length == 0) {
                        jQuery(minusElmnt).hide();
                    }
                    plus.show();
                });
            }
        };

  • RenderControl error

    - by Rubans
    Hi, I have a grid whose columns I'm templating, and one column is a user control. Since I'm adding these in the code-behind (using ITemplate), I get an error when I try to render the UserControl. This particular user control requires a ScriptManager, since it includes an AJAX control (RadComboBox). The error I get is:

        Script control 'LookupComboBox' is not a registered script control. Script controls must be registered using RegisterScriptControl() before calling RegisterScriptDescriptors().

    I'm not even sure my approach, using a user control and RenderControl, is correct.
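    One workaround people reach for when RenderControl needs a ScriptManager is to render through a throwaway Page that carries one, so the script controls can register during a real page lifecycle. A sketch; the LookupComboBox path would be the poster's, and the rest of the wiring is an assumption:

        using System.IO;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.HtmlControls;

        public static class ControlRenderer
        {
            // Renders a user control inside a scratch Page + ScriptManager so
            // script controls can call RegisterScriptControl() normally.
            public static string RenderWithScriptManager(string userControlPath)
            {
                var page = new Page();
                var form = new HtmlForm();
                page.Controls.Add(form);
                form.Controls.Add(new ScriptManager());
                form.Controls.Add(page.LoadControl(userControlPath));

                var writer = new StringWriter();
                // Execute runs the scratch page's full lifecycle and captures its output.
                HttpContext.Current.Server.Execute(page, writer, false);
                return writer.ToString();
            }
        }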

  • error executing aapt, all of the sudden

    - by pjv
    I know there are a lot of these topics around, but none seem to help in my case, nor describe it exactly. The best similar one is "aapt not found under the right path".

    My problem is that I can be using Eclipse for a whole evening, programming, compiling and using my device, and then suddenly I get "error executing aapt" for my current project, and of course R.java isn't (properly) generated anymore. I then restart Eclipse and everything goes away. I'm seeing this once a day on average, however.

    I've recently switched to amd64 and installed the latest Android 2.3 SDK and matching tools. I know that there is now a platform-tools folder that has an aapt version that should work independently of the SDK version. At first I had added this directory to my PATH, as instructed on the SDK website. I've also tried not adding it to my path and making a link platforms/android-9/tools so that every SDK version might use its own old copy. Needless to say, platform-tools/aapt is there, has the right permissions, and I have been able to execute it on the command line at any time. When I do write a faulty XML file or such, and appropriately get an error, I see an extra line that says:

        aapt: /lib32/libz.so.1: no version information available

    I'm running a recent Gentoo Linux system. I have everything installed to support x86 on amd64, but have re-emerged emul-linux-x86-baselibs and zlib just to be sure. The problem persists. I do see some pages that spell horror over some zlib bugs, but I'm not sure if that's related. I realize I'm not on the reference Ubuntu platform, but surely the difference cannot be that great? It might very well be a bug in aapt or the tools themselves. Why would it suddenly stop working?

    I have also experienced that the ids in R.java were incorrect, namely that simple findViewById() code would give ClassCastExceptions because of mixed-up ids one time, and then work perfectly without any changes but only a "clean project", in the aftermath of a failing aapt.

    Finally, I've run a few commands on aapt that don't seem to add any extra information:

        # ldd aapt
        ./aapt: /lib32/libz.so.1: no version information available (required by ./aapt)
        linux-gate.so.1 => (0xffffe000)
        librt.so.1 => /lib32/librt.so.1 (0x4f864000)
        libpthread.so.0 => /lib32/libpthread.so.0 (0x4f849000)
        libz.so.1 => /lib32/libz.so.1 (0xf7707000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/32/libstdc++.so.6 (0x415e9000)
        libm.so.6 => /lib32/libm.so.6 (0x4f876000)
        libgcc_s.so.1 => /lib32/libgcc_s.so.1 (0x4fac6000)
        libc.so.6 => /lib32/libc.so.6 (0x4f5ed000)
        /lib/ld-linux.so.2 (0x4f5ca000)

        # file aapt
        aapt: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped

    Can anybody tell whether anything is wrong with my configuration? Does it smell like a bug, perhaps (otherwise let's report it (again))?

    Update 2011-01-06: I've gained some more knowledge. When I was recently trying to export a signed apk, I ran into another aapt-related error message (full details from the Eclipse error view) that I hadn't seen before. Note here, too, that I can just restart Eclipse and export apks again without problems, at least for a little while. I'm starting to think it is related to lack of memory on my system. The message "Onvoldoende geheugen beschikbaar" means "insufficient memory available". I have also been seeing insufficient-memory errors in DDMS when I'm dumping HPROF files.

    Here is the error log (shortened):

        !ENTRY com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.097
        !MESSAGE Export Wizard Error
        !STACK 1
        org.eclipse.core.runtime.CoreException: Failed to export application
            at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
            at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
        Caused by: com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
            at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
            at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
            ... 5 more
        Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
            ...
        Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
            ...

        !SUBENTRY 1 com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.098
        !MESSAGE Failed to export application
        !STACK 0
        com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
            at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
            at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
            at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
            at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
            at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
        Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
            ...
        Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar

  • Copy task in Visual Studio 2008

    - by Maurizio Reginelli
    This is a question related to a previous question I posted (see here). I need to configure a .csproj file to copy some files from one directory to another (let me call them SOURCE and DESTINATION). I also need to change the SOURCE path depending on the configuration I'm using. For example, if I compile my project in Debug mode, the SOURCE path must contain a subfolder called Debug.

    I tried the solution proposed by Schmitt in the previous post, using $(ConfigurationName) to set the dynamic directory in the SOURCE path. When I opened the solution containing that project, a list of links to the source files appeared in the main tree of the project, and they were correctly related to Debug mode. But when I changed to Release mode, I saw that the paths of the linked source files were still set to the Debug version.

    Is there a way to specify a parameter in the Include attribute of the SourceFiles element? Thank you.

  • Please help me correct the small bugs in this image editor

    - by Alex
    Hi, I'm working on a website that will sell handmade jewelry, and I'm finishing the image editor, but it's not behaving quite right. Basically, the user uploads an image, which is saved as a source and then resized to fit the user's screen and saved as a temp. The user then goes to a screen that allows them to crop the image and save it to its final versions. All of that works fine, except that the final versions have three bugs. First, there is a black horizontal line at the very bottom of the image. Second, an outline of sorts follows the edges; I thought it was because I was reducing the quality, but even at 100% it still shows up. And lastly, I've noticed that the cropped image is always a couple of pixels lower than what I'm specifying.

    Anyway, I'm hoping someone who has experience editing images with C# can take a look at the code and see where I might be going off the right path. By the way, this is an ASP.NET MVC application. Here's the code:

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Linq;
        using System.Web;

        namespace Website.Models.Providers
        {
            public class ImageProvider
            {
                private readonly ProductProvider ProductProvider = null;
                private readonly EncoderParameters HighQualityEncoder = new EncoderParameters();
                private readonly ImageCodecInfo JpegCodecInfo =
                    ImageCodecInfo.GetImageEncoders().Single(c => (c.MimeType == "image/jpeg"));
                private readonly string Path =
                    HttpContext.Current.Server.MapPath("~/Resources/Images/Products");
                private readonly short[][] Dimensions = new short[3][]
                {
                    new short[2] { 640, 480 },
                    new short[2] { 240, 0 },
                    new short[2] { 80, 60 }
                };

                // Constructor
                public ImageProvider(ProductProvider ProductProvider)
                {
                    this.ProductProvider = ProductProvider;
                    HighQualityEncoder.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
                }

                // Crop
                public void Crop(string FileName, Image Image, Crop Crop)
                {
                    using (Bitmap Source = new Bitmap(Image))
                    {
                        using (Bitmap Target = new Bitmap(Crop.Width, Crop.Height))
                        {
                            using (Graphics Graphics = Graphics.FromImage(Target))
                            {
                                Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                                Graphics.SmoothingMode = SmoothingMode.HighQuality;
                                Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
                                Graphics.CompositingQuality = CompositingQuality.HighQuality;
                                Graphics.DrawImage(Source,
                                    new Rectangle(0, 0, Target.Width, Target.Height),
                                    new Rectangle(Crop.Left, Crop.Top, Crop.Width, Crop.Height),
                                    GraphicsUnit.Pixel);
                            };
                            Target.Save(FileName, JpegCodecInfo, HighQualityEncoder);
                        };
                    };
                }

                // Crop & Resize
                public void CropAndResize(Product Product, Crop Crop)
                {
                    using (Image Source = Image.FromFile(String.Format("{0}/{1}.source", Path, Product.ProductId)))
                    {
                        using (Image Temp = Image.FromFile(String.Format("{0}/{1}.temp", Path, Product.ProductId)))
                        {
                            float Percent = ((float)Source.Width / (float)Temp.Width);
                            short Width = (short)(Temp.Width * Percent);
                            short Height = (short)(Temp.Height * Percent);
                            Crop.Height = (short)(Crop.Height * Percent);
                            Crop.Left = (short)(Crop.Left * Percent);
                            Crop.Top = (short)(Crop.Top * Percent);
                            Crop.Width = (short)(Crop.Width * Percent);
                            Img Img = new Img();
                            this.ProductProvider.AddImageAndSave(Product, Img);
                            this.Crop(String.Format("{0}/{1}.cropped", Path, Img.ImageId), Source, Crop);
                            using (Image Cropped = Image.FromFile(String.Format("{0}/{1}.cropped", Path, Img.ImageId)))
                            {
                                this.Resize(this.Dimensions[0], String.Format("{0}/{1}-L.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder);
                                this.Resize(this.Dimensions[1], String.Format("{0}/{1}-T.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder);
                                this.Resize(this.Dimensions[2], String.Format("{0}/{1}-S.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder);
                            };
                        };
                    };
                    this.Purge(Product);
                }

                // Queue
                public void QueueFor(Product Product, HttpPostedFileBase PostedFile)
                {
                    using (Image Image = Image.FromStream(PostedFile.InputStream))
                    {
                        this.Resize(new short[2] { 1152, 0 }, String.Format("{0}/{1}.temp", Path, Product.ProductId), Image, HighQualityEncoder);
                    };
                    PostedFile.SaveAs(String.Format("{0}/{1}.source", Path, Product.ProductId));
                }

                // Purge
                private void Purge(Product Product)
                {
                    string Source = String.Format("{0}/{1}.source", Path, Product.ProductId);
                    string Temp = String.Format("{0}/{1}.temp", Path, Product.ProductId);
                    if (File.Exists(Source)) { File.Delete(Source); };
                    if (File.Exists(Temp)) { File.Delete(Temp); };
                    foreach (Img Img in Product.Imgs)
                    {
                        string Cropped = String.Format("{0}/{1}.cropped", Path, Img.ImageId);
                        if (File.Exists(Cropped)) { File.Delete(Cropped); };
                    };
                }

                // Resize
                public void Resize(short[] Dimensions, string FileName, Image Image, EncoderParameters EncoderParameters)
                {
                    if (Dimensions[1] == 0)
                    {
                        Dimensions[1] = (short)(Image.Height / ((float)Image.Width / (float)Dimensions[0]));
                    };
                    using (Bitmap Bitmap = new Bitmap(Dimensions[0], Dimensions[1]))
                    {
                        using (Graphics Graphics = Graphics.FromImage(Bitmap))
                        {
                            Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                            Graphics.SmoothingMode = SmoothingMode.HighQuality;
                            Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
                            Graphics.CompositingQuality = CompositingQuality.HighQuality;
                            Graphics.DrawImage(Image, 0, 0, Dimensions[0], Dimensions[1]);
                        };
                        Bitmap.Save(FileName, JpegCodecInfo, EncoderParameters);
                    };
                }
            }
        }

    Here's one of the images this produces: [example image not preserved in this copy]
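    The edge "outline" symptom is the classic sign of DrawImage sampling past the source rectangle's border during high-quality interpolation. A hedged sketch of the usual mitigation (an ImageAttributes wrap mode plus an explicit source rectangle), offered as a direction to try rather than a drop-in patch for the class above:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static class EdgeSafeDraw
        {
            // Draws src into destRect without the ghost border / dark edge line
            // that high-quality interpolation can produce when it samples
            // texels outside the source rectangle.
            public static void Draw(Graphics g, Image src, Rectangle destRect, Rectangle srcRect)
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.PixelOffsetMode = PixelOffsetMode.HighQuality;

                using (var attrs = new ImageAttributes())
                {
                    // Mirror edge pixels instead of wrapping to the opposite side,
                    // so the interpolator never blends in black/far-edge texels.
                    attrs.SetWrapMode(WrapMode.TileFlipXY);
                    g.DrawImage(src, destRect,
                                srcRect.X, srcRect.Y, srcRect.Width, srcRect.Height,
                                GraphicsUnit.Pixel, attrs);
                }
            }
        }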

  • errors deploying an EAR with EJBs 2.1 into JBoss AS5

    - by Marina
    Hi, I'm porting an application with EJBs 2.1 from WebLogic 9 to JBoss AS 5. I have made some of the changes, like adding jboss.xml descriptors to the EJBs and fixing the application.xml of the EAR, but there are still problems when deploying the EAR. Here is a summary of the latest error I'm getting when the first EJB is deployed by JBoss (the full stack trace follows at the end of the message):

        14:15:48,124 ERROR [AbstractKernelController] Error installing to Parse: name=vfsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/ state=Not Installed mode=Manual requiredState=Parse
        org.jboss.deployers.spi.DeploymentException: Error creating managed object for vfsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/admin-ejb.jar/
        ...
        Caused by: org.jboss.xb.binding.JBossXBException: Failed to parse source: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j2ee/dtd/jboss_2_4.dtd
        ...
        Caused by: org.jboss.xb.binding.JBossXBRuntimeException: -1:-1 94:3 The markup in the document preceding the root element must be well-formed.

    Is this a problem with parsing the jboss_2_4.dtd itself, or is something wrong with my descriptors for the EJB? When I try to validate the jboss_2_4.dtd in an XML editor, it does complain about a syntax error at line 94:1, which is the beginning of the first declaration, although it looks fine. Any ideas? Thanks! Marina

    Full error stack trace:

        14:15:48,124 ERROR [AbstractKernelController] Error installing to Parse: name=vfsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/ state=Not Installed mode=Manual requiredState=Parse
        org.jboss.deployers.spi.DeploymentException: Error creating managed object for vfsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/admin-ejb.jar/
            at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:362)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:322)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:294)
            at org.jboss.deployment.JBossEjbParsingDeployer.createMetaData(JBossEjbParsingDeployer.java:95)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.deploy(AbstractParsingDeployerWithOutput.java:234)
            at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1210)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
            at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
            at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
            at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
            at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
            at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
            at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70)
            at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53)
            at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361)
            at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
            at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
            at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
            at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306)
            at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271)
            at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461)
            at org.jboss.Main.boot(Main.java:221)
            at org.jboss.Main$1.run(Main.java:556)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: org.jboss.xb.binding.JBossXBException: Failed to parse source: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j2ee/dtd/jboss_2_4.dtd
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBParser.java:203)
            at org.jboss.xb.binding.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:168)
            at org.jboss.xb.util.JBossXBHelper.parse(JBossXBHelper.java:189)
            at org.jboss.xb.util.JBossXBHelper.parse(JBossXBHelper.java:166)
            at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(SchemaResolverDeployer.java:137)
            at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(SchemaResolverDeployer.java:121)
            at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parseAndInit(AbstractVFSParsingDeployer.java:256)
            at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parse(AbstractVFSParsingDeployer.java:188)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:348)
            ... 35 more
        Caused by: org.jboss.xb.binding.JBossXBRuntimeException: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j2ee/dtd/jboss_2_4.dtd
            at org.jboss.xb.binding.resolver.AbstractMutableSchemaResolver.resolve(AbstractMutableSchemaResolver.java:293)
            at org.jboss.xb.binding.sunday.unmarshalling.SundayContentHandler.startElement(SundayContentHandler.java:274)
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser$DelegatingContentHandler.startElement(SaxJBossXBParser.java:401)
            at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
            at org.apache.xerces.xinclude.XIncludeHandler.startElement(Unknown Source)
            at org.apache.xerces.impl.dtd.XMLDTDValidator.startElement(Unknown Source)
            at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
            at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.scanRootElementHook(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
            at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
            at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBParser.java:199)
            ... 43 more
        Caused by: org.jboss.xb.binding.JBossXBRuntimeException: -1:-1 94:3 The markup in the document preceding the root element must be well-formed.
            at org.jboss.xb.binding.sunday.unmarshalling.XsdBinderTerminatingErrorHandler.handleError(XsdBinderTerminatingErrorHandler.java:40)
            at org.apache.xerces.impl.xs.XMLSchemaLoader.reportDOMFatalError(Unknown Source)
            at org.apache.xerces.impl.xs.XSLoaderImpl.load(Unknown Source)
            at org.jboss.xb.binding.Util.loadSchema(Util.java:395)
            at org.jboss.xb.binding.sunday.unmarshalling.XsdBinder.bind(XsdBinder.java:176)
            at org.jboss.xb.binding.sunday.unmarshalling.XsdBinder.bind(XsdBinder.java:147)
            at org.jboss.xb.binding.resolver.AbstractMutableSchemaResolver.resolve(AbstractMutableSchemaResolver.java:285)
            ... 58 more

  • Populate InfoPath Combo Box

    - by fwm
    I am trying to populate a drop-down and found this page, which provides part of the solution: http://www.bizsupportonline.net/infopath2007/programmatically-fill-populate-drop-down-list-box-infopath-2007.htm

    I got this to work, but if players.xml was changed, the drop-down list didn't reflect it. I had to rename the file and add the new file as my secondary data source to see the change. For my purposes, the XML file that is used to populate the drop-down box needs to be created anew. Each time the form is loaded, I programmatically create a new XML file, but the drop-down doesn't pick up the new file, even though I have named it as the secondary data source. It seems that once you set a file to be your secondary data source, it takes a snapshot of that file at the time the source was added, and if you change the file, you must rename it and add it again as your secondary source.

    How do I populate a drop-down box from an XML file such that if the XML file changes, the contents of the drop-down change accordingly?

  • Master Details and collectionViewSource in separate views cannot make it work.

    - by devnet247
    Hi all, I really cannot seem to make this work, or understand how it works, with separate views. It all works fine if I bundle it all together in a single window. I have a list of Countries-Cities-etc... When you select a country, it should load its cities.

    Works: I bind 3 listboxes successfully using CollectionViewSources and more or less no code-behind (just code to set the DataContext and handle SelectionChanged). You can download the project here: http://cid-9db5ae91a2948485.skydrive.live.com/self.aspx/PublicFolder/2MasterDetails.zip

    <Window.Resources>
        <CollectionViewSource Source="{Binding}" x:Key="cvsCountryList"/>
        <CollectionViewSource Source="{Binding Source={StaticResource cvsCountryList},Path=Cities}" x:Key="cvsCityList"/>
        <CollectionViewSource Source="{Binding Source={StaticResource cvsCityList},Path=Hotels}" x:Key="cvsHotelList"/>
    </Window.Resources>
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition/>
            <ColumnDefinition/>
            <ColumnDefinition/>
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition/>
        </Grid.RowDefinitions>
        <TextBlock Grid.Column="0" Grid.Row="0" Text="Countries"/>
        <TextBlock Grid.Column="1" Grid.Row="0" Text="Cities"/>
        <TextBlock Grid.Column="2" Grid.Row="0" Text="Hotels"/>
        <ListBox Grid.Column="0" Grid.Row="1" Name="lstCountries" IsSynchronizedWithCurrentItem="True"
                 ItemsSource="{Binding Source={StaticResource cvsCountryList}}" DisplayMemberPath="Name"
                 SelectionChanged="OnSelectionChanged"/>
        <ListBox Grid.Column="1" Grid.Row="1" Name="lstCities" IsSynchronizedWithCurrentItem="True"
                 ItemsSource="{Binding Source={StaticResource cvsCityList}}" DisplayMemberPath="Name"
                 SelectionChanged="OnSelectionChanged"/>
        <ListBox Grid.Column="2" Grid.Row="1" Name="lstHotels"
                 ItemsSource="{Binding Source={StaticResource cvsHotelList}}" DisplayMemberPath="Name"
                 SelectionChanged="OnSelectionChanged"/>
    </Grid>

    Does not work: I am trying to implement the same thing using a view for each part (LeftSideMasterControl - RightSideDetailsControls), but I cannot seem to make them bind. Can you help? I would be very grateful, so that I can understand how you communicate between user controls. You can download the project here: http://cid-9db5ae91a2948485.skydrive.live.com/self.aspx/PublicFolder/2MasterDetails.zip I have the following:

    LeftSideMasterControl.xaml
    <Grid>
        <ListBox Name="lstCountries" SelectionChanged="OnSelectionChanged" DisplayMemberPath="Name"
                 ItemsSource="{Binding Countries}"/>
    </Grid>

    RightViewDetailsControl.xaml

    MainView.xaml
    <UserControl.Resources>
        <CollectionViewSource Source="{Binding}" x:Key="cvsCountryList"/>
        <CollectionViewSource Source="{Binding Source={StaticResource cvsCountryList},Path=Cities}" x:Key="cvsCityList"/>
    </UserControl.Resources>
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition/>
            <ColumnDefinition Width="3*"/>
        </Grid.ColumnDefinitions>
        <Views:LeftViewMasterControl x:Name="leftSide" Margin="5" Content="{Binding Source=}"/>
        <GridSplitter Grid.Column="0" Grid.Row="0" Background="LightGray"/>
        <Views:RightViewDetailsControl Grid.Column="1" x:Name="RightSide" Margin="5"/>
    </Grid>

    ViewModels

    public class CountryListVM : ViewModelBase
    {
        public CountryListVM()
        {
            Countries = new ObservableCollection<CountryVM>();
        }

        public ObservableCollection<CountryVM> Countries { get; set; }

        private RelayCommand _loadCountriesCommand;
        public ICommand LoadCountriesCommand
        {
            get
            {
                return _loadCountriesCommand ??
                    (_loadCountriesCommand = new RelayCommand(x => LoadCountries(), x => CanLoadCountries));
            }
        }

        private static bool CanLoadCountries { get { return true; } }

        private void LoadCountries()
        {
            var countryList = Repository.GetCountries();
            foreach (var country in countryList)
            {
                Countries.Add(new CountryVM { Name = country.Name });
            }
        }
    }

    public class CountryVM : ViewModelBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { _name = value; OnPropertyChanged("Name"); }
        }
    }

    public class CityListVM : ViewModelBase
    {
        private CountryVM _selectedCountry;

        public CityListVM(CountryVM country)
        {
            SelectedCountry = country;
            Cities = new ObservableCollection<CityVM>();
        }

        public ObservableCollection<CityVM> Cities { get; set; }

        public CountryVM SelectedCountry
        {
            get { return _selectedCountry; }
            set { _selectedCountry = value; OnPropertyChanged("SelectedCountry"); }
        }

        private RelayCommand _loadCitiesCommand;
        public ICommand LoadCitiesCommand
        {
            get
            {
                return _loadCitiesCommand ??
                    (_loadCitiesCommand = new RelayCommand(x => LoadCities(), x => CanLoadCities));
            }
        }

        private static bool CanLoadCities { get { return true; } }

        private void LoadCities()
        {
            var cities = Repository.GetCities(SelectedCountry.Name);
            foreach (var city in cities)
            {
                Cities.Add(new CityVM { Name = city.Name });
            }
        }
    }

    public class CityVM : ViewModelBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { _name = value; OnPropertyChanged("Name"); }
        }
    }

    Models

    public class Country
    {
        public Country()
        {
            Cities = new ObservableCollection<City>();
        }

        public string Name { get; set; }
        public ObservableCollection<City> Cities { get; set; }
    }

    public class City
    {
        public City()
        {
            Hotels = new ObservableCollection<Hotel>();
        }

        public string Name { get; set; }
        public ObservableCollection<Hotel> Hotels { get; set; }
    }

    public class Hotel
    {
        public string Name { get; set; }
    }
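    For the separate-views case, one common pattern is to let both user controls inherit the same view model through the DataContext and communicate through a shared selection property, rather than through CollectionViewSource resources. The sketch below is an assumption about how the code above could be rewired (the SelectedCountry property here and a Cities collection on CountryVM are additions, not part of the original project):

        // Added to CountryListVM: the shared selection the detail view binds to.
        private CountryVM _selectedCountry;
        public CountryVM SelectedCountry
        {
            get { return _selectedCountry; }
            set { _selectedCountry = value; OnPropertyChanged("SelectedCountry"); }
        }

        <!-- LeftSideMasterControl.xaml: a TwoWay SelectedItem binding replaces
             the SelectionChanged handler -->
        <ListBox ItemsSource="{Binding Countries}"
                 SelectedItem="{Binding SelectedCountry, Mode=TwoWay}"
                 DisplayMemberPath="Name"/>

        <!-- RightViewDetailsControl.xaml: inherits the same DataContext from
             MainView; assumes CountryVM exposes a Cities collection -->
        <ListBox ItemsSource="{Binding SelectedCountry.Cities}"
                 DisplayMemberPath="Name"/>

    Because neither control sets its own DataContext, both share the CountryListVM assigned to MainView, and the selection flows from master to detail with no code-behind.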

    Read the article

  • Handling Exceptions for ThreadPoolExecutor

    - by HonorGod
    I have the following code snippet that basically scans through the list of tasks that need to be executed; each task is then given to the executor for execution. The JobExecutor in turn creates another executor (for doing db stuff... reading and writing data to a queue) and completes the task. JobExecutor returns a Future for the tasks submitted. When one of the tasks fails, I want to gracefully interrupt all the threads and shut down the executor by catching all the exceptions. What changes do I need to make?

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class DataMovingClass {

        private static final AtomicInteger uniqueId = new AtomicInteger(0);

        private static final ThreadLocal<Integer> uniqueNumber = new IDGenerator();

        ThreadPoolExecutor threadPoolExecutor = null;

        private List<Source> sources = new ArrayList<Source>();

        private static class IDGenerator extends ThreadLocal<Integer> {
            @Override
            public Integer get() {
                return uniqueId.incrementAndGet();
            }
        }

        public void init() {
            // load sources list
        }

        public boolean execute() {
            boolean success = true;
            threadPoolExecutor = new ThreadPoolExecutor(10, 10, 10, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1024),
                    new ThreadFactory() {
                        public Thread newThread(Runnable r) {
                            Thread t = new Thread(r);
                            t.setName("DataMigration-" + uniqueNumber.get());
                            return t;
                        } // End method
                    },
                    new ThreadPoolExecutor.CallerRunsPolicy());

            List<Future<Boolean>> result = new ArrayList<Future<Boolean>>();
            for (Source source : sources) {
                result.add(threadPoolExecutor.submit(new JobExecutor(source)));
            }

            for (Future<Boolean> jobDone : result) {
                try {
                    if (!jobDone.get(100000, TimeUnit.SECONDS) && success) {
                        // in case of successful DbWriterClass, we don't need to change it.
                        success = false;
                    }
                } catch (Exception ex) {
                    // handle exceptions
                }
            }
            return success;
        }

        public class JobExecutor implements Callable<Boolean> {

            private ThreadPoolExecutor threadPoolExecutor;
            Source jobSource;

            public JobExecutor(Source source) {
                this.jobSource = source;
                threadPoolExecutor = new ThreadPoolExecutor(10, 10, 10, TimeUnit.SECONDS,
                        new ArrayBlockingQueue<Runnable>(1024),
                        new ThreadFactory() {
                            public Thread newThread(Runnable r) {
                                Thread t = new Thread(r);
                                t.setName("Job Executor-" + uniqueNumber.get());
                                return t;
                            } // End method
                        },
                        new ThreadPoolExecutor.CallerRunsPolicy());
            }

            public Boolean call() throws Exception {
                boolean status = true;
                System.out.println("Starting Job = " + jobSource.getName());
                try {
                    // do the specified task ;
                } catch (InterruptedException intrEx) {
                    logger.warn("InterruptedException", intrEx); // logger assumed defined elsewhere
                    status = false;
                } catch (Exception e) {
                    logger.fatal("Exception occurred while executing task " + jobSource.getName(), e);
                    status = false;
                }
                System.out.println("Ending Job = " + jobSource.getName());
                return status;
            }
        }
    }
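    One way to get the fail-fast behavior described above, offered as a sketch under assumptions (it uses only standard java.util.concurrent APIs; Source and the jobs are the classes from the snippet): consume results through an ExecutorCompletionService, then cancel the outstanding futures and call shutdownNow() as soon as any task fails.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.CompletionService;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorCompletionService;
    import java.util.concurrent.Future;
    import java.util.concurrent.ThreadPoolExecutor;

    public final class FailFastRunner {
        // Submits all jobs, then consumes results in completion order.
        // On the first failure (a false result or a thrown exception) it
        // interrupts everything still running and shuts the pool down.
        public static boolean runAll(ThreadPoolExecutor pool,
                                     List<Callable<Boolean>> jobs)
                throws InterruptedException {
            CompletionService<Boolean> done =
                    new ExecutorCompletionService<Boolean>(pool);
            List<Future<Boolean>> futures = new ArrayList<Future<Boolean>>();
            for (Callable<Boolean> job : jobs) {
                futures.add(done.submit(job));
            }
            boolean success = true;
            for (int i = 0; i < jobs.size() && success; i++) {
                try {
                    success = done.take().get(); // next task to finish
                } catch (ExecutionException ex) {
                    success = false; // a task threw: treat as failure
                }
            }
            if (!success) {
                for (Future<Boolean> f : futures) {
                    f.cancel(true); // interrupt tasks still running
                }
                pool.shutdownNow(); // discard queued tasks, interrupt workers
            }
            return success;
        }
    }

    Note that the tasks must respond to interruption for cancel(true) and shutdownNow() to have any effect; the call() method in the snippet already catches InterruptedException, which is the right starting point.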

    Read the article

  • Admin user always takes precedence over initial user

    - by StepH
    Using an Inno Setup script (that seems to work fine under XP/Vista), I see strange behavior under Seven RC. Here is the [Files] section:

    [Files]
    Source: *.ico; DestDir: {app}\bin; Flags: ignoreversion
    Source: dist\*.*; DestDir: {app}\bin; Flags: ignoreversion
    Source: catalog\*.*; DestDir: {userappdata}\JetWorksheet\catalog; Flags: recursesubdirs createallsubdirs onlyifdoesntexist uninsneveruninstall
    Source: wizards\*.*; DestDir: {userappdata}\JetWorksheet\wizards; Flags: recursesubdirs createallsubdirs onlyifdoesntexist uninsneveruninstall
    Source: images\*.*; DestDir: {userdocs}\JetWorksheet\images; Flags: recursesubdirs createallsubdirs
    Source: wordlists\*.*; DestDir: {userdocs}\JetWorksheet\wordlists; Flags: recursesubdirs createallsubdirs

    The problem: instead of using the {userappdata} of the user who started the setup, all the data goes to the "Admin" user's directories... I'm surely missing something...
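    A hedged guess at the cause, based only on the symptom described: under Windows 7 the installer triggers UAC elevation, so Setup runs under the admin account, and user-scoped constants like {userappdata} and {userdocs} resolve to that account's profile rather than to the user who launched it. If the install can do without elevation, the Inno Setup directive below avoids the problem; the trade-off is that writing to {app} under Program Files would then fail:

    [Setup]
    ; Run as the invoking user, so {userappdata} and {userdocs}
    ; resolve to that user's profile instead of the admin's.
    PrivilegesRequired=lowest

    If elevation is genuinely required for the {app} files, a common alternative is to copy the per-user data on the application's first run instead of at install time.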

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance, intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise.

    We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

    The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero-tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time.

    Much of the material presented here is adapted from the ARC case: PSARC 2010/312 Link-editor guidance

    The History Of Unfortunate Link-Editor Defaults

    The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true: in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command-line option switches were added to let the user enable them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

    Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide the tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called ld, leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and it is invoked by name by the compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled:

        6239804 make it easier for ld(1) to do what's best

    The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

    The -z best proposal assumes that the user can turn it on and trust it to select good options without needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

    The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
    When (not if) this occurs, we will of course defend our actions and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

    The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best: that if you use it, future versions of ld may choose a different set of options and automatically improve the object through the act of rebuilding it.

    I drew two conclusions from the above history:

    For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

    For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

    The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

    -z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

    It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-z guidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject.

    The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

    Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

    ld Command Line Options

    The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

    In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
    Guidance always offers specific advice, not vague generalizations.

    You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

    As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.

    In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see which guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1]  option=-D:  option-argument: args
        debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
        debug: warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

        -z fatal-warnings | nofatal-warnings
        --fatal-warnings | --no-fatal-warnings

        The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors.

        The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior.

    The -z guidance option is defined as follows:

        -z guidance[=item1,item2,...]

        Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

        It is possible to enable guidance while preventing specific guidance messages, by providing a list of item tokens representing the classes of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris.

        The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

        Specify Required Dependencies
        Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

        Do Not Specify Non-Required Dependencies
        Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

        Lazy Loading
        Dependencies should be identified for lazy loading.
        Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

        Direct Bindings
        Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

        Pure Text Segment
        Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

        Mapfile Syntax
        All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

        Library Search Path
        Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

        In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

    - Does not specify all its dependencies
    - Specifies dependencies it does not use
    - Does not use direct bindings
    - Uses a version 1 mapfile
    - Contains relocations to the read-only allocable text (not PIC)

    This scenario is sadly very common — many shared objects have one or more of these issues.

        % cat hello.c
        #include <stdio.h>
        #include <unistd.h>

        void
        hello(void)
        {
                printf("hello user %d\n", getpid());
        }

        % cat mapfile.v1
        # This version 1 mapfile will trigger a guidance message

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object.
    However, turning on guidance reveals a number of things that could be better:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
        ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
        ld: guidance: -z lazyload option recommended before first dependency
        ld: guidance: -B direct or -z direct option recommended before first dependency
        Undefined                       first referenced
         symbol                             in file
        getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        ld: warning: symbol referencing errors
        ld: guidance: -z defs option recommended for shared objects
        ld: guidance: removal of unused dependency recommended: libelf.so.1
        warning: Text relocation remains                referenced
            against symbol                  offset      in file
        .rodata1 (section)                  0xa         hello.o
        getpid                              0x4         hello.o
        printf                              0xf         hello.o
        ld: guidance: position independent (PIC) code recommended for shared objects
        ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

        % cat mapfile.v2
        # This version 2 mapfile will not trigger a guidance message
        $mapfile_version 2

        % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
        ld: guidance: -B direct or -z direct option recommended before first dependency
        ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

    Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
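    One footnote to the example, offered as a sketch of my own rather than a transcript from the article: combining the two options described above should turn the same guidance messages into hard build failures, which is the advice-to-requirement mode discussed earlier. Reusing the article's hello.c and mapfile.v1:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance -zfatal-warnings

    With -z fatal-warnings in effect, the link fails rather than producing hello.so until each warning is either fixed or explicitly suppressed with the corresponding guidance exception token.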

    Read the article
