Search Results

Search found 9271 results on 371 pages for 'properties'.


  • Override methods should call base method?

    - by Trevor Pilley
    I'm just running NDepend against some code that I have written and one of the warnings is Overrides of Method() should call base.Method(). This occurs in places where I have a base class with virtual properties and methods that provide default behaviour, which can be overridden by an inheriting class without calling the base implementation. For example, in the base class I have a property defined like this: protected virtual char CloseQuote { get { return '"'; } } And then in an inheriting class which uses a different close quote: protected override char CloseQuote { get { return ']'; } } Not all classes which inherit from the base class use different quote characters, hence my initial design. The alternatives I thought of were: have get/set properties in the base class with the defaults set in the constructor: protected BaseClass() { this.CloseQuote = '"'; } protected char CloseQuote { get; set; } public InheritingClass() { this.CloseQuote = ']'; } Or make the base class require the values as constructor args: protected BaseClass(char closeQuote, ...) { this.CloseQuote = closeQuote; } protected char CloseQuote { get; private set; } public InheritingClass() : base(closeQuote: ']', ...) { } Should I use virtual in a scenario where the base implementation may be replaced instead of extended, or should I opt for one of the alternatives I thought of? If so, which would be preferable and why?
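    A minimal sketch of the constructor-default alternative described in the question, with the default applied in the base constructor and replaced in the subclass constructor (the class names here are illustrative assumptions, not from the original code):

        // The base class establishes the default; a subclass that needs a different
        // quote character simply assigns it in its own constructor.
        public abstract class SqlBuilderBase
        {
            protected SqlBuilderBase()
            {
                this.CloseQuote = '"';   // default close quote
            }

            protected char CloseQuote { get; set; }
        }

        public class BracketSqlBuilder : SqlBuilderBase
        {
            public BracketSqlBuilder()
            {
                this.CloseQuote = ']';   // this dialect closes identifiers with a bracket
            }
        }

    Subclasses that are happy with the default simply leave the property alone, and since nothing is overridden there is no override/base-call warning to trigger.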

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each tile contains an Item along with other properties like how it's rotated, whether it has foreground/background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it applies to the player, etc. Examples: public class Item { public static List<Item> Items; public Collision blockCollisionType; public string nameOfItem; public bool someOtherVariable,etc,etc public static Item Air; public static Item Stone; public static Item Dirt; static Item() { Items = new List<Item>() { (Stone = new Item() { nameOfItem = "Stone", blockCollisionType = Collision.Solid, }), (Air = new Item() { nameOfItem = "Air", blockCollisionType = Collision.Passable, }), }; } } would be an Item. The array of tiles would contain a Tile for each point: public class Tile { public Item item; //What type it is public bool onBackground; public int someOtherVariables,etc,etc } Now, most would probably use an enum or some form of ID to identify blocks. My system is really convenient just for finding out about an item: I can simply do tiles[x,y].item.Name to get the name, for example. I realized the Item property of the tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (int or byte depending on how many items) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
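    A minimal sketch of the ID-based variant asked for at the end of the question: each tile stores only a byte, and the item data lives once in a static lookup table (member names below are assumptions for illustration):

        public class Item
        {
            // Registry of all items, indexed by ID; the ID is all a tile needs to store.
            public static readonly List<Item> Items = new List<Item>();

            public byte Id { get; private set; }
            public string Name { get; set; }

            private static Item Register(Item item)
            {
                item.Id = (byte)Items.Count;
                Items.Add(item);
                return item;
            }

            public static readonly Item Air = Register(new Item { Name = "Air" });
            public static readonly Item Stone = Register(new Item { Name = "Stone" });
        }

        public struct Tile
        {
            public byte ItemId;        // one byte per tile instead of a reference to a large Item
            public bool OnBackground;

            // Lookup stays as convenient as before: tiles[x, y].Item.Name
            public Item Item { get { return Item.Items[ItemId]; } }
        }

    Making Tile a small struct also keeps the tile array in one contiguous block, which tends to help with the memory pressure described above.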

    Read the article

  • Should static parameters in an API be part of each method?

    - by jschoen
    I am currently creating a library that is a wrapper for an online API. The obvious end goal is to make it as easy for others to use as possible. As such I am trying to determine the best approach when it comes to common parameters for the API. In my current situation there are 3 (consumer key, consumer secret, and an authorization token). They are essentially needed in every API call. My question is: should I make these 3 parameters required for each method, or is there a better way? I see my current options as being: Place the parameters in each method call public ApiObject callMethod(String consumerKey, String consumerSecret, String token, ...) This one seems reasonable, but seems awfully repetitive to me. Create a singleton class that the user must initialize before calling any API methods. This seems wrong, and would essentially limit them to accessing one account at a time via the API (which may be reasonable, I dunno). Have them place the values in a properties file in their project. That way I can load the properties and store them. This seems similar to the singleton to me, but they would not have to explicitly call something to initialize these values. Is there another option I am not seeing, or a more common practice in this situation that I should be following?
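    One further pattern worth weighing, sketched only as an illustration (ApiClient and ApiObject are placeholder names, not a real API): an instance-per-account client that takes the three values once in its constructor, which avoids both the repetition of the first option and the single-account limitation of the singleton:

        public class ApiObject { /* response wrapper, as in the question */ }

        public class ApiClient
        {
            private readonly String consumerKey;
            private readonly String consumerSecret;
            private readonly String token;

            public ApiClient(String consumerKey, String consumerSecret, String token)
            {
                this.consumerKey = consumerKey;
                this.consumerSecret = consumerSecret;
                this.token = token;
            }

            // Individual calls no longer repeat the common credentials.
            public ApiObject callMethod(String otherArgument)
            {
                // ... build and send the request using the stored credentials ...
                return new ApiObject();
            }
        }

    Callers create one ApiClient per account, so several accounts can be used side by side without any global state.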

    Read the article

  • Lenovo ThinkPad W530 problem to activate the optical/DVD drive

    - by Marko Apfel
    Problem Sometimes my notebook shows the optical drive as powered off, and the hint shown there does not change this state. Solution Looking in the Device Manager you see the next problem, so open the properties via a right mouse click. This gives you the hint to remove the drive first: “Windows cannot use this hardware device because it has been prepared for "safe removal", but it has not been removed from the computer. (Code 47)” You can read the full message either by dragging the mouse over the hidden part or by pressing the “Properties” button. So we unplug and reinsert the UltraBay. If you think the system is now working, you are wrong: the system now believes that the UltraBay is unplugged. You can verify this by refreshing the view in the Device Manager; our device is no longer listed. Now comes the trick: unplug the UltraBay and reinsert it a second time! After this, with a disc inside, you can hear that the motor really starts and we have a working device. What a difficult birth …

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory] I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours: Previous version (everything works just fine): none of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFound): there is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions: What sort of resource is that? Inside our EAR there are two WARs and neither of them contains a services folder in its META-INF directory. Where could that value come from otherwise? A file diff showed me no new or changed properties files. Needless to say, I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks! EDIT (according to comments' input/questions): Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory I think this has nothing to do with the current issue, since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception). It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation. There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places indicates which context factory to use, a default factory is loaded, which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory"; Now back to my problem: Building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TEST CASE fails but the rest of the functionality works (I can send requests to our web service and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
    Here are the most important lines showing how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is simplified source of the mentioned javax.xml.bind.ContextFinder class: URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext"); BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8")); String factoryClassName = r.readLine().trim(); The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting; after 20 mins it still looks the same :() Because this has become a super large question I will also add a bounty :)

    Read the article

  • Problem with Java Mail : No provider for smtp

    - by user359198
    Hello all. I am using JavaMail to do a simple application that sends an email when it finds some files in a directory. I managed to get it worked from Eclipse. I Run the application and it sent the email with no errors. But, when I created the jar, and executed it, it fails in the email sending part. It gives this exception. javax.mail.NoSuchProviderException: No provider for smtp at javax.mail.Session.getProvider(Session.java:460) at javax.mail.Session.getTransport(Session.java:655) at javax.mail.Session.getTransport(Session.java:636) at main.java.util.MailManager.sendMail(MailManager.java:69) at main.java.DownloadsMail.composeAndSendMail(DownloadsMail.java:16) at main.java.DownloadsController.checkDownloads(DownloadsController.java:51) at main.java.MainDownloadsController.run(MainDownloadsController.java:26) at java.lang.Thread.run(Unknown Source) I am using the library in this method: public static boolean sendMail(String subject, String text){ noExceptionsThrown = true; try { loadProperties(); } catch (IOException e1) { System.out.println("Problem encountered while loading properties"); e1.printStackTrace(); noExceptionsThrown = false; } Properties mailProps = new Properties(); String host = "mail.smtp.host"; mailProps.setProperty(host, connectionProps.getProperty(host)); String tls = "mail.smtp.starttls.enable"; mailProps.setProperty(tls, connectionProps.getProperty(tls)); String port = "mail.smtp.port"; mailProps.setProperty(port, connectionProps.getProperty(port)); String user = "mail.smtp.user"; mailProps.setProperty(user, connectionProps.getProperty(user)); String auth = "mail.smtp.auth"; mailProps.setProperty(auth, connectionProps.getProperty(auth)); Session session = Session.getDefaultInstance(mailProps); //session.setDebug(true); MimeMessage message = new MimeMessage(session); try { message.setFrom(new InternetAddress(messageProps.getProperty("from"))); message.addRecipient(Message.RecipientType.TO, new InternetAddress( messageProps.getProperty("to"))); message.setSubject(subject); message.setText(text); Transport t = session.getTransport("smtp"); try { t.connect(connectionProps.getProperty("user"), passwordProps .getProperty("password")); t.sendMessage(message, message.getAllRecipients()); } catch (Exception e) { System.out.println("Error encountered while sending the email"); e.printStackTrace(); noExceptionsThrown = false; } finally { t.close(); } } catch (Exception e) { System.out.println("Error encountered while creating the message"); e.printStackTrace(); noExceptionsThrown = false; } return noExceptionsThrown; } I am loading these values from properties files. mail.smtp.host=smtp.gmail.com mail.smtp.starttls.enable=true mail.smtp.port=587 mail.smtp.auth=true I have tried to change the host by ssl://smtp.gmail.com, the port by 465 (just for trying something different), but it doesn't work either. Anyway, if it works fine from Eclipse with the original parameters, I guess that the values are correct, but the problem is creating the jar. I don't know very much about the possible results or changes when creating a jar. Could the JavaMail libraries someway go wrong when the jar is created? Do you have any ideas? Thank you very much for your help.

    Read the article

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on win32_logonsession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, now, I need the users's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#): RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery --- it can take up to several seconds to execute. Here's a quick and dirty command line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine: using System; using System.Collections.Generic; using System.Text; using System.Management; namespace EnumUsersTest { class Program { static void Main( string[] args ) { ManagementScope scope = new ManagementScope( "\\\\.\\root\\cimv2" ); string queryString = "select * from win32_logonsession"; // for all sessions //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only ManagementObjectSearcher sessionQuery = new ManagementObjectSearcher( scope, new SelectQuery( queryString ) ); ManagementObjectCollection logonSessions = sessionQuery.Get(); foreach ( ManagementObject logonSession in logonSessions ) { string logonID = logonSession["LogonId"].ToString(); Console.WriteLine( "=== {0}, type {1} ===", logonID, logonSession["LogonType"].ToString() ); RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); ManagementObjectSearcher userQuery = new ManagementObjectSearcher( scope, relatedQuery ); ManagementObjectCollection users = userQuery.Get(); foreach ( ManagementObject user in users ) { PrintProperties( user.Properties ); } } Console.WriteLine( "\nDone! Press a key to exit..." ); Console.ReadKey( true ); } private static void PrintProperty( PropertyData pd ) { string value = "null"; string valueType = "n/a"; if ( null == pd.Value ) value = "null"; if ( pd.Value != null ) { value = pd.Value.ToString(); valueType = pd.Value.GetType().ToString(); } Console.WriteLine( " \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value ); } private static void PrintProperties( PropertyDataCollection properties ) { foreach ( PropertyData pd in properties ) { PrintProperty( pd ); } } } } So... is there way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
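    As a side note, when the DOMAIN\user name is already known, the SID can also be resolved without WMI via System.Security.Principal, which is usually much faster than the RelatedObjectQuery; a rough sketch (no error handling, and for cached domain logons this still depends on the account name being resolvable locally):

        using System;
        using System.Security.Principal;

        class SidLookup
        {
            // Translate a DOMAIN\user name into its SID string, usable as a key under HKEY_USERS.
            static string GetSidString(string domainQualifiedName)
            {
                var account = new NTAccount(domainQualifiedName);
                var sid = (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
                return sid.Value;    // e.g. "S-1-5-21-..."
            }

            static void Main()
            {
                Console.WriteLine(GetSidString(Environment.UserDomainName + "\\" + Environment.UserName));
            }
        }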

    Read the article

  • CrossPage PostBack in series of web pages

    - by Vishnu Reddy
    I had a requirement to pass data between pages in my application. I have a Page A where the user will input some data; on submit the user will be redirected to Page B, where again the user will enter some more data, and on submitting the user will be shown a confirmation in Page C after doing some calculations and data saving. Following is the idea I was trying to use: PageA.aspx: <form id="frmPageA" runat="server"> <p>Name: <asp:TextBox ID="txtName" runat="server"></asp:TextBox></p> <p>Age: <asp:TextBox ID="txtAge" runat="server"></asp:TextBox></p> <p><asp:Button ID="btnPostToPageB" runat="server" Text="Post To PageB" PostBackUrl="~/PageB.aspx" /></p> </form> In the Page A code-behind file I am creating the following public properties for the inputs, to access them in Page B: public string Name { get { return txtName.Text.ToString(); } } public int Age { get { return Convert.ToInt32(txtAge.Text); } } In PageB.aspx: using the PreviousPageType directive to access Page A's public properties <%@ PreviousPageType VirtualPath="~/PageA.aspx" %> <form id="frmPageB" runat="server"> <asp:HiddenField ID="hfName" runat="server" /> <asp:HiddenField ID="hfAge" runat="server" /> <p><asp:RadioButtonList ID="rblGender" runat="server" TextAlign="Right" RepeatDirection="Horizontal"> <asp:ListItem Value="Female">Female</asp:ListItem> <asp:ListItem Value="Male">Male</asp:ListItem> </asp:RadioButtonList></p> <p><asp:Button ID="btnPostToPageC" runat="server" Text="Post To PageC" PostBackUrl="~/PageC.aspx" /></p> </form> In the Page B code-behind file I am creating the following public properties for the inputs, to access them in Page C: public RadioButtonList Gender { get { return rblGender; } } public string Name { get { return _name; } } public int Age { get { return _age; } } //checking if data is posted from Page A, otherwise redirecting the user to Page A protected void Page_Load(object sender, EventArgs e) { if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { hfName.Value = PreviousPage.Name; hfAge.Value = PreviousPage.Age.ToString(); } else Response.Redirect("PageA.aspx"); } In PageC.aspx: using the PreviousPageType directive to access Page B's public properties <%@ PreviousPageType VirtualPath="~/PageB.aspx" %> <form id="frmPageC" runat="server"> <p>Name: <asp:Label ID="lblName" runat="server"></asp:Label></p> <p>Age: <asp:Label ID="lblAge" runat="server"></asp:Label></p> <p>Gender: <asp:Label ID="lblGender" runat="server"></asp:Label></p> <p><asp:Button ID="btnBack" runat="server" Text="Back" PostBackUrl="~/PageA.aspx" /></p> </form> Page C code-behind file: if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { lblName.Text = PreviousPage.Name.ToString(); lblAge.Text = PreviousPage.Age.ToString(); lblGender.Text = PreviousPage.Gender.SelectedValue.ToString(); } else Response.Redirect("PageA.aspx");
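    For reference, when the PreviousPageType directive is not practical (for example if more than one page can post to Page B), the same properties can be reached with an explicit cast; a small sketch, assuming the code-behind class of PageA.aspx is named PageA:

        protected void Page_Load(object sender, EventArgs e)
        {
            // PreviousPage is non-null only when this request came from another page.
            var source = PreviousPage as PageA;
            if (source != null && source.IsValid)
            {
                hfName.Value = source.Name;
                hfAge.Value = source.Age.ToString();
            }
            else
            {
                Response.Redirect("PageA.aspx");
            }
        }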

    Read the article

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I then want to run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter their credentials. My script currently looks as follows: class GetInput { public static void ScriptMain(Project project) { Console.Clear(); Console.WriteLine("==================================================================="); Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility"); Console.WriteLine("==================================================================="); Console.WriteLine(); Console.WriteLine("Please enter the acronym of the project to work on from the following list:"); Console.WriteLine(); Console.WriteLine("--------"); Console.WriteLine("- BCS"); Console.WriteLine("- COPEN"); Console.WriteLine("- FCDD"); Console.WriteLine("--------"); Console.WriteLine(); Console.Write("Selection: "); project.Properties["project.type"] = Console.ReadLine(); Console.WriteLine(); Console.Write("Please enter username: "); string username = Console.ReadLine(); project.Properties["username"] = username; string password = ReturnPassword(); project.Properties["password"] = password; Console.WriteLine(); Console.Write("Please enter database: "); string database = Console.ReadLine(); project.Properties["database"] = database; Console.WriteLine(); //Call method to verify user credentials Console.WriteLine(); Console.WriteLine("Compiling files..."); } public static string ReturnPassword() { Console.Write("Please enter password: "); string password = ""; ConsoleKeyInfo nextKey = Console.ReadKey(true); while (nextKey.Key != ConsoleKey.Enter) { if (nextKey.Key == ConsoleKey.Backspace) { if (password.Length > 0) { password = password.Substring(0, password.Length - 1); Console.Write(nextKey.KeyChar); Console.Write(" "); Console.Write(nextKey.KeyChar); } } else { password += nextKey.KeyChar; Console.Write("*"); } nextKey = Console.ReadKey(true); } return password; } } Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace clicky. However, as mentioned in the link, Microsoft is discontinuing support for this, so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases clicky. However, this only seems to support connecting to Oracle 9 or newer databases (clicky), so it is not a feasible solution as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as are contained in the following .bat script: rem -- Set Database SID -- set ORACLE_SID=%DBSID% sqlplus -s %nameofuser%/%password%@%dbsid% set cmdsep on set cmdsep '"'; --" set term on set echo off set heading off select '========================================' || CHR(10) || 'Have checked and found both Password and ' || chr(10) || 'Database Identifier are valid, continuing ...'
    || CHR(10) || '========================================' from dual; exit; This requires me to set the environment variable ORACLE_SID and then run sqlplus in silent mode (-s), followed by a series of SQL*Plus set commands (set x), the actual select statement and an exit command. Can I achieve this within a C# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!
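    On the final question: the same check can be driven directly from the C# script by launching sqlplus with System.Diagnostics.Process, setting ORACLE_SID in the child environment and feeding the statements through standard input, so no .bat file is needed. A rough sketch under those assumptions (the -L switch makes SQL*Plus give up after one failed logon instead of re-prompting):

        private static bool CredentialsAreValid(string username, string password, string database)
        {
            var psi = new System.Diagnostics.ProcessStartInfo(
                "sqlplus", "-L -s " + username + "/" + password + "@" + database);
            psi.UseShellExecute = false;            // required for redirection and environment changes
            psi.RedirectStandardInput = true;
            psi.RedirectStandardOutput = true;
            psi.EnvironmentVariables["ORACLE_SID"] = database;

            using (var p = System.Diagnostics.Process.Start(psi))
            {
                p.StandardInput.WriteLine("whenever sqlerror exit failure;");
                p.StandardInput.WriteLine("select 'credentials ok' from dual;");
                p.StandardInput.WriteLine("exit;");
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                return p.ExitCode == 0;             // non-zero when the logon or the query failed
            }
        }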

    Read the article

  • How to bind to the sum of two data bound values in WPF?

    - by Sheridan
    I have designed an analog clock control. It uses the stroke from two ellipses to represent an outer border and an inner border to the clock face. I have exposed properties in the UserControl that allow a user to alter the thickness of these two borders. The Ellipse.StrokeThickness properties are then bound to these UserControl properties. At the moment, I am binding the UserControl property for the outer border thickness to the margins of the inner elements so that they are not hidden when the border size is increased. <Ellipse Name="OuterBorder" Panel.ZIndex="1" StrokeThickness="{Binding OuterBorderThickness, ElementName=This}" Stroke="{StaticResource OuterBorderBrush}" /> <Ellipse Name="InnerBorder" Panel.ZIndex="5" StrokeThickness="{Binding InnerBorderThickness, ElementName=This}" Margin="{Binding OuterBorderThickness, ElementName=This}" Stroke="{StaticResource InnerBorderBrush}"> ... <Ellipse Name="Face" Panel.ZIndex="1" Margin="{Binding OuterBorderThickness, ElementName=This}" Fill="{StaticResource FaceBackgroundBrush}" /> ... The problem is that if the inner border thickness is increased, this does not affect the margins and so the hour ticks and numbers can become partially obscured or hidden. So what I really need is to be able to bind the margin properties of the inner controls to the sum of the inner and outer border thickness values (they are of type double). I have done this successfully using 'DataContext = this;', but am trying to rewrite the control without this as I hear it is not recommended. I also thought about using a converter and passing the second value as the ConverterParameter, but didn't know how to bind to the ConverterParameter. Any tips would be greatly appreciated. EDIT Thanks to Kent's suggestion, I've created a simple MultiConverter to add the input values and return the result. I've hooked the SAME multibinding with converter XAML to both a TextBlock.Text property and the TextBlock.Margin property to test it. <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource SumConverter}" ConverterParameter="Add"> <Binding Path="OuterBorderThickness" ElementName="This" /> <Binding Path="InnerBorderThickness" ElementName="This" /> </MultiBinding> </TextBlock.Text> <TextBlock.Margin> <MultiBinding Converter="{StaticResource SumConverter}" ConverterParameter="Add"> <Binding Path="OuterBorderThickness" ElementName="This" /> <Binding Path="InnerBorderThickness" ElementName="This" /> </MultiBinding> </TextBlock.Margin> </TextBlock> I can see the correct value displayed in the TexBlock, but the Margin is not set. Any ideas? EDIT Interestingly, the Margin property can be bound to a data property of type double, but this does not seem to apply within a MultiBinding. As advised by Kent, I changed the Converter to return the value as a Thickness object and now it works. Thanks Kent.
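    For completeness, here is what a minimal SumConverter along the lines discussed can look like, returning a Thickness when the target property asks for one (the detail that made the Margin binding work) and a string otherwise; treat it as an illustrative sketch rather than the original control's code:

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Data;

        public class SumConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                double sum = 0;
                foreach (object value in values)
                {
                    if (value is double)
                        sum += (double)value;   // OuterBorderThickness + InnerBorderThickness
                }

                if (targetType == typeof(Thickness))
                    return new Thickness(sum);  // Margin and similar properties need a Thickness

                return sum.ToString(culture);   // TextBlock.Text just needs a string
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }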

    Read the article

  • Using Maven to Deploy to Weblogic Clusters

    - by Mark Sailes
    org.codehaus.mojo weblogic-maven-plugin 2.9.1 We're currently using the weblogic maven plugin successfully to deploy to our local WebLogic 9.2 instances. When we try to deploy to a remote environment we have a problem. We use a two machine cluster, with the admin server and managed server on one machine, and another managed server on a seperate machine. When your plugin uploads the application to the admin server, it doesn't copy it to the second managed server on the seperate machine. This then causes the second managed server a problem, as it cannot find the application in the location where the admin server saved it on its own machine. Config below <configuration> <adminServerHostName>${weblogic.adminServerHostName}</adminServerHostName> <adminServerPort>${weblogic.adminServerPort}</adminServerPort> <adminServerProtocol>${weblogic.adminServerProtocol}</adminServerProtocol> <userId>${weblogic.userId}</userId> <password>${weblogic.password}</password> <upload>${weblogic.upload}</upload> <remote>${weblogic.remote}</remote> <verbose>${weblogic.verbose}</verbose> <debug>${weblogic.debug}</debug> <stage>${weblogic.stage}</stage> <targetNames>${weblogic.targetNames}</targetNames> <exploded>${weblogic.exploded}</exploded> </configuration> <profile> <id>localhost</id> <properties> <weblogic.adminServerHostName>localhost</weblogic.adminServerHostName> <weblogic.adminServerPort>7001</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>weblogic</weblogic.userId> <weblogic.password>weblogic</weblogic.password> <weblogic.upload>false</weblogic.upload> <weblogic.remote>false</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>false</weblogic.stage> <weblogic.targetNames>AdminServer</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile> <profile> <id>dev</id> <properties> <weblogic.adminServerHostName>******</weblogic.adminServerHostName> <weblogic.adminServerPort>9141</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>******</weblogic.userId> <weblogic.password>******</weblogic.password> <weblogic.upload>true</weblogic.upload> <weblogic.remote>true</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>true</weblogic.stage> <weblogic.targetNames>dev_cluster01</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile>

    Read the article

  • Binding Combobox to XML (wpf)

    - by mortor
    <EssenceList> <Essence GUID="464"> <Properties> <Property Name="Name"> <value>mt-1232-1. (1-1-3)</value> </Property> </Properties> <Characteristics> <Characteristic GUID="78"> <value>gadget</value> </Characteristic> <Characteristic GUID="79"> <value>measures</value> </Characteristic> </Characteristics> <LinkedEssences> <LinkType Type="ObjGroup"> <LinkedEssence GUID="369" /> </LinkType> <LinkType Type="ObjGroupProp" /> <LinkType Type="RoleObject"> <LinkedEssence GUID="5747"/> </LinkType> </LinkedEssences> </Essence> ... <Essence GUID="5747" Type="Role"> <Properties> <Property Name="Name"> <value>????-22</value> </Property> <Property Name="ShortName"> <value>UKPG-22</value> </Property> <Property Name="TagPrefix"> <value>UKPG22</value> </Property> <Property Name="useParentTagPrefix"> <value>0</value> </Property> </Properties> </Essence> ... <Essence GUID="5748" Type="Role"> </Essence> ... in example is a xml file with data from database. now i need to bind it to some fields... i use the XMLDataProvider here <Grid.DataContext> <XmlDataProvider x:Name="dataxml" XPath="EssenceList/Essence" Source="464.xml"/> </Grid.DataContext> and mostof simple texboxes i bind like <TextBox Text="{Binding XPath=/EssenceList/Essence/LinkedEssences/LinkType[1]/LinkedEssence/@GUID}" /> but now i need to bind a combobox this way: - the first Essence in the document contains LinkedEssences, that contains and - in document below there is a full description for it that contains the NAME property i need ????-22 UKPG22 0 and many other available Essences for this combobox i managed to bind the list of thems to combobox <ComboBox ItemTemplate="{StaticResource rolelistTemplate}" ItemsSource="{Binding XPath=/EssenceList/Essence[@Type]}" /> so it displays it well, but i can't bind it to my LinkedEssences.

    Read the article

  • How to send audio data from Java Applet to Rails controller

    - by cooldude
    Hi, I have to send the audio data in byte array obtain by recording from java applet at the client side to rails server at the controller in order to save. So, what encoding parameters at the applet side be used and in what form the audio data be converted like String or byte array so that rails correctly recieve data and then I can save that data at the rails in the file. As currently the audio file made by rails controller is not playing. It is the following ERROR : LAVF_header: av_open_input_stream() failed while playing with the mplayer. Here is the Java Code: package networksocket; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.JApplet; import java.net.*; import java.io.*; import java.awt.event.*; import java.awt.*; import java.sql.*; import javax.swing.*; import javax.swing.border.*; import java.awt.*; import java.util.Properties; import javax.swing.plaf.basic.BasicSplitPaneUI.BasicHorizontalLayoutManager; import sun.awt.HorizBagLayout; import sun.awt.VerticalBagLayout; import sun.misc.BASE64Encoder; /** * * @author mukand */ public class Urlconnection extends JApplet implements ActionListener { /** * Initialization method that will be called after the applet is loaded * into the browser. */ public BufferedInputStream in; public BufferedOutputStream out; public String line; public FileOutputStream file; public int bytesread; public int toread=1024; byte b[]= new byte[toread]; public String f="FINISH"; public String match; public File fileopen; public JTextArea jTextArea; public Button refreshButton; public HttpURLConnection urlConn; public URL url; OutputStreamWriter wr; BufferedReader rd; @Override public void init() { // TODO start asynchronous download of heavy resources //textField= new TextField("START"); //getContentPane().add(textField); JPanel p = new JPanel(); jTextArea= new JTextArea(1500,1500); p.setLayout(new GridLayout(1,1, 1,1)); p.add(new JLabel("Server Details")); p.add(jTextArea); Container content = getContentPane(); content.setLayout(new GridBagLayout()); // Used to center the panel content.add(p); jTextArea.setLineWrap(true); refreshButton = new java.awt.Button("Refresh"); refreshButton.reshape(287,49,71,23); refreshButton.setFont(new Font("Dialog", Font.PLAIN, 12)); refreshButton.addActionListener(this); add(refreshButton); Properties properties = System.getProperties(); properties.put("http.proxyHost", "netmon.iitb.ac.in"); properties.put("http.proxyPort", "80"); } @Override public void actionPerformed(ActionEvent e) { try { url = new URL("http://localhost:3000/audio/audiorecieve"); urlConn = (HttpURLConnection)url.openConnection(); //String login = "mukandagarwal:rammstein$"; //String encodedLogin = new BASE64Encoder().encodeBuffer(login.getBytes()); //urlConn.setRequestProperty("Proxy-Authorization",login); urlConn.setRequestMethod("POST"); // urlConn.setRequestProperty("Content-Type", //"application/octet-stream"); //urlConn.setRequestProperty("Content-Type","audio/mpeg");//"application/x-www- form-urlencoded"); //urlConn.setRequestProperty("Content-Type","application/x-www- form-urlencoded"); //urlConn.setRequestProperty("Content-Length", "" + // Integer.toString(urlParameters.getBytes().length)); urlConn.setRequestProperty("Content-Language", "UTF-8"); urlConn.setDoOutput(true); urlConn.setDoInput(true); byte bread[]=new byte[2048]; int iread; char c; String data=URLEncoder.encode("key1", "UTF-8")+ "="; //String data="key1="; FileInputStream fileread= new 
FileInputStream("//home//mukand//Hellion.ogg");//Dogs.mp3");//Desktop//mausam1.mp3"); while((iread=fileread.read(bread))!=-1) { //data+=(new String()); /*for(int i=0;i<iread;i++) { //c=(char)bread[i]; System.out.println(bread[i]); }*/ data+= URLEncoder.encode(new String(bread,iread), "UTF-8");//new String(new String(bread));// // data+=new String(bread,iread); } //urlConn.setRequestProperty("Content-Length",Integer.toString(data.getBytes().length)); System.out.println(data); //data+=URLEncoder.encode("mukand", "UTF-8"); //data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8"); //data="key1="; wr = new OutputStreamWriter(urlConn.getOutputStream());//urlConn.getOutputStream(); //if((iread=fileread.read(bread))!=-1) // wr.write(bread,0,iread); wr.write(data); wr.flush(); fileread.close(); jTextArea.append("Send"); // Get the response rd = new BufferedReader(new InputStreamReader(urlConn.getInputStream())); while ((line = rd.readLine()) != null) { jTextArea.append(line); } wr.close(); rd.close(); //jTextArea.append("click"); } catch (MalformedURLException ex) { Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex); } catch (IOException ex) { Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex); } } @Override public void start() { } @Override public void stop() { } @Override public void destroy() { } // TODO overwrite start(), stop() and destroy() methods } Here is the Rails controller function for recieving: def audiorecieve puts "///////////////////////////////////////******RECIEVED*******////" puts params[:key1]#+" "+params[:key2] data=params[:key1] #request.env('RAW_POST_DATA') file=File.new("audiodata.ogg", 'w') file.write(data) file.flush file.close puts "////**************DONE***********//////////////////////" end Please reply quickly

    Read the article

  • Problem while running the j2me application

    - by Paru
    I am not able to view any content in the emulator while running the application. The Build is not failed and i am able run the application successfully. While i am closing the emulator i am getting an error. i can provide both code and log here. import javax.microedition.lcdui.; import javax.microedition.midlet.; import java.io.; import java.lang.; import javax.microedition.io.; import javax.microedition.rms.; public class Login extends MIDlet implements CommandListener { TextField ItemName=null; TextField ItemNo=null; TextField UserName=null; TextField Password=null; Form authForm,mainscreen; TextBox t = null; StringBuffer b = new StringBuffer(); private Display myDisplay = null; private Command okCommand = new Command("Login", Command.OK, 1); private Command exitCommand = new Command("Exit", Command.EXIT, 2); private Command sendCommand = new Command("Send", Command.OK, 1); private Command backCommand = new Command("Back", Command.BACK, 2); private Alert alert = null; public Login() { ItemName=new TextField("Item Name","",10,TextField.ANY); ItemNo=new TextField("Item No","",10,TextField.ANY); myDisplay = Display.getDisplay(this); UserName=new TextField("Your Name","",10,TextField.ANY); Password=new TextField("Password","",10,TextField.PASSWORD); authForm=new Form("Identification"); mainscreen=new Form("Logging IN"); mainscreen.addCommand(sendCommand); mainscreen.addCommand(backCommand); authForm.append(UserName); authForm.append(Password); authForm.addCommand(okCommand); authForm.addCommand(exitCommand); authForm.setCommandListener(this); myDisplay.setCurrent(authForm); } public void startApp() throws MIDletStateChangeException { } public void pauseApp() { } protected void destroyApp(boolean unconditional) throws MIDletStateChangeException { } public void commandAction(Command c, Displayable d) { if ((c == okCommand) && (d == authForm)) { if (UserName.getString().equals("") || Password.getString().equals("")){ alert = new Alert("Error", "You should enter Username and Password", null, AlertType.ERROR); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } else{ //myDisplay.setCurrent(mainscreen); login(UserName.getString(),Password.getString()); } } if ((c == backCommand) && (d == mainscreen)) { myDisplay.setCurrent(authForm); } if ((c == exitCommand) && (d == authForm)) { notifyDestroyed(); } if ((c == sendCommand) && (d == mainscreen)) { if(ItemName.getString().equals("") || ItemNo.getString().equals("")){ } else{ sendItem(ItemName.getString(),ItemNo.getString()); } } } public void login(String UserName,String PassWord) { HttpConnection connection=null; DataInputStream in=null; String url="http://olario.net/submitpost/submitpost/login.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (UserName.length()+PassWord.length())); connection.setRequestProperty("username",UserName); connection.setRequestProperty("password",PassWord); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String auth=b.toString(); 
if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(auth.equals("ok")){ mainscreen.setCommandListener(this); myDisplay.setCurrent(mainscreen); } } catch(IOException x){ } } public void sendItem(String itemname,String itemno){ HttpConnection connection=null; DataInputStream in=null; String url="http://www.olario.net/submitpost/submitpost/submitPost.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (itemname.length()+itemno.length())); connection.setRequestProperty("itemCode",itemname); connection.setRequestProperty("qty",itemno); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String send=b.toString(); if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(send.equals("added")){ alert = new Alert("Error", "Send Successfully", null, AlertType.INFO); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } } catch(IOException x){ } } } and the log is pre-init: pre-load-properties: exists.config.active: exists.netbeans.user: exists.user.properties.file: load-properties: exists.platform.active: exists.platform.configuration: exists.platform.profile: basic-init: cldc-pre-init: cldc-init: cdc-init: ricoh-pre-init: ricoh-init: semc-pre-init: semc-init: savaje-pre-init: savaje-init: sjmc-pre-init: sjmc-init: ojec-pre-init: ojec-init: cdc-hi-pre-init: cdc-hi-init: nokiaS80-pre-init: nokiaS80-init: nsicom-pre-init: nsicom-init: post-init: init: conditional-clean-init: conditional-clean: deps-jar: pre-preprocess: do-preprocess: Pre-processing 0 file(s) into /home/sreekumar/NetBeansProjects/Login/build/preprocessed directory. post-preprocess: preprocess: pre-compile: extract-libs: do-compile: post-compile: compile: pre-obfuscate: proguard-init: skip-obfuscation: proguard: post-obfuscate: obfuscate: lwuit-build: pre-preverify: do-preverify: post-preverify: preverify: pre-jar: set-password-init: set-keystore-password: set-alias-password: set-password: create-jad: add-configuration: add-profile: do-extra-libs: nokiaS80-prepare-j9: nokiaS80-prepare-manifest: nokiaS80-prepare-manifest-no-icon: nokiaS80-create-manifest: jad-jsr211-properties.check: jad-jsr211-properties: semc-build-j9: do-jar: nsicom-create-manifest: do-jar-no-manifest: update-jad: Updating application descriptor: /home/sreekumar/NetBeansProjects/Login/dist/Login.jad Generated "/home/sreekumar/NetBeansProjects/Login/dist/Login.jar" is 3501 bytes. 
sign-jar: ricoh-init-dalp: ricoh-add-app-icon: ricoh-build-dalp-with-icon: ricoh-build-dalp-without-icon: ricoh-build-dalp: savaje-prepare-icon: savaje-build-jnlp: post-jar: jar: pre-run: netmon.check: open-netmon: cldc-run: Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Jad URL for OTA execution: http://localhost:8082/servlet/org.netbeans.modules.mobility.project.jam.JAMServlet//home/sreekumar/NetBeansProjects/Login/dist//Login.jad Starting emulator in execution mode Running with storage root /home/sreekumar/j2mewtk/2.5.2/appdb/temp.DefaultColorPhone1 /home/sreekumar/NetBeansProjects/Login/nbproject/build-impl.xml:915: Execution failed with error code 143. BUILD FAILED (total time: 35 seconds)

    Read the article

  • These are few objective type questions which i was not able to find the solution [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator? a. It adds properties for direct access to both the Key and the Value b. It is optimized to handle the structure of a Dictionary. c. It provides properties to determine if the Dictionary is enumerated in Key or Value order d. It provides reverse lookup methods to distinguish a Key from a specific Value 2. When Implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true? a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog. b. Methods can be configured to automatically mark a transaction as complete by the use of attributes. c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library. d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method. 3. Which of the following are true regarding event declaration in the code below? class Sample { event MyEventHandlerType MyEvent; } a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs> b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs c. MyEventHandlerType may have a non-void return type d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type. e. MyEventHandlerType cannot be declared static 4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio? a. Developers can create assemblies directly from the MSIL Source Code. b. Developers can examine PE header information in an assembly. c. Developers can generate XML Schemas from class definitions contained within an assembly. d. Developers can strip all meta-data from managed assemblies. e. Developers can split an assembly into multiple assemblies. 5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush,Font,Pen, and Icon share? a. They encapsulate native resource and must be properly Disposed to prevent potential exhausting of resources. b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations. c. You can inherit from these classes to provide enhanced or customized functionality 6. Which of the following are required to be true by objects which are going to be used as keys in a System.Collections.HashTable? a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods. b. Key objects must be immutable for the duration they are used within a HashTable. c. Get HashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equalityl unless the HashTable constructor is provided with an IEqualityComparer parameter. d. Each Element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement e. All of the above 7. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. A Nullable type is a structure. c. An implicit conversion exists from any non-nullable value type to a nullable form of that type. d. An implicit conversion exists from any nullable value type to a non-nullable form of that type. e. A predefined conversion from the nullable type S? to the nullable type T? 
exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 8. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 9. Which of the following does using Initializer Syntax with a collection as shown below require? CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; a. The Collection Class must implement System.Collections.Generic.ICollection<T> b. The Collection Class must implement System.Collections.Generic.IList<T> c. Each of the Items in the Initializer List will be passed to the Add<T>(T item) method d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor+K110 10. What impact will using implicitly typed local variables as in the following example have? var sample = "Hello World"; a. The actual type is determined at compilation time, and has no impact on the runtime b. The actual type is determined at runtime, and late binding takes effect c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place. d. "var" itself is a specific type defined by the framework, and no special binding takes place 11. Which of the following is not supported by remoting object types? a. well-known singleton b. well-known single call c. client activated d. context-agile 12. In which of the following ways do structs differ from classes? a. Structs can not implement interfaces b. Structs cannot inherit from a base struct c. Structs cannot have events interfaces d. Structs cannot have virtual methods 13. Which of the following is not an unboxing conversion? a. void Sample1(object o) { int i = (int)o; } b. void Sample1(ValueType vt) { int i = (int)vt; } c. enum E { Hello, World} void Sample1(System.Enum et) { E e = (E) et; } d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; } e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; } 14. Which of the following are characteristics of the System.Threading.Timer class? a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer. b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread) c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method. 15. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 16. Which of the following scenarios are applicable to Window Workflow Foundation? a. Document-centric workflows b. Human workflows c. User-interface page flows d. Builtin support for communications across multiple applications and/or platforms e. 
All of the above 17. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 18 While using the capabilities supplied by the System.Messaging classes, which of the following are true? a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object. c. Objects must be XMLSerializable in order to be transferred over a MessageQueue instance. d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed e. Entries removed from a MessageQueue within the scope of a transaction, will be pushed back into the front of the queue if the transaction fails. 19. Which of the following are true about declarative attributes? a. They must be inherited from the System.Attribute. b. Attributes are instantiated at the same time as instances of the class to which they are applied. c. Attribute classes may be restricted to be applied only to application element types. d. By default, a given attribute may be applied multiple times to the same application element. 20. When using version 3.5 of the framework in applications which emit a dynamic code, which of the following are true? a. A Partial trust code can not emit and execute a code b. A Partial trust application must have the SecurityCriticalAttribute attribute have called Assert ReflectionEmit permission c. The generated code no more permissions than the assembly which emitted it. d. It can be executed by calling System.Reflection.Emit.DynamicMethod( string name, Type returnType, Type[] parameterTypes ) without any special permissions Within Windows Workflow Foundation, Compensating Actions are used for: a. provide a means to rollback a failed transaction b. provide a means to undo a successfully committed transaction later c. provide a means to terminate an in process transaction d. achieve load balancing by adapting to the current activity 21. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 22. Which of the following controls allows the use of XSL to transform XML content into formatted content? a. System.Web.UI.WebControls.Xml b. System.Web.UI.WebControls.Xslt c. System.Web.UI.WebControls.Substitution d. System.Web.UI.WebControls.Transform 23. To which of the following do automatic properties refer? a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field b. You attribute a member field so that the compiler will generate get and set accessors c. The compiler creates properties for your class based on class level attributes d. 
They are properties which are automatically invoked as part of the object construction process 24. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. An implicit conversion exists from any non-nullable value type to a nullable form of that type. c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 25. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is accessible via reflection. c. The compiler generates a code that will store the information separately from the instance to ensure its security. 26. When using an implicitly typed array, which of the following is most appropriate? a. All elements in the initializer list must be of the same type. b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list c. All elements in the initializer list must be implicitly convertible to common type which is a base type of the items actually in the list 27. Which of the following is false about anonymous types? a. They can be derived from any reference type. b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type. c. All properties of an anonymous type are read/write. 28. Which of the following are true about Extension methods. a. They can be declared either static or instance members b. They must be declared in the same assembly (but may be in different source files) c. Extension methods can be used to override existing instance methods d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors

    Read the article

  • Xml failing to deserialise

    - by Carnotaurus
    I call a method to get my pages [see GetPages(String xmlFullFilePath)]. The FromXElement method is supposed to deserialise the LitePropertyData elements to strongly type LitePropertyData objects. Instead it fails on the following line: return (T)xmlSerializer.Deserialize(memoryStream); and gives the following error: <LitePropertyData xmlns=''> was not expected. What am I doing wrong? I have included the methods that I call and the xml data: public static T FromXElement<T>(this XElement xElement) { using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(xElement.ToString()))) { var xmlSerializer = new XmlSerializer(typeof(T)); return (T)xmlSerializer.Deserialize(memoryStream); } } public static List<LitePageData> GetPages(String xmlFullFilePath) { XDocument document = XDocument.Load(xmlFullFilePath); List<LitePageData> results = (from record in document.Descendants("row") select new LitePageData { Guid = IsValid(record, "Guid") ? record.Element("Guid").Value : null, ParentID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null, Created = Convert.ToDateTime(record.Element("Created").Value), Changed = Convert.ToDateTime(record.Element("Changed").Value), Name = record.Element("Name").Value, ID = Convert.ToInt32(record.Element("ID").Value), LitePageTypeID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null, Html = record.Element("Html").Value, FriendlyName = record.Element("FriendlyName").Value, Properties = record.Element("Properties") != null ? record.Element("Properties").Element("LitePropertyData").FromXElement<List<LitePropertyData>>() : new List<LitePropertyData>() }).ToList(); return results; } Here is the xml: <?xml version="1.0" encoding="utf-8"?> <root> <rows> <row> <ID>1</ID> <ImageUrl></ImageUrl> <Html>Home page</Html> <Created>01-01-2012</Created> <Changed>01-01-2012</Changed> <Name>Home page</Name> <FriendlyName>home-page</FriendlyName> </row> <row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Guid>edeaf468-f490-4271-bf4d-be145bc6a1fd</Guid> <ID>8</ID> <Name>Unused</Name> <ParentID>1</ParentID> <Created>2006-03-25T10:57:17</Created> <Changed>2012-07-17T12:24:30.0984747+01:00</Changed> <ChangedBy /> <LitePageTypeID xsi:nil="true" /> <Html> What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be ???well-formed???. The standard that a designer chooses is determined usually by the requirements specification document. 
This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not ???understand??? what the JavaScript does. </Html> <FriendlyName>unused</FriendlyName> <IsDeleted>false</IsDeleted> <Properties> <LitePropertyData> <Description>Image for the page</Description> <DisplayEditUI>true</DisplayEditUI> <OwnerTab>1</OwnerTab> <DisplayName>Image Url</DisplayName> <FieldOrder>1</FieldOrder> <IsRequired>false</IsRequired> <Name>ImageUrl</Name> <IsModified>false</IsModified> <ParentPageID>3</ParentPageID> <Type>String</Type> <Value xsi:type="xsd:string">smarter.jpg</Value> </LitePropertyData> <LitePropertyData> <Description>WebItemApplicationEnum</Description> <DisplayEditUI>true</DisplayEditUI> <OwnerTab>1</OwnerTab> <DisplayName>WebItemApplicationEnum</DisplayName> <FieldOrder>1</FieldOrder> <IsRequired>false</IsRequired> <Name>WebItemApplicationEnum</Name> <IsModified>false</IsModified> <ParentPageID>3</ParentPageID> <Type>Number</Type> <Value xsi:type="xsd:string">1</Value> </LitePropertyData> </Properties> <Seo> <Author>Phil Carney</Author> <Classification /> <Copyright>Carnotaurus</Copyright> <Description> What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be ???well-formed???. 
The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not ???understand??? what the JavaScript does. </Description> <Keywords>unused</Keywords> <Title>unused</Title> </Seo> </row> </rows> </root> EDIT Here are my entities: public class LitePropertyData { public virtual string Description { get; set; } public virtual bool DisplayEditUI { get; set; } public int OwnerTab { get; set; } public virtual string DisplayName { get; set; } public int FieldOrder { get; set; } public bool IsRequired { get; set; } public string Name { get; set; } public virtual bool IsModified { get; set; } public virtual int ParentPageID { get; set; } public LiteDataType Type { get; set; } public object Value { get; set; } } [Serializable] public class LitePageData { public String Guid { get; set; } public Int32 ID { get; set; } public String Name { get; set; } public Int32? ParentID { get; set; } public DateTime Created { get; set; } public String CreatedBy { get; set; } public DateTime Changed { get; set; } public String ChangedBy { get; set; } public Int32? LitePageTypeID { get; set; } public String Html { get; set; } public String FriendlyName { get; set; } public Boolean IsDeleted { get; set; } public List<LitePropertyData> Properties { get; set; } public LiteSeoPageData Seo { get; set; } /// <summary> /// Saves the specified XML full file path. /// </summary> /// <param name="xmlFullFilePath">The XML full file path.</param> public void Save(String xmlFullFilePath) { XDocument doc = XDocument.Load(xmlFullFilePath); XElement demoNode = this.ToXElement<LitePageData>(); demoNode.Name = "row"; doc.Descendants("rows").Single().Add(demoNode); doc.Save(xmlFullFilePath); } }
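The excerpt ends before any accepted answer, but a likely cause (my reading, not confirmed by the post) is that FromXElement<List<LitePropertyData>>() is handed a single <LitePropertyData> element: the XmlSerializer built for List<LitePropertyData> expects a root element named ArrayOfLitePropertyData, so a LitePropertyData root produces exactly the "was not expected" error. One hedged, untested way around it is to deserialize each <LitePropertyData> child individually and collect the results; the helper name below is mine, and the usual System.Linq / System.Xml.Linq usings are assumed:

    // Hypothetical fix sketch: deserialize one element at a time so the serializer's
    // expected root element name matches the type name.
    private static List<LitePropertyData> ReadProperties(XElement row)
    {
        XElement properties = row.Element("Properties");
        if (properties == null)
            return new List<LitePropertyData>();

        return properties.Elements("LitePropertyData")   // all children, not just the first
                         .Select(p => p.FromXElement<LitePropertyData>())
                         .ToList();
    }

An alternative is to deserialize the whole <Properties> element in one go with an XmlSerializer constructed as new XmlSerializer(typeof(List<LitePropertyData>), new XmlRootAttribute("Properties")). Two smaller asides: the LitePageTypeID assignment reads the ParentID element, which looks like a copy/paste slip unrelated to the serializer error, and switching Encoding.ASCII to Encoding.UTF8 in FromXElement avoids corrupting any non-ASCII content.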

    Read the article

  • ASP.NET MVC Custom Profile Provider

    - by Ben Griswold
It’s been a long while since I last used the ASP.NET Profile provider. It’s a shame, too, because it just works with very little development effort: Membership tables installed? Check. Profile enabled in web.config? Check. SqlProfileProvider connection string set? Check. Profile properties defined in said web.config file? Check. Write code to set value, read value, build and test. Check. Check. Check. Yep, I thought the built-in Profile stuff was pure gold until I noticed how the user-based information is persisted to the database. It’s stored as xml and, well, that was going to be trouble if I ever wanted to query the profile data. So, I have avoided the super-easy-to-use ASP.NET Profile provider ever since, until this week, when I decided I could use it to store user-specific properties which I am 99% positive I’ll never need to query against ever. I opened up my ASP.NET MVC application, completed steps 1-4 (above) in about 3 minutes, started writing my profile get/set code and that’s where the plan broke down. Oh yeah. That’s right. Visual Studio auto-generates a strongly-typed Profile reference for web site projects but not for ASP.NET MVC or Web Applications. Bummer. So, I went through the steps of getting a custom profile provider working in my ASP.NET MVC application: First, I defined a CurrentUser routine and my profile properties in a custom Profile class like so: using System.Web.Profile; using System.Web.Security; using Project.Core; namespace Project.Web.Context { public class MemberPreferencesProfile : ProfileBase { static public MemberPreferencesProfile CurrentUser { get { return (MemberPreferencesProfile) Create(Membership.GetUser().UserName); } } public Enums.PresenceViewModes? ViewMode { get { return ((Enums.PresenceViewModes) ( base["ViewMode"] ?? Enums.PresenceViewModes.Category)); } set { base["ViewMode"] = value; Save(); } } } } And then I replaced the existing profile configuration in web.config with the following: <profile enabled="true" defaultProvider="MvcSqlProfileProvider" inherits="Project.Web.Context.MemberPreferencesProfile"> <providers> <clear/> <add name="MvcSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/"/> </providers> </profile> Notice that profile is enabled, I’ve defined the defaultProvider and profile is now inheriting from my custom MemberPreferencesProfile class. Finally, I am now able to set and get profile property values nearly the same way as I did with website projects: viewMode = MemberPreferencesProfile.CurrentUser.ViewMode; MemberPreferencesProfile.CurrentUser.ViewMode = viewMode;
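With that wiring in place, reading and writing the property from inside a controller is one line each. The action below is my own illustration, not code from the post; everything other than ViewMode, MemberPreferencesProfile and the Category enum value is assumed:

    // Hypothetical controller action. CurrentUser calls Membership.GetUser(),
    // so the request must come from an authenticated user.
    public ActionResult SetCategoryView()
    {
        MemberPreferencesProfile.CurrentUser.ViewMode = Enums.PresenceViewModes.Category; // setter calls Save()
        Enums.PresenceViewModes? current = MemberPreferencesProfile.CurrentUser.ViewMode; // read back through the provider
        return Content("View mode is now " + current);
    }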

    Read the article

  • [Windows 8] Update TextBox’s binding on TextChanged

    - by Benjamin Roux
Since UpdateSourceTrigger is not available in WinRT we cannot update the text’s binding of a TextBox at will (or at least not easily), especially when using MVVM (I surely don’t want to write code-behind to do that in each of my apps!). Since this kind of demand is frequent (for example to disable a button if the TextBox is empty) I decided to create some attached properties to simulate this missing behavior. namespace Indeed.Controls { public static class TextBoxEx { public static string GetRealTimeText(TextBox obj) { return (string)obj.GetValue(RealTimeTextProperty); } public static void SetRealTimeText(TextBox obj, string value) { obj.SetValue(RealTimeTextProperty, value); } public static readonly DependencyProperty RealTimeTextProperty = DependencyProperty.RegisterAttached("RealTimeText", typeof(string), typeof(TextBoxEx), null); public static bool GetIsAutoUpdate(TextBox obj) { return (bool)obj.GetValue(IsAutoUpdateProperty); } public static void SetIsAutoUpdate(TextBox obj, bool value) { obj.SetValue(IsAutoUpdateProperty, value); } public static readonly DependencyProperty IsAutoUpdateProperty = DependencyProperty.RegisterAttached("IsAutoUpdate", typeof(bool), typeof(TextBoxEx), new PropertyMetadata(false, OnIsAutoUpdateChanged)); private static void OnIsAutoUpdateChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e) { var value = (bool)e.NewValue; var textbox = (TextBox)sender; if (value) { Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>( o => textbox.TextChanged += o, o => textbox.TextChanged -= o) .Do(_ => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text)) .Subscribe(); } } } } The code is composed of two attached properties. The first one, “RealTimeText”, reflects the text in real time (updated after each TextChanged event). The second one is only used to enable the functionality. To subscribe to the TextChanged event I used Reactive Extensions (the Rx-Metro package on NuGet).
If you’re not familiar with this framework, just replace that code with a simple: textbox.TextChanged += (o, args) => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text); To use these attached properties, it’s fairly simple: <TextBox Text="{Binding Path=MyProperty, Mode=TwoWay}" ic:TextBoxEx.IsAutoUpdate="True" ic:TextBoxEx.RealTimeText="{Binding Path=MyProperty, Mode=TwoWay}" /> Just make sure to create a binding (in TwoWay mode) for both Text and RealTimeText. Hope this helps!
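On the view-model side, MyProperty is an ordinary bindable property. The sketch below is my own (the post does not show its view model) and assumes standard INotifyPropertyChanged plumbing; the setter is where you would, for example, re-evaluate a command's CanExecute to disable a button while the text is empty:

    using System.ComponentModel;

    public class MyViewModel : INotifyPropertyChanged
    {
        private string myProperty;

        public string MyProperty
        {
            get { return myProperty; }
            set
            {
                if (myProperty == value) return;
                myProperty = value;
                // Fires on every keystroke thanks to the RealTimeText binding above.
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("MyProperty"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }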

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more.  Here’s a quick look at these new font features in Office 2010. Introduction Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products.  Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem.  It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures. Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love.  This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more.  These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7. Please note that Windows does include several OpenType fonts that include these advanced features.  Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features.  And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms. Using advanced typography in Word To use the new font features, open a new document, select an OpenType font, and enter some text.  Here we have Word 2010 in Windows 7 with some random text in the Gabriola font.  Click the arrow on the bottom of the Font section of the ribbon to open the font properties. Alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting… Choose Proportional or Tabular number spacing… And even select Lining or Old-style number forms. Here’s a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font. Finally, you can choose various Stylistic sets for your font.  The dialog always shows 20 styles, whether or not your font includes that many.  Most include only 1 or 2; Gabriola includes 6. Here’s lorem ipsum text, using the Gabriola font with Stylistic set 6. Impressive, huh?  The font ligatures change based on context, so they will automatically change as you are typing.  Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6. Here’s another example, showing the fi and tt ligatures in Calibri. These effects work great in Word 2010 in XP, too. And, since Outlook uses Word as it’s editing engine, you can use the same options in Outlook 2010.  Note that these font effects may not show up the same if the recipient’s email client doesn’t support advanced OpenType typography.  It will, of course, display perfectly if the recipient is using Outlook 2010. Using advanced typography in Publisher 2010 Publisher 2010 includes the same advanced font features.  This is especially nice for those using Publisher for professional layout and design.  Simply insert a text box, enter some text, select it, and click the arrow on the bottom of the font box as in Word to open the font properties. This font options dialog is actually more advanced than Word’s font options.  You can preview your font changes on sample text right in the properties box.  
You can also choose to add or remove a swash from your characters. Conclusion Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola. From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features? Let us know in the comments! More Reading Advances in typography in Windows 7 – Engineering 7 Blog New features in Microsoft Word 2010

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You’ll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrator's guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it's now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product along with KMs for reverse engineering and using OBIEE models as a data source. When you install OBIEE 11g a lightweight demonstration application is preinstalled in the server; when you open this in the BI Administration tool we see the regular 3 panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a java client program and the credentials. In my java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection’s DatabaseMetadata changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the java client connects. Let’s use the sample app to illustrate. I have a java client program here that will print out the tables in the DatabaseMetadata; it will also output the catalog and schema. For example, if I execute without any special JDBC properties as follows: java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass Then I get the following returned representing the presentation layer (the sample I used is XML, and has no schema):
Catalog | Schema | Table
Sample Sales Lite | null | Base Facts
Sample Sales Lite | null | Calculated Facts
…
Sample Targets Lite | null | Base Facts
…
Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes, then I see a different set of values representing the physical layer in OBIEE: java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes The following is returned:
Catalog | Schema | Table
Sample App Lite Data | null | D01 Time Day Grain
Sample App Lite Data | null | F10 Revenue Facts (Order grain)
…
System DB (Update me)
…
If this was a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQLServer, would use their catalog value. It's this ‘Catalog’ and ‘Schema’ value that is important when integrating OBIEE with ODI. For the demonstration application in OBIEE 11g, the following illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI’s physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was ‘<Undefined>’ (like the image below), but this string is actually used by the RKM, so it ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema, physical table structure, and how this is defined in ODI.
Remember the physical versus presentation layer switch we made by passing the special property? To do this in ODI, the data server has a panel for properties where you can define key/value pairs. So if you want to select physical objects from the OBIEE server, then you must set this property. An additional change in ODI 11g is the OBIEE connection pool support; this has been implemented via a ‘Connection Pool’ flex field for the Oracle BI data server. So here you set the connection pool name from the OBIEE system that you specifically want to use, and this is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using this LKM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this hangs together.

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 2

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 3 Creating the Client Report Definition file (RDLC) Add a folder called “RDLC”. This will hold our RDLC report.   Right click on the RDLC folder, select “Add new item..” and add an “RDLC” name of “Products”. We will use the “Report Wizard” to walk us through the steps of creating the RDLC.   In the next dialog, give the dataset a name called “ProductDataSet”. Change the data source to “NorthwindReports.DAL” and select “ProductRepository(GetProductsProjected)”. The fields that are returned from the method are shown on the right. Click next.   Drag and drop the ProductName, CategoryName, UnitPrice and Discontinued into the Values container. Note that you can create much more complex grouping using this UI. Click Next.   Most of the selections on this screen are grayed out because we did not choose a grouping in the previous screen. Click next. Choose a style for your report. Click next. The report graphic design surface is now visible. Right click on the report and add a page header and page footer. With the report design surface active, drag and drop a TextBox from the tool box to the page header. Drag one more textbox to the page header. We will use the text boxes to add some header text as shown in the next figure. You can change the font size and other properties of the textboxes using the formatting tool bar (marked in red). You can also resize the columns by moving your cursor in between columns and dragging. Adding Expressions Add two more text boxes to the page footer. We will use these to add the time the report was generated and page numbers. Right click on the first textbox in the page footer and select “Expression”. Add the following expression for the print date (note the = sign at the left of the expression in the dialog below) "© Northwind Traders " & Format(Now(),"MM/dd/yyyy hh:mm tt") Right click on the second text box and add the following for the page count.   Globals.PageNumber & " of " & Globals.TotalPages Formatting the page footer is complete.   We are now going to format the “Unit Price” column so it displays the number in currency format.  Right click on the [UnitPrice] column (not header) and select “Text Box Properties..” Under “Number”, select “Currency”. Hit OK. Adding a chart With the design surface active, go to the toolbox and drag and drop a chart control. You will need to move the product list table down first to make space for the chart contorl. The document can also be resized by dragging on the corner or at the page header/footer separator. In the next dialog, pick the first chart type. This can be changed later if needed. Click OK. The chart gets added to the design surface.   Click on the blue bars in the chart (not legend). This will bring up drop locations for dropping the fields. Drag and drop the UnitPrice and CategoryName into the top (y axis) and bottom (x axis) as shown below. This will give us the total unit prices for a given category. That is the best I could come up with as far as what report to render, sorry :-) Delete the legend area to get more screen estate. Resize the chart to your liking. Change the header, x axis and y axis text by double clicking on those areas. We made it this far. Let’s impress the client by adding a gradient to the bar graph :-) Right click on the blue bar and select “Series properties”. 
Under “Fill”, add a color and secondary color and select the Gradient style. We are done designing our report. In the next section you will see how to add the report to the report viewer control, bind to the data and make it refresh when the filter criteria are changed.   Creating an ASP.NET report using Visual Studio 2010 - Part 3
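The repository method the wizard binds to, ProductRepository.GetProductsProjected, is defined back in Part 1 and is not shown here. As a rough idea of its shape, it presumably returns a flat projection with exactly the four fields the wizard listed; the sketch below is my guess at that shape, and every name in it apart from those four fields (the view-model type, the context field) is an assumption, not code from the series:

    // Assumed shape only: project Northwind products to the flat fields the report uses.
    public IQueryable<ProductViewModel> GetProductsProjected()
    {
        return from p in context.Products
               select new ProductViewModel
               {
                   ProductName  = p.ProductName,
                   CategoryName = p.Category.CategoryName,
                   UnitPrice    = p.UnitPrice,
                   Discontinued = p.Discontinued
               };
    }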

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia: Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping so you can use it to make a class undo/redo aware and set properties, use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples. Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code. Heaps. Heaps are data-structures which have the largest or smallest item stored in them always as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap, all heaps in Algorithmia can do both). 
Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one. Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them. Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it already is stored in. PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or other format, and want to create an editable form of it without creating many editors. IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box. EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events. Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures. We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
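The command-based undo/redo idea at the heart of the library is easier to picture with a tiny sketch. The following is my own generic illustration of the pattern in plain C#; it is not Algorithmia's actual API (its real types, such as CommandifiedMember, have richer signatures than this), so treat every name here as hypothetical:

    using System;
    using System.Collections.Generic;

    // Generic illustration only: an action paired with its inverse.
    class UndoableCommand
    {
        public Action Execute { get; set; }
        public Action Revert { get; set; }
    }

    // Executes commands and can roll them back in reverse order.
    class UndoStack
    {
        private readonly Stack<UndoableCommand> done = new Stack<UndoableCommand>();

        public void Run(UndoableCommand command)
        {
            command.Execute();
            done.Push(command);
        }

        public void Undo()
        {
            if (done.Count > 0)
                done.Pop().Revert();
        }
    }

    // A "commandified" property: every assignment goes through a command,
    // so the previous value can be restored without extra code at the call site.
    class Person
    {
        private readonly UndoStack history;
        private string name;

        public Person(UndoStack history) { this.history = history; }

        public string Name
        {
            get { return name; }
            set
            {
                string oldValue = name;
                history.Run(new UndoableCommand
                {
                    Execute = () => name = value,
                    Revert = () => name = oldValue
                });
            }
        }
    }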

    Read the article

  • Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

    - by fip
WebLogic JMS Topics typically run in a WLS cluster, and so do the SOA composites that receive their messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Since a JMS Topic by nature delivers a copy of every message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites: How to assure all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How to make sure the messages are evenly distributed (load balanced) to SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages will be evenly distributed to the same composite running on different SOA cluster nodes without causing duplication. 1. The typical configuration In this typical configuration, we achieve the load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDT. And this blog posting does a very good job of visually explaining how this combination of configurations load balances messages among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. To do so involves the following steps: Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic Step B. Configure the JCA Connection Factory with proper ServerProperties at the SOA cluster Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file) Here are more details of each step: Step A. Configure the JMS Topic to be UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then at the same administration screen you can specify "Distribution Type" to be "Uniform", and the Forwarding Policy to be "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B: Configure ServerProperties of the JCA Connection Factory. You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory behave as a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list: ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: Except for ClientID, the other properties such as ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false are all default settings for the JmsAdapter's connection factory. Therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
The ClientID is different from the subscriber ID that we discuss in the later steps. To keep it simple, just remember that you need to specify the client ID and make it unique per connection factory. Here is the example setting: Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file). In the following example, the value 'MySubscriberID-1' was given as the value of the property 'DurableSubscriber': <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata"> <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/> <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message"> <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"> <property name="DurableSubscriber" value="MySubscriberID-1"/> <property name="PayloadType" value="TextMessage"/> <property name="UseMessageListener" value="false"/> <property name="DestinationName" value="jms/MyTestUDDTopic"/> </activation-spec> </endpoint-activation> </adapter-config> You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the Composite project. 2. The "atypical" configurations: For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We would like to discuss those scenarios here: Configuration A: The JMS Topic is NOT a PDT. In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages. Configuration B: ClientIDPolicy=RESTRICTED. In this case, you need separate factories for different composites. More accurately, you need separate factories for the different *.jca files of the JmsAdapter. References: Managing Durable Subscription WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions JMS Troubleshooting: Configuring JMS Message Logging: Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API
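To make Configuration A concrete, the selector goes into the same activation-spec shown above. The fragment below is my own sketch: it reuses the sample values from the earlier *.jca listing and assumes the adapter's MessageSelector activation-spec property, so verify the property name against your adapter version:

    <!-- Sketch only: filter out forwarded copies when the topic is not a PDT -->
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
    </activation-spec>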

    Read the article

  • Configuring Expert Search in Communicator 14 and SharePoint 2010

Communicator 14 provides functionality to be able to search for contacts not just by name, but by skill. For example, a customer service agent at an airline can search for other agents with Travel Advisory experience by typing the search criteria into the Communicator search box and performing a search by keyword. The search results will return users who have specified that skill in their profile on their SharePoint My Site. This is actually pretty easy to configure; I'll show you how. Create Search and People Search Results Pages in SharePoint Communicator 14 Expert Search works by using the SharePoint 2010 Search Service to search SharePoint for user profiles with matching keywords. This requires that you have an Enterprise Search site in your site collection which includes the search service and also the People Results pages. The easiest way to do this is to create a Search Center site in your site collection. Note: I get an error when trying to create an Enterprise Search site in a Team Site in the SharePoint 2010 RTM bits, so I created it as a site collection, which is evident in the URLs you see below. In the screenshots below, you can see that the URL of the SharePoint search service in the Search site collection is http://sps2010/sites/search/_vti_bin/search.asmx, and the URL of the People Search Results page is http://sps2010/sites/Search/Pages/peopleresults.aspx. Point Communications Server 14 to Search and People Search Results Pages For Communicator 14 to be able to perform an Expert Search, you need to configure Communications Server 14 to point to the Search Service and People Search Results page URLs. From a server with the OCS Core bits installed, fire up the Communications Server Management Shell and type Get-CsClientPolicy. Scroll down to the bottom of the output; we're interested in setting the values of: SPSearchInternalURL SPSearchExternalURL SPSearchCenterInternalURL SPSearchCenterExternalURL SPSearchInternalURL and SPSearchExternalURL correspond to the internal and external URLs of the SharePoint search service in the Search site collection, while SPSearchCenterInternalURL and SPSearchCenterExternalURL correspond to the internal and external URLs of the people search results pages. We'll use the Communications Server Management Shell to set the values of these CS policy properties. For simplicity, I'm only going to set the internal URLs here. Set-CsClientPolicy -SPSearchInternalURL http://sps2010/sites/search/_vti_bin/search.asmx -SPSearchCenterInternalURL http://sps2010/sites/Search/Pages/peopleresults.aspx Log out and back into Communicator. You can verify that these settings were applied by running the Get-CsClientPolicy cmdlet again from the Communications Server Management Shell. However, there's another super-secret ninja trick to verify that the settings were applied: Find the Communicator icon in the Windows System Tray. Hold down the Ctrl button. Click (left) the Communicator icon in the Windows System Tray while keeping the Ctrl button held down. You should now see an extra menu item called Configuration Information; click it. Scroll down and locate the Expert Search URL and SharePoint Search Center URL keys and verify that their values correspond to those you set using the Set-CsClientPolicy PowerShell cmdlet. Configure a SharePoint User Profile Import I'm not going to provide detailed steps here except to say that you need to configure the SharePoint 2010 User Profile Service Application to import user account details from Active Directory on a scheduled basis.
This is a critical step because there are several user profile properties, e.g. SipAddress, that are only populated by a user profile import. When performing an Expert Search, Communicator can only render results for users who have a SipAddress specified. Add Skills to User Profiles Navigate to your My Site and click on My Profile. This page allows you to set many contact details that are searchable in SharePoint. We're particularly interested in the Ask Me About property of a user's profile. Expert Search searches against this property to find users with matching skills. Configure a SharePoint Search Crawl Ensure that you have a scheduled job to crawl your Local SharePoint Sites content source. Depending on how you have this configured, it will also crawl the My Site site collection and add user properties such as Ask Me About to the search index. That's it! SharePoint 2010 provides new social and collaboration features to help users find other users with similar skills or interests. Expert Search extends this functionality directly into Microsoft Communicator 14, allowing you to interact with the users directly from the search results.

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now Voila! Why are we doing this again? The goal is to reduce the number of round-trips from the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >