Search Results

Search found 10662 results on 427 pages for 'parameter passing'.


  • Problem starting my RMI server (under ISP) so that it can receive remote calls over the Internet -- Java

    - by Lokesh Kumar
    I am creating a client/server application in which my server and client can be on the same machine or on different machines, but both sit behind an ISP. My RMI programs:

    Remote interface:

        // Calculator.java
        public interface Calculator extends java.rmi.Remote {
            public long add(long a, long b) throws java.rmi.RemoteException;
            public long sub(long a, long b) throws java.rmi.RemoteException;
            public long mul(long a, long b) throws java.rmi.RemoteException;
            public long div(long a, long b) throws java.rmi.RemoteException;
        }

    Remote interface implementation:

        // CalculatorImpl.java
        public class CalculatorImpl extends java.rmi.server.UnicastRemoteObject implements Calculator {
            public CalculatorImpl() throws java.rmi.RemoteException {
                super();
            }
            public long add(long a, long b) throws java.rmi.RemoteException { return a + b; }
            public long sub(long a, long b) throws java.rmi.RemoteException { return a - b; }
            public long mul(long a, long b) throws java.rmi.RemoteException { return a * b; }
            public long div(long a, long b) throws java.rmi.RemoteException { return a / b; }
        }

    Server:

        // CalculatorServer.java
        import java.rmi.Naming;
        import java.rmi.server.RemoteServer;

        public class CalculatorServer {
            // the host is passed in from main() so that args[0] is in scope here
            public CalculatorServer(String host) {
                try {
                    Calculator c = new CalculatorImpl();
                    Naming.rebind("rmi://" + host + ":1099/CalculatorService", c);
                } catch (Exception e) {
                    System.out.println("Trouble: " + e);
                }
            }
            public static void main(String args[]) {
                new CalculatorServer(args[0]);
            }
        }

    Client:

        // CalculatorClient.java
        import java.rmi.Naming;
        import java.rmi.RemoteException;
        import java.net.MalformedURLException;
        import java.rmi.NotBoundException;

        public class CalculatorClient {
            public static void main(String[] args) {
                try {
                    Calculator c = (Calculator) Naming.lookup("rmi://" + args[0] + "/CalculatorService");
                    System.out.println(c.sub(4, 3));
                    System.out.println(c.add(4, 5));
                    System.out.println(c.mul(3, 6));
                    System.out.println(c.div(9, 3));
                } catch (MalformedURLException murle) {
                    System.out.println(); System.out.println("MalformedURLException"); System.out.println(murle);
                } catch (RemoteException re) {
                    System.out.println(); System.out.println("RemoteException"); System.out.println(re);
                } catch (NotBoundException nbe) {
                    System.out.println(); System.out.println("NotBoundException"); System.out.println(nbe);
                } catch (java.lang.ArithmeticException ae) {
                    System.out.println(); System.out.println("java.lang.ArithmeticException"); System.out.println(ae);
                }
            }
        }

    When both the server and client programs are on the same machine, I start my server program by passing my router's static IP address (192.168.1.35) in args[0] and the server starts fine; passing the same static IP address in the client's args[0] also works fine.

    However, when the server and client programs are on different machines, I try to start my server program by passing its public IP address (59.178.198.247) in args[0] so that it can receive calls over the Internet, but I am unable to start it, and the following exception occurs:

        Trouble: java.rmi.ConnectException: Connection refused to host: 59.178.198.247; nested exception is:
            java.net.ConnectException: Connection refused: connect

    I think this is a NAT problem, because I am behind an ISP. So my question is: how can I start my RMI server under an ISP so that it can receive remote calls from the Internet?


  • Object of type 'customObject' cannot be converted to type 'customObject'.

    - by Phani Kumar PV
    i am receiving the follwing error when i am invoking a custom object "Object of type 'customObject' cannot be converted to type 'customObject'." Following is the scenario when i am getting the error i am invoking a method in a dll dynamically. Load an assembly CreateInstance.... calling MethodInfo.Invoke() passing int, string as a parameter for my method is working fine = No exceptions are thrown. But if I try and pass a one of my own custom class objects as a parameter, then I get an ArgumentException exception, and it is not either an ArgumentOutOfRangeException or ArgumentNullException. "Object of type 'customObject' cannot be converted to type 'customObject'." I am doing this in a web application. The class file containing the method is in a different proj . also the custom object is a sepearte class in the same file. there is no such thing called a static aseembly in my code. I am trying to invoke a webmethod dynamically. this webmethod is having the customObject type as an input parameter. So when i invoke the webmethod i am dynamically creating the proxy assembly and all. From the same assembly i am trying to create an instance of the cusotm object assinging the values to its properties and then passing this object as a parameter and invoking the method. everything is dynamic and nothing is created static.. :( add reference is not used. Following is a sample code i tried to create it public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args) { System.Net.WebClient client = new System.Net.WebClient(); //-Connect To the web service using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl")) { //--Now read the WSDL file describing a service. ServiceDescription description = ServiceDescription.Read(stream); ///// LOAD THE DOM ///////// //--Initialize a service description importer. ServiceDescriptionImporter importer = new ServiceDescriptionImporter(); importer.ProtocolName = "Soap12"; // Use SOAP 1.2. importer.AddServiceDescription(description, null, null); //--Generate a proxy client. importer.Style = ServiceDescriptionImportStyle.Client; //--Generate properties to represent primitive values. importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties; //--Initialize a Code-DOM tree into which we will import the service. CodeNamespace nmspace = new CodeNamespace(); CodeCompileUnit unit1 = new CodeCompileUnit(); unit1.Namespaces.Add(nmspace); //--Import the service into the Code-DOM tree. This creates proxy code //--that uses the service. ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1); if (warning == 0) //--If zero then we are good to go { //--Generate the proxy code CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp"); //--Compile the assembly proxy with the appropriate references string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" }; CompilerParameters parms = new CompilerParameters(assemblyReferences); CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1); //-Check For Errors if (results.Errors.Count > 0) { StringBuilder sb = new StringBuilder(); foreach (CompilerError oops in results.Errors) { sb.AppendLine("========Compiler error============"); sb.AppendLine(oops.ErrorText); } throw new System.ApplicationException("Compile Error Occured calling webservice. 
" + sb.ToString()); } //--Finally, Invoke the web service method Type foundType = null; Type[] types = results.CompiledAssembly.GetTypes(); foreach (Type type in types) { if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol)) { Console.WriteLine(type.ToString()); foundType = type; } } object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString()); MethodInfo mi = wsvcClass.GetType().GetMethod(methodName); return mi.Invoke(wsvcClass, args); } else { return null; } } } I cant find anything static being done by me. any help is greatly appreciated. Regards, Phani Kumar PV

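    The "cannot be converted" error usually comes from a type-identity problem rather than from the data itself: two classes can share a name, but if they are loaded from different assemblies (the caller's own customObject class versus the customObject proxy type emitted into the dynamically compiled assembly), .NET treats them as unrelated types and Invoke rejects the argument. A minimal sketch of the relevant part of CallWebService, building the argument from the proxy assembly itself -- the type name "CustomObject" and the property "SomeProperty" are placeholders, not names taken from the post:

        // Build the parameter from the same generated assembly that defines the
        // web method's parameter type, instead of using the local class of the same name.
        System.Reflection.Assembly proxyAssembly = results.CompiledAssembly;

        Type paramType = proxyAssembly.GetType("CustomObject");                   // placeholder type name
        object arg = Activator.CreateInstance(paramType);
        paramType.GetProperty("SomeProperty").SetValue(arg, "some value", null);  // placeholder property

        object wsvcClass = proxyAssembly.CreateInstance(foundType.ToString());
        System.Reflection.MethodInfo mi = wsvcClass.GetType().GetMethod(methodName);
        return mi.Invoke(wsvcClass, new object[] { arg });

    Passing the incoming args array straight through only works if every element in it was created from types the proxy assembly already knows about.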

  • Compiler issues on VC++ 2008 Express: seemingly correct code throws errors

    - by Anthony Clever
    Hi there, I've been trying to get back into coding for a while, so I figured I'd start with some simple SDL, now, without the file i/o, this compiles fine, but when I throw in the stdio code, it starts throwing errors. This I'm not sure about, I don't see any problem with the code itself, however, like I said, I might as well be a newbie, and figured I'd come here to get someone with a little more experience with this type of thing to look at it. I guess my question boils down to: "Why doesn't this compile under Microsoft's Visual C++ 2008 Express?" I've attached the error log at the bottom of the code snippet. Thanks in advance for any help. #include "SDL/SDL.h" #include "stdio.h" int main(int argc, char *argv[]) { FILE *stderr; FILE *stdout; stderr = fopen("stderr", "wb"); stdout = fopen("stdout", "wb"); SDL_Init(SDL_INIT_EVERYTHING); fprintf(stdout, "SDL INITIALIZED SUCCESSFULLY\n"); SDL_Quit(); fprintf(stderr, "SDL QUIT.\n"); fclose(stderr); fclose(stdout); return 0; } /* 1>------ Build started: Project: opengl_crap, Configuration: Debug Win32 ------ 1>Compiling... 1>main.cpp 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2556: 'FILE ***__iob_func(void)' : overloaded function differs only by return type from 'FILE *__iob_func(void)' 1> c:\program files\microsoft visual studio 9.0\vc\include\stdio.h(132) : see declaration of '__iob_func' 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(9) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(10) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(13) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(15) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(17) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or 
function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(18) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>Build log was saved at "file://c:\Documents and Settings\Owner\My Documents\Visual Studio 2008\Projects\opengl_crap\opengl_crap\Debug\BuildLog.htm" 1>opengl_crap - 11 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== */


  • Validating a cascading DropDownList

    - by shruti
    i am working on MVC.Net. in that i have used cascading dropdownlist. I want to do validations for blank field. the view page coding is: Select Category: <%= Html.DropDownList("Makes", ViewData["Makes"] as SelectList,"Select Category")% Select Subcategory: <%= Html.CascadingDropDownList("Models", "Makes")% the code on controller: public ActionResult AddSubCategoryPage() { var makeList = new SelectList(entityObj.Category.ToList(), "Category_id", "Category_name"); ViewData["Makes"] = makeList; // Create Models view data var modelList = new CascadingSelectList(entityObj.Subcategory1.ToList(), "Category_id", "Subcategory_id", "Subcategory_name"); ViewData["Models"] = modelList; return View("AddSubCategoryPage"); } and for that i have made one class: public static class JavaScriptExtensions { public static string CascadingDropDownList(this HtmlHelper helper, string name, string associatedDropDownList) { var sb = new StringBuilder(); // render select tag sb.AppendFormat("<select name='{0}' id='{0}'></select>", name); sb.AppendLine(); // render data array sb.AppendLine("<script type='text/javascript'>"); var data = (CascadingSelectList)helper.ViewDataContainer.ViewData[name]; var listItems = data.GetListItems(); var colArray = new List<string>(); foreach (var item in listItems) colArray.Add(String.Format("{{key:'{0}',value:'{1}',text:'{2}'}}", item.Key, item.Value, item.Text)); var jsArray = String.Join(",", colArray.ToArray()); sb.AppendFormat("$get('{0}').allOptions=[{1}];", name, jsArray); sb.AppendLine(); sb.AppendFormat("$addHandler($get('{0}'), 'change', Function.createCallback(bindDropDownList, $get('{1}')));", associatedDropDownList, name); sb.AppendLine(); sb.AppendLine("</script>"); return sb.ToString(); } } public class CascadingSelectList { private IEnumerable _items; private string _dataKeyField; private string _dataValueField; private string _dataTextField; public CascadingSelectList(IEnumerable items, string dataKeyField, string dataValueField, string dataTextField) { _items = items; _dataKeyField = dataKeyField; _dataValueField = dataValueField; _dataTextField = dataTextField; } public List<CascadingListItem> GetListItems() { var listItems = new List<CascadingListItem>(); foreach (var item in _items) { var key = DataBinder.GetPropertyValue(item, _dataKeyField).ToString(); var value = DataBinder.GetPropertyValue(item, _dataValueField).ToString(); var text = DataBinder.GetPropertyValue(item, _dataTextField).ToString(); listItems.Add(new CascadingListItem(key, value, text)); } return listItems; } } public class CascadingListItem { public CascadingListItem(string key, string value, string text) { this.Key = key; this.Value = value; this.Text = text; } public string Key { get; set; } public string Value { get; set; } public string Text { get; set; } } but when i run the aaplication it gives me following error: Server Error in '/' Application. The parameters dictionary contains a null entry for parameter 'Models' of non-nullable type 'System.Int32' for method 'System.Web.Mvc.ActionResult AddSubCategoryPage(Int32, System.String, System.String)' in 'CMS.Controllers.HomeController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter. Parameter name: parameters . plz help me.
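    The exception at the bottom is thrown by model binding before the action ever runs: the posted form has an empty value for "Models", which cannot be bound to a non-nullable Int32 parameter. One way to keep the blank "Select" option and still validate it yourself is to make the parameter nullable and add a model-state error when it is missing. A minimal sketch, assuming a POST overload of the action exists; apart from "Makes" and "Models" (which match the dropdown names) the parameter names are guesses:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult AddSubCategoryPage(int? Models, string Makes, string subcategoryName)
        {
            if (!Models.HasValue)
            {
                ModelState.AddModelError("Models", "Please select a subcategory.");

                // Re-populate the dropdown data before redisplaying the view.
                ViewData["Makes"] = new SelectList(entityObj.Category.ToList(), "Category_id", "Category_name");
                ViewData["Models"] = new CascadingSelectList(entityObj.Subcategory1.ToList(),
                    "Category_id", "Subcategory_id", "Subcategory_name");
                return View("AddSubCategoryPage");
            }

            // ... save the subcategory, then redirect ...
            return RedirectToAction("AddSubCategoryPage");
        }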


  • Why isn't my algorithm for finding the biggest and smallest inputs working?

    - by Matt Ellen
    I have started a new job, and with it comes a new language: Ironpython. Thankfully a good language :D Before starting I got to grips with Python on the whole, but that was only a week's worth of learning. Now I'm writing actual code. I've been charged with writing an algorithm that finds the best input parameter to collect data with. The basic algorithm is (as I've been instructed): Set the input parameter to a good guess Start collecting data When data is available stop collecting find the highest point If the point before this (i.e. for the previous parameter value) was higher and the point before that was lower then we've found the max otherwise the input parameter is increased by the initial guess. goto 2 If the max is found then the min needs to be found. To do this the algorithm carries on increasing the input, but by 1/10 of the max, until the current point is greater than the previous point and the point before that is also greater. Once the min is found then the algorithm stops. Currently I have a simplified data generator outputting the sin of the input, so that I know that the min value should be PI and the max value should be PI/2 The main Python code looks like this (don't worry, this is just for my edification, I don't write real code like this): import sys sys.path.append(r"F:\Programming Source\C#\PythonHelp\PythonHelp\bin\Debug") import clr clr.AddReferenceToFile("PythonHelpClasses.dll") import PythonHelpClasses from PHCStruct import Helper from System import Math helper = Helper() def run(): b = PythonHelpClasses.Executor() a = PythonHelpClasses.HasAnEvent() b.Input = 0.0 helper.__init__() def AnEventHandler(e): b.Stop() h = helper h.lastLastVal, h.lastVal, h.currentVal = h.lastVal, h.currentVal, e.Number if h.lastLastVal < h.lastVal and h.currentVal < h.lastVal and h.NotPast90: h.NotPast90 = False h.bestInput = h.lastInput inputInc = 0.0 if h.NotPast90: inputInc = Math.PI/10.0 else: inputInc = h.bestInput/10.0 if h.lastLastVal > h.lastVal and h.currentVal > h.lastVal and h.NotPast180: h.NotPast180 = False if h.NotPast180: h.lastInput, b.Input = b.Input, b.Input + inputInc b.Start(a) else: print "Best input:", h.bestInput print "Last input:", h.lastInput b.Stop() a.AnEvent += AnEventHandler b.Start(a) PHCStruct.py: class Helper(): def __init__(self): self.currentVal = 0 self.lastVal = 0 self.lastLastVal = 0 self.NotPast90 = True self.NotPast180 = True self.bestInput = 0 self.lastInput = 0 PythonHelpClasses has two small classes I wrote in C# before I realised how to do it in Ironpython. Executor runs a delegate asynchronously while it's running member is true. The important code: public void Start(HasAnEvent hae) { running = true; RunDelegate r = new RunDelegate(hae.UpdateNumber); AsyncCallback ac = new AsyncCallback(UpdateDone); IAsyncResult ar = r.BeginInvoke(Input, ac, null); } public void Stop() { running = false; } public void UpdateDone(IAsyncResult ar) { RunDelegate r = (RunDelegate)((AsyncResult)ar).AsyncDelegate; r.EndInvoke(ar); if (running) { AsyncCallback ac = new AsyncCallback(UpdateDone); IAsyncResult ar2 = r.BeginInvoke(Input, ac, null); } } HasAnEvent has a function that generates the sin of its input and fires an event with that result as its argument. i.e.: public void UpdateNumber(double val) { AnEventArgs e = new AnEventArgs(Math.Sin(val)); System.Threading.Thread.Sleep(1000); if (null != AnEvent) { AnEvent(e); } } The sleep is in there just to slow things down a bit. 
The problem I am getting is that the algorithm is not coming up with the best input being PI/2 and the final input being PI, but I can't see why. Also the best and final inputs are different each time I run the programme. Can anyone see why? Also when the algorithm terminates the best and final inputs are printed to the screen multiple times, not just once. Can someone explain why?


  • Entity with Guid ID is not inserted by NHibernate

    - by DanK
    I am experimenting with NHibernate (version 2.1.0.4000) with Fluent NHibernate Automapping. My test set of entities persists fine with default integer IDs I am now trying to use Guid IDs with the entities. Unfortunately changing the Id property to a Guid seems to stop NHibernate inserting objects. Here is the entity class: public class User { public virtual int Id { get; private set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual string Email { get; set; } public virtual string Password { get; set; } public virtual List<UserGroup> Groups { get; set; } } And here is the Fluent NHibernate configuration I am using: SessionFactory = Fluently.Configure() //.Database(SQLiteConfiguration.Standard.InMemory) .Database(MsSqlConfiguration.MsSql2008.ConnectionString(@"Data Source=.\SQLEXPRESS;Initial Catalog=NHibernateTest;Uid=NHibernateTest;Password=password").ShowSql()) .Mappings(m => m.AutoMappings.Add( AutoMap.AssemblyOf<TestEntities.User>() .UseOverridesFromAssemblyOf<UserGroupMappingOverride>())) .ExposeConfiguration(x => { x.SetProperty("current_session_context_class","web"); }) .ExposeConfiguration(Cfg => _configuration = Cfg) .BuildSessionFactory(); Here is the log output when using an integer ID: 16:23:14.287 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:23:14.291 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#<null>] 16:23:14.299 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing insertions 16:23:14.309 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing identity-insert immediately 16:23:14.313 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Inserting entity: TestEntities.User (native id) 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Opened new IDbCommand, open IDbCommands: 1 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Building an IDbCommand object for the SqlString: INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY() 16:23:14.322 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Dehydrating entity: [TestEntities.User#<null>] 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 0 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 1 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding 'ertr' to parameter: 2 16:23:14.324 [4] DEBUG NHibernate.Type.StringType - binding 'tretret' to parameter: 3 16:23:14.329 [4] DEBUG NHibernate.SQL - INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (@p0, @p1, @p2, @p3); select SCOPE_IDENTITY();@p0 = NULL, @p1 = NULL, @p2 = 'ertr', @p3 = 'tretret' and here is the output when using a Guid: 16:50:14.008 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:50:14.012 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - generated identifier: d74e1bd3-1c01-46c8-996c-9d370115780d, using strategy: NHibernate.Id.GuidCombGenerator 16:50:14.013 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#d74e1bd3-1c01-46c8-996c-9d370115780d] This is where it silently fails, with no exception thrown or further log entries. It looks like it is generating the Guid ID correctly for the new object, but is just not getting any further than that. Is there something I need to do differently in order to use Guid IDs? 
Thanks, Dan.
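    Two things are worth checking here. First, the entity class shown still declares "public virtual int Id", so the property type itself needs to become Guid. Second, with an identity/native generator NHibernate has to INSERT immediately to obtain the id, but with a client-side generator such as guid.comb the INSERT is normally deferred until the session is flushed or the transaction committed, which can look like a silent failure if nothing ever flushes. A minimal sketch of an explicit automapping override -- the namespaces are for Fluent NHibernate 1.x and may differ in other versions:

        using FluentNHibernate.Automapping;
        using FluentNHibernate.Automapping.Alterations;

        public class UserMappingOverride : IAutoMappingOverride<User>
        {
            public void Override(AutoMapping<User> mapping)
            {
                // Assumes User.Id has been changed to: public virtual Guid Id { get; private set; }
                mapping.Id(u => u.Id).GeneratedBy.GuidComb();
            }
        }

    The saving code would then need something like session.Save(user) followed by session.Flush(), or a committed transaction, for the INSERT to actually be issued.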


  • Disabling checkboxes based on the selection of another checkbox in jQuery

    - by Prady
    Hi, I want to disable a set of checkbox based on selection of one textbox and enable the disabled ones if the checkbox is unchecked. In the code below. If someone checks the checkbox for project cost under change this parameter then checkbox for project cost under Generate simulated value for this param should be disabled and all the checkboxes under change this parameter should be disabled except for checked one. Similarly this should be done each parameter like Project cost,avg hours,Project completion date, hourly rate etc. One way i could think of was of on the click function disable each checkbox by the id. Is there a better way of doing it? <table> <tr> <td></td> <td></td> <td>Change this parameter</td> <td>Generate simulated value for this param</td> </tr> <tr> <td>Project cost</td> <td><input type ="text" id ="pc"/></td> <td><input class="change" type="checkbox" name="chkBox" id="chkBox"></input></td> <td><input class="sim" type="checkbox" name="chkBox1" id="chkBox1"></input></td> </tr> <tr> <td>Avg hours</td> <td><input type ="text" id ="avghrs"/></td> <td><input class="change" type="checkbox" name="chkBoxa" id="chkBoxa"></input></td> <td><input class="sim" type="checkbox" name="chkBox1a" id="chkBox1a"></input></td> </tr> <tr> <td>Project completion date</td> <td><input type ="text" id ="cd"/></td> <td><input class="change" type="checkbox" name="chkBoxb" id="chkBoxb"></input></td> <td><input class="sim" type="checkbox" name="chkBox1b" id="chkBox1b"></input></td> </tr> <tr> <td>Hourly rate</td> <td><input type ="text" id ="hr"/></td> <td><input class="change" type="checkbox" name="chkBoxc" id="chkBoxc"></input></td> <td><input class="sim" type="checkbox" name="chkBox1c" id="chkBox1c"></input></td> </tr> </table> Thanks Prady


  • Insert/update/delete with XML in .NET 3.5

    - by Radhi
    Hello guys, i have "UserProfile" in my site which we have stored in xml fromat of xml is as below <UserProfile xmlns=""> <BasicInfo> <Title value="Basic Details" /> <Fields> <UserId title="UserId" display="No" right="Public" value="12" /> <EmailAddress title="Email Address" display="Yes" right="Public" value="[email protected]" /> <FirstName title="First Name" display="Yes" right="Public" value="Radhi" /> <LastName title="Last Name" display="Yes" right="Public" value="Patel" /> <DisplayName title="Display Name" display="Yes" right="Public" value="Radhi Patel" /> <RegistrationStatusId title="RegistrationStatusId" display="No" right="Public" value="10" /> <RegistrationStatus title="Registration Status" display="Yes" right="Private" value="Registered" /> <CountryName title="Country" display="Yes" right="Public" value="India" /> <StateName title="State" display="Yes" right="Public" value="Maharashtra" /> <CityName title="City" display="Yes" right="Public" value="Mumbai" /> <Gender title="Gender" display="Yes" right="Public" value="FeMale" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="0" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Nov 27 2009 3:08PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Feb 18 2010 1:43PM " /> <LogInStatusId title="LogInStatusId" display="No" right="Public" value="1" /> <LogInStatus title="LogIn Status" display="Yes" right="Private" value="Free" /> <ProfileImagePath title="Profile Pic" display="No" right="Public" value="~/Images/96.jpg" /> </Fields> </BasicInfo> <PersonalInfo> <Title value="Personal Details" /> <Fields> <Nickname title="Nick Name" display="Yes" right="Public" value="Rahul" /> <NativeLocation title="Native" display="Yes" right="Public" value="Amreli" /> <DateofAnniversary title="Anniversary Dt." 
display="Yes" right="Public" value="12/29/2008" /> <BloodGroupId title="BloodGroupId" display="No" right="Public" value="25" /> <BloodGroupName title="Blood Group" display="Yes" right="Public" value="25" /> <MaritalStatusId title="MaritalStatusId" display="No" right="Public" value="34" /> <MaritalStatusName title="Marital status" display="Yes" right="Private" value="34" /> <DateofDeath title="Death dt" display="Yes" right="Private" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 6 2010 2:59PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="3/10/2010 5:34:14 PM" /> </Fields> </PersonalInfo> <FamilyInfo> <Title value="Family Details" /> <Fields> <GallantryHistory title="Gallantry History" display="Yes" right="Public" value="Gallantry history rahul" /> <Ethinicity title="Ethinicity" display="Yes" right="Public" value="ethnicity rahul" /> <KulDev title="KulDev" display="Yes" right="Public" value="Krishna" /> <KulDevi title="KulDevi" display="Yes" right="Public" value="Khodiyar" /> <Caste title="Caste" display="Yes" right="Private" value="Brhamin" /> <SunSignId title="SunSignId" display="No" right="Public" value="20" /> <SunSignName title="SunSignName" display="Yes" right="Public" value="Scorpio" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Dec 29 2009 4:59PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="3/10/2010 6:29:56 PM" /> </Fields> </FamilyInfo> <HobbyInfo> <Title value="Hobbies/Interests" /> <Fields> <AbountMe title="Abount Me" display="Yes" right="Public" value="Naughty.... " /> <Hobbies title="Hobbies" display="Yes" right="Public" value="Dance, Music, decoration, Shopping" /> <Food title="Food" display="Yes" right="Public" value="Maxican salsa, Pizza, Khoya kaju " /> <Movies title="Movies" display="Yes" right="Public" value="day after tommorrow. wake up sid. avatar" /> <Music title="Music" display="Yes" right="Public" value="Chu kar mere man ko... wake up sid songs, slow music, apgk songs" /> <TVShows title="TV Shows" display="Yes" right="Public" value="business bazzigar, hanah montana" /> <Books title="Books" display="Yes" right="Public" value="mystry novels" /> <Sports title="Sports" display="Yes" right="Public" value="Badminton" /> <Will title="Will" display="Yes" right="Public" value="do photography, to have my own super home... and i can decorate it like anything..." /> <FavouriteQuotes title="Favourite Quotes" display="Yes" right="Public" value="Live like a king nothing lasts forever. 
not even your troubles smooth sea do not makes skillfull sailors" /> <CremationPrefernces title="Cremation Prefernces" display="Yes" right="Public" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Feb 24 2010 2:13PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Mar 2 2010 4:34PM " /> </Fields> </HobbyInfo> <PermenantAddr> <Title value="Permenant Address" /> <Fields> <Address title="Address" display="Yes" right="Public" value="Test Entry" /> <CityId title="CityId" display="No" right="Public" value="93" /> <CityName title="City" display="Yes" right="Public" value="Chennai" /> <StateId title="StateId" display="No" right="Public" value="89" /> <StateName title="State" display="Yes" right="Public" value="Tamil Nadu" /> <CountryId title="CountryId" display="No" right="Public" value="108" /> <CountryName title="Country" display="Yes" right="Public" value="India" /> <ZipCode title="ZipCode" display="Yes" right="Private" value="360019" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 6 2010 1:29PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="0" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value=" " /> </Fields> </PermenantAddr> <PresentAddr> <Title value="Present Address" /> <Fields> <Address title="Ethinicity" display="Yes" right="Public" value="" /> <CityId title="CityId" display="No" right="Public" value="" /> <CityName title="City" display="Yes" right="Public" value="" /> <StateId title="StateId" display="No" right="Public" value="" /> <StateName title="State" display="Yes" right="Public" value="" /> <CountryId title="CountryId" display="No" right="Public" value="" /> <CountryName title="Country" display="Yes" right="Public" value="" /> <ZipCode title="ZipCode" display="Yes" right="Public" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="" /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="" /> </Fields> </PresentAddr> <ContactInfo> <Title value="Contact Details" /> <Fields> <DayPhoneNo title="Day Phone" display="Yes" right="Public" value="" /> <NightPhoneNo title="Night Phone" display="Yes" right="Public" value="" /> <MobileNo title="Mobile No" display="Yes" right="Private" value="" /> <FaxNo title="Fax No" display="Yes" right="CUG" value="" /> <CreatedBy title="CreatedBy" display="No" right="Public" value="12" /> <CreatedOn title="CreatedOn" display="No" right="Public" value="Jan 5 2010 12:37PM " /> <ModifiedBy title="ModifiedBy" display="No" right="Public" value="12" /> <ModifiedOn title="ModifiedOn" display="No" right="Public" value="Feb 17 2010 1:37PM " /> </Fields> </ContactInfo> <EmailInfo> <Title value="Alternate Email Addresses" /> <Fields /> </EmailInfo> <AcademicInfo> <Title value="Education Details" /> <Fields> <Record right="Public"> <Education title="Education" display="Yes" value="Full Attendance" /> <Institute title="Institute" display="Yes" value="Attendance" /> <PassingYear title="Passing Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" 
display="No" value="Dec 31 2009 12:41PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 12:41PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="D.C.E." /> <Institute title="Institute" display="Yes" value="G.P.G" /> <PassingYear title="Passing Year" display="Yes" value="2005" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 12:45PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 12:45PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="MCSE" /> <Institute title="Institute" display="Yes" value="MCSE" /> <PassingYear title="Passing Year" display="Yes" value="2009" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:12PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Mar 2 2010 4:33PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="H.S.C." /> <Institute title="Institute" display="Yes" value="G.H.S.E.B." /> <PassingYear title="Passing Year" display="Yes" value="2002" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:17PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 6:17PM " /> </Record> <Record right="Public"> <Education title="Education" display="Yes" value="S.S.C." /> <Institute title="Institute" display="Yes" value="G.S.E.B." 
/> <PassingYear title="Passing Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 6:17PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 6:17PM " /> </Record> </Fields> </AcademicInfo> <AchievementInfo> <Title value="Achievement Details" /> <Fields> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry" /> <Tournament title="Tournament" display="Yes" value="Test Entry" /> <AwardDescription title="Description" display="Yes" value="Test Entry" /> <AwardYear title="Award Year" display="Yes" value="2002" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 3:51PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 3:51PM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry" /> <Tournament title="Tournament" display="Yes" value="Test Entry" /> <AwardDescription title="Description" display="Yes" value="Test Entry" /> <AwardYear title="Award Year" display="Yes" value="2005" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 8 2010 10:19AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 8 2010 10:19AM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry3" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry3" /> <Tournament title="Tournament" display="Yes" value="Test Entry3" /> <AwardDescription title="Description" display="Yes" value="Test Entry3" /> <AwardYear title="Award Year" display="Yes" value="2007" /> <IsActive title="IsActive" display="No" value="true" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 11:47AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 11:47AM " /> </Record> <Record right="Public"> <Awards title="Award" display="Yes" value="Test Entry4" /> <FieldOfAward title="Field Of Award" display="Yes" value="Test Entry4" /> <Tournament title="Tournament" display="Yes" value="Test Entry4" /> <AwardDescription title="Description" display="Yes" value="Test Entry3" /> <AwardYear title="Award Year" display="Yes" value="2000" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Dec 31 2009 11:47AM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Dec 31 2009 11:47AM " /> </Record> </Fields> </AchievementInfo> <ProfessionalInfo> <Title value="Professional Details" /> <Fields> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" 
value="Test Entry" /> <Duration title="Duration" display="Yes" value="26" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" value="Test Entry" /> <Duration title="Duration" display="Yes" value="10" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> <Record right="Public"> <Occupation title="Occupation" display="Yes" value="Test Entry" /> <Organization title="Organization" display="Yes" value="Test Entry" /> <ProjectsDescription title="Description" display="Yes" value="Test Entry" /> <Duration title="Duration" display="Yes" value="15" /> <IsActive title="IsActive" display="No" value="false" /> <CreatedBy title="CreatedBy" display="No" value="12" /> <CreatedOn title="CreatedOn" display="No" value="Jan 4 2010 3:01PM " /> <ModifiedBy title="ModifiedBy" display="No" value="12" /> <ModifiedOn title="ModifiedOn" display="No" value="Jan 4 2010 3:01PM " /> </Record> </Fields> </ProfessionalInfo> </UserProfile> now for tags like PersonalInfo,contactInfo,Address there will be only one record, but for tags like "OtherInfo","academicInfo","ProfessionalInfo" there will be multiple records so in xml there are tags for that. 
now to edit tags having one record only for one user i did coding like: private void FillFamilyInfoControls(XElement rootElement) { ddlSunsign.DataBind(); XElement parentElement; string xPathQuery = "FamilyInfo/Fields"; parentElement = rootElement.XPathSelectElement(xPathQuery); txtGallantryHistory.Text = parentElement.Element("GallantryHistory").Attribute("value").Value; txtEthinicity.Text = parentElement.Element("Ethinicity").Attribute("value").Value; txtKulDev.Text = parentElement.Element("KulDev").Attribute("value").Value; txtKulDevi.Text = parentElement.Element("KulDevi").Attribute("value").Value; txtCaste.Text = parentElement.Element("Caste").Attribute("value").Value; ddlSunsign.SelectedValue = parentElement.Element("SunSignId").Attribute("value").Value; ddlSunsign.SelectedItem.Text = parentElement.Element("SunSignName").Attribute("value").Value; } private XElement UpdateFamilyInfoXML(XElement rootElement) { XElement parentElement; string xPathQuery = "FamilyInfo/Fields"; parentElement = rootElement.XPathSelectElement(xPathQuery); parentElement.Element("GallantryHistory").Attribute("value").Value = txtGallantryHistory.Text; parentElement.Element("Ethinicity").Attribute("value").Value = txtEthinicity.Text; parentElement.Element("KulDev").Attribute("value").Value = txtKulDev.Text; parentElement.Element("KulDevi").Attribute("value").Value = txtKulDevi.Text; parentElement.Element("Caste").Attribute("value").Value = txtCaste.Text; parentElement.Element("SunSignId").Attribute("value").Value = ddlSunsign.SelectedItem.Value; parentElement.Element("SunSignName").Attribute("value").Value = ddlSunsign.SelectedItem.Text; parentElement.Element("ModifiedBy").Attribute("value").Value = UMSession.CurrentLoggedInUser.UserId.ToString(); parentElement.Element("ModifiedOn").Attribute("value").Value = System.DateTime.Now.ToString(); return rootElement; //rootElement.XPathSelectElement(xPathQuery) = parentElement; } these 2 functions i have used to update xml and to get data from xml and fill into controls. but for the tags where there are multiple records... i am not able to find any solution /control using which i an do it easily. coz my field's value is in attribute named "Value" when in the examples i saw its between opening and closing tag.. i have 2 methods to do so. 1 . i make one page to edit a single record and pass id of record to that page on editing. open the page in iframe on same page to edit and in page_load get data from database for id passed and fill it in control. 2 . i store xml on server as physical file and use it in page i opened in iframe to edit the record. so i am confused... can anybody please guide me that what shpould i do to let user provide interface to edit this xml data/user profile ?
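    For the sections that repeat (AcademicInfo, AchievementInfo, ProfessionalInfo), the same LINQ to XML approach used above extends naturally: select the Record elements under the section and read or write their attributes. A minimal sketch in the style of the two methods above -- the element and attribute names come from the posted XML, but the controls (a repeater row, txtInstitute, txtPassingYear) and the way a single record is identified are assumptions:

        // Requires: using System.Linq; using System.Xml.Linq; using System.Xml.XPath;
        private void FillAcademicInfoControls(XElement rootElement)
        {
            foreach (XElement record in rootElement.XPathSelectElements("AcademicInfo/Fields/Record"))
            {
                string education   = record.Element("Education").Attribute("value").Value;
                string institute   = record.Element("Institute").Attribute("value").Value;
                string passingYear = record.Element("PassingYear").Attribute("value").Value;
                // bind these values to a row of a repeater/grid here
            }
        }

        private void UpdateAcademicRecord(XElement rootElement, string educationKey)
        {
            // educationKey is just one possible way to locate the record to edit (an assumption).
            XElement record = rootElement
                .XPathSelectElements("AcademicInfo/Fields/Record")
                .FirstOrDefault(r => r.Element("Education").Attribute("value").Value == educationKey);
            if (record == null) return;

            record.Element("Institute").Attribute("value").Value   = txtInstitute.Text;    // assumed control
            record.Element("PassingYear").Attribute("value").Value = txtPassingYear.Text;  // assumed control
            record.Element("ModifiedOn").Attribute("value").Value  = DateTime.Now.ToString();
        }

    Because the values live in attributes rather than in element text, binding a whole Record to a standard data-bound control is awkward; projecting each Record into an anonymous type or a small DTO first and binding that is one way around it, so neither an iframe edit page nor a physical file on the server is strictly required.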


  • Installing Matlab on Ubuntu 12.04 32-bit

    - by Amir
    I have been trying to install Matlab2012a, matlab2012b and Matlab2013a for like 4 hours, triedto fix my prospective errors regarding the posts 2012a, Ubuntu-Matlab Documentation and Matlab-central. But either i am recieving an error while the installation GUI pops-up with the error : The application encountered an unexpected error and needs to close. You may want to try re-installing your product(s). More information can be found at /tmp/mathworks_amir.log On the other hand for 2012a. and the errors for 2012b and 2013a is : `Installing ... Exception in thread "main" com.google.inject.ProvisionException: Guice provision errors: 1) Error in custom provider, java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at com.mathworks.wizard.WizardModule.provideDisplayProperties(WizardModule.java:60) while locating com.mathworks.instutil.DisplayProperties at com.mathworks.wizard.ui.components.ComponentsModule.providePaintStrategy(ComponentsModule.java:76) while locating com.mathworks.wizard.ui.components.PaintStrategy for parameter 4 at com.mathworks.wizard.ui.components.SwingComponentFactoryImpl.(SwingComponentFactoryImpl.java:110) while locating com.mathworks.wizard.ui.components.SwingComponentFactoryImpl while locating com.mathworks.wizard.ui.components.SwingComponentFactory for parameter 1 at com.mathworks.wizard.ui.WizardUIImpl.(WizardUIImpl.java:65) while locating com.mathworks.wizard.ui.WizardUIImpl while locating com.mathworks.wizard.ui.WizardUI annotated with @com.google.inject.name.Named(value=BaseWizardUI) at com.mathworks.wizard.ui.UIModule.provideWizardUI(UIModule.java:50) while locating com.mathworks.wizard.ui.WizardUI for parameter 0 at com.mathworks.wizard.ExceptionHandlerImpl.(ExceptionHandlerImpl.java:22) while locating com.mathworks.wizard.ExceptionHandlerImpl while locating com.mathworks.wizard.ExceptionHandler 1 error at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:767) at com.google.inject.InjectorImpl.getInstance(InjectorImpl.java:793) at com.mathworks.wizard.WizardLauncher.startWizard(WizardLauncher.java:160) at com.mathworks.wizard.WizardLauncher.start(WizardLauncher.java:75) at com.mathworks.wizard.AbstractLauncher.launch(AbstractLauncher.java:27) at com.mathworks.wizard.AbstractLauncher.launchStandalone(AbstractLauncher.java:18) at com.mathworks.professionalinstaller.Launcher.main(Launcher.java:21) Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:106) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at 
com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:804) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) ... 6 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:101) ... 
54 more Caused by: com.mathworks.instutil.JNIException: java.lang.UnsatisfiedLinkError: Can't load library: /tmp/mathworks_7417/bin/glnxa64/libinstutil.so at com.mathworks.instutil.NativeUtility.loadNativeLibrary(NativeUtility.java:39) at com.mathworks.instutil.NativeUtility.(NativeUtility.java:24) at com.mathworks.instutil.DisplayPropertiesImpl.(DisplayPropertiesImpl.java:10) at com.mathworks.wizard.WizardModule.provideDisplayProperties(WizardModule.java:67) ... 59 more Caused by: java.lang.UnsatisfiedLinkError: Can't load library: /tmp/mathworks_7417/bin/glnxa64/libinstutil.so at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1842) at java.lang.Runtime.load0(Runtime.java:795) at java.lang.System.load(System.java:1061) at com.mathworks.instutil.NativeUtility.loadNativeLibrary(NativeUtility.java:37) ... 62 more Finished ` I have tried to 1- re-install java run-time 6 and then 7. 2- pass the java-path to the install with : -javadir 3- use the force to install on 32 bits as : sh install -glnx86 -v -javadir /usr/lib/jvm/java-7-openjdk-i386/jre But it seems none of them have worked so far. any ideas ??


  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 2 Adding the ReportViewer control and filter drop downs. Open the source code for index.aspx and add a ScriptManager control. This control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers. Add the ReportViewer control. The markup after these steps is shown below. <div> <asp:ScriptManager ID="smScriptManager" runat="server"> </asp:ScriptManager> <div id="searchFilter"> Filter by: Category : <asp:DropDownList ID="ddlCategories" runat="server" /> and Supplier : <asp:DropDownList ID="ddlSuppliers" runat="server" /> </div> <rsweb:ReportViewer ID="rvProducts" runat="server"> </rsweb:ReportViewer> </div> The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database. Changing the selection in the drop downs will cause the report to be filtered by the selections in the dropdowns. You will see how to do this in the next steps.   Attaching the RDLC to the ReportViewer control by clicking on the top right of the control, going to Report Viewer tasks and selecting Products.rdlc.   Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view. Defining the data sources. We will now define the Data Source used to populate the report. Go back to the “ReportViewer Tasks” and select “Choose Data Sources” Select a “New data source..” Select “Object” and name your Data Source ID “odsProducts”   In the next screen, choose “ProductRepository” as your business object. Choose “GetProductsProjected” in the next screen.   The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier. Set the parameter source to be of type “Control” and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this: We are now going to define the data source for our drop downs. Select the ddlCategory drop down and pick “Choose Data Source”. Pick “Object” and give it an id “odsCategories”   In the next screen, choose “ProductRepository” Select the GetCategories() method in the next screen.   Select “CategoryName” and “CategoryID” in the next screen. We are done defining the data source for the Category drop down. Perform the same steps for the Suppliers drop down.   Select each dropdown and set the AppendDataBoundItems to true and AutoPostback to true.     The AppendDataBoundItems is needed because we are going to insert an “All“ list item with a value of empty. Go to each drop down and add this list item markup as shown below> Finally, double click on each drop down in the designer and add the following code in the code behind. This along with the “Autopostback= true” attribute refreshes the report anytime a drop down is changed. protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e) { rvProducts.LocalReport.Refresh(); }   protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e) { rvProducts.LocalReport.Refresh(); } Compile your report and run the page. You should see the report rendered. Note that the tool bar in the ReportViewer control gives you a couple of options including the ability to export the data to Excel, PDF or word.   
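
    The "All" list item markup that the step above refers to is not shown in the text (it was presumably an image in the original post); for each dropdown it would look something like the line below, which is also why AppendDataBoundItems is set to true:

        <asp:ListItem Text="All" Value="" />
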
Conclusion Through this three part series, we did the following: Created a data layer for use by our RDLC. Created an RDLC using the report wizard and define a dataset for the report. Used the report design surface to design our report including adding a chart. Used the ReportViewer control to attach the RDLC. Connected our ReportWiewer to a data source and take parameter values from the drop downlists. Used AutoPostBack to refresh the reports when the dropdown selection was changed. RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages and its best to compare them and choose the right one for your requirements. Download VS2010 RTM Sample project NorthwindReports.zip   Alfred Borden: Are you watching closely?


  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: Blocking Bad plan in cache Outdated statistics Hardware bottleneck To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocking <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to rollback, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out of date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCACHE which drops the procedure cache.  
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.

    Read the article

  • It’s ok to throw System.Exception…

    - by Chris Skardon
    No. No it’s not. It’s not just me saying that, it’s the Microsoft guidelines: http://msdn.microsoft.com/en-us/library/ms229007.aspx  Do not throw System.Exception or System.SystemException. Also – just as important: Do not catch System.Exception or System.SystemException in framework code, unless you intend to re-throw. Throwing: Always, always try to pick the most specific exception type you can: if the parameter you have received in your method is null, throw an ArgumentNullException; value received greater than expected? ArgumentOutOfRangeException. For example: public void ArgChecker(int theInt, string theString) { if (theInt < 0) throw new ArgumentOutOfRangeException("theInt", theInt, "theInt needs to be greater than zero."); if (theString == null) throw new ArgumentNullException("theString"); if (theString.Length == 0) throw new ArgumentException("theString needs to have content.", "theString"); } Why do we want to do this? It’s a lot of extra code when compared with a simple: public void ArgChecker(int theInt, string theString) { if (theInt < 0 || string.IsNullOrWhiteSpace(theString)) throw new Exception("The parameters were invalid."); } It all comes down to a couple of things: the catching of the exceptions, and the information you are passing back to the calling code. Catching: Ok, so let’s go with introduction-level exception handling, taught by many a university: you do all your work in a try clause, and catch anything wrong in the catch clause. So this tends to give us code like this: try { /* All the shizzle */ } catch { /* Deal with errors */ } But of course, we can improve on that by catching the exception so we can report on it: try { } catch(Exception ex) { /* Log that 'ex' occurred? */ } Now we’re at the point where people tend to go: “Brilliant, I’ve got exception handling nailed, what next???” and code gets littered with the catch(Exception ex) nastiness. Why is it nasty? Let’s imagine for a moment our code is throwing an ArgumentNullException which we’re catching in the catch block and logging. Ok, the log entry has been made, so we can debug the code, right? We’ve got all the info… What about an OutOfMemoryException – what can we do with that? That’s right, not a lot; chances are you can’t even log it (you are out of memory after all), but you’ve caught it – and as such – have hidden it. So, as part of this, there are two things you can do. One is the rethrow method: try { /* code */ } catch (Exception ex) { //Log throw; } Note, it’s not catch (Exception ex) { throw ex; } as that will wipe all your important stack trace information. This does get your exception to continue, and is the only reason you would catch Exception (anywhere other than a global catch-all) in your code. The other preferred method is to catch the exceptions you can deal with. It may not matter that the string I’m passing in is null, and I can cope with it like this: try{ DoSomething(myString); } catch(ArgumentNullException){} And that’s fine; it means that any exceptions I can’t deal with (OutOfMemory for example) will be propagated out to other code that can deal with it. Of course, this is horribly messy – no one wants try / catch blocks everywhere – and that’s why Microsoft added the ‘Try’ methods to the framework, and it’s a strategy we should continue. 
If I try: int i = Int32.Parse("one"); I will get a FormatException (note that a direct cast, int i = (int)"one";, won’t even compile), which means I need the try / catch block, but I could mitigate this using the ‘TryParse’ method: int i; if(!Int32.TryParse("one", out i)) return; Similarly, in the ‘DoSomething’ example, it might be beneficial to have a ‘TryDoSomething’ that returns a boolean value indicating whether it is safe to continue – see the sketch below. Obviously this isn’t practical in every case, so use the ol’ common sense approach. Onwards: “Yer, thanks Chris, I’m looking forward to writing tonnes of new code.” Fear not, that is where helpers come into it… (but that’s the next post)
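To make the ‘Try’ pattern concrete, here is a minimal sketch of what a ‘TryDoSomething’ wrapper could look like. The DoSomething method below is a hypothetical stand-in (the original post never shows its body), and the choice of which exceptions to swallow is an assumption for illustration only:

using System;

public static class DoSomethingHelpers
{
    // Hypothetical worker method: throws on invalid input (assumption for this sketch).
    public static void DoSomething(string text)
    {
        if (text == null) throw new ArgumentNullException("text");
        if (text.Length == 0) throw new ArgumentException("text needs to have content.", "text");
        Console.WriteLine(text.ToUpperInvariant());
    }

    // Try-pattern wrapper: returns false for the *expected* argument exceptions,
    // while anything else (OutOfMemoryException, etc.) still propagates to the caller.
    public static bool TryDoSomething(string text)
    {
        try
        {
            DoSomething(text);
            return true;
        }
        catch (ArgumentException) // ArgumentNullException derives from ArgumentException
        {
            return false;
        }
    }
}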

    Read the article

  • Getting a 'base' Domain from a Domain

    - by Rick Strahl
    Here's a simple one: how do you reliably get the base domain from a full domain name or URI? Specifically I've run into this scenario in a few recent applications when creating the Forms Auth cookie in my ASP.NET applications where I explicitly need to force the domain name to the common base domain. So, www.west-wind.com, store.west-wind.com, west-wind.com, dev.west-wind.com all should return west-wind.com. Here's the code where I need to use this type of logic for issuing an AuthTicket explicitly: private void IssueAuthTicket(UserState userState, bool rememberMe) { FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId, DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString()); string ticketString = FormsAuthentication.Encrypt(ticket); HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString); cookie.HttpOnly = true; if (rememberMe) cookie.Expires = DateTime.Now.AddDays(10); // write out a domain cookie cookie.Domain = Request.Url.GetBaseDomain(); HttpContext.Response.Cookies.Add(cookie); } Unfortunately there's no Uri.GetBaseDomain() method, as I was surprised to find out. So I ended up creating one: public static class NetworkUtils { /// <summary> /// Retrieves a base domain name from a full domain name. /// For example: www.west-wind.com produces west-wind.com /// </summary> /// <param name="domainName">Dns Domain name as a string</param> /// <returns></returns> public static string GetBaseDomain(string domainName) { var tokens = domainName.Split('.'); // only split 3 segments like www.west-wind.com if (tokens == null || tokens.Length != 3) return domainName; var tok = new List<string>(tokens); var remove = tokens.Length - 2; tok.RemoveRange(0, remove); return tok[0] + "." + tok[1]; } /// <summary> /// Returns the base domain from a domain name /// Example: http://www.west-wind.com returns west-wind.com /// </summary> /// <param name="uri"></param> /// <returns></returns> public static string GetBaseDomain(this Uri uri) { if (uri.HostNameType == UriHostNameType.Dns) return GetBaseDomain(uri.DnsSafeHost); return uri.Host; } } I've had a need for this so frequently it warranted a couple of helpers. The second helper is an extension method on the Uri class, which is what's used in the first code sample. This is the preferred way to call it, since the Uri class can differentiate between DNS names and IP addresses. If you use the first string-based version there's a little more guessing going on if a URL is an IP address. There are a couple of small twists in dealing with 'domain names'. When passing only a string, there's a possibility of not actually passing a domain name but ending up passing an IP address, so the code explicitly checks for three domain segments (can there be more than 3?). IPv4 addresses have 4 segments and IPv6 addresses have none, so they'll fall through. Then there are things like localhost or a NetBIOS machine name which also come back on URL strings, but also shouldn't be handled. 
Anyway, small thing but maybe somebody else will find this useful. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Networking.
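For reference, here's a quick usage sketch of the two helpers above (the sample URLs are just illustrations of the fall-through behaviour described in the post):

using System;

class BaseDomainDemo
{
    static void Main()
    {
        // Preferred: the Uri extension method, since Uri knows whether the host is a DNS name or an IP address
        var uri = new Uri("http://store.west-wind.com/products");
        Console.WriteLine(uri.GetBaseDomain());                              // west-wind.com

        // String-based version: only trims when there are exactly three segments
        Console.WriteLine(NetworkUtils.GetBaseDomain("dev.west-wind.com"));  // west-wind.com
        Console.WriteLine(NetworkUtils.GetBaseDomain("west-wind.com"));      // west-wind.com (2 segments - unchanged)
        Console.WriteLine(NetworkUtils.GetBaseDomain("127.0.0.1"));          // 127.0.0.1 (4 segments - unchanged)
    }
}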

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
    Dynamic resolution as well as named and optional arguments greatly improve the experience of interoperating with COM APIs such as Office Automation Primary Interop Assemblies (PIAs). But, in order to streamline COM Interop development even further, a few COM-specific features were also added to C# 4.0. Omitting ref Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows you to declare the method call passing the arguments by value and will automatically generate the necessary temporary variables to hold the values in order to pass them by reference, and will discard their values after the call returns. From the point of view of the programmer, the arguments are being passed by value. This method call: object fileName = "Test.docx"; object missing = Missing.Value; document.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); can now be written like this: document.SaveAs("Test.docx", Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value); And because all parameters that are receiving the Missing.Value value have that value as their default value, the declaration of the method call can even be reduced to this: document.SaveAs("Test.docx"); Dynamic Import Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned values to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer’s life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can now be easily accessed or assigned into a strongly typed variable without having to cast. Instead of this code: ((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!"; this code can now be used: excel.Cells[1, 1] = "Hello World!"; And instead of this: Excel.Range range = (Excel.Range)(excel.Cells[1, 1]); this can be used: Excel.Range range = excel.Cells[1, 1]; Indexed And Default Properties A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be possible if the COM interface is accessed dynamically, but will not be recognized by statically typed C# code. No PIAs – Type Equivalence And Type Embedding For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded. 
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases. 
For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';"   One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in the performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter optimizations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following: enter the following statement from a suitably privileged account: ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; or add the following to the database initialization file (xxxinit.ora): SPATIAL_VECTOR_ACCELERATION = TRUE; To set the value for the current session, enter the following statement from a suitably privileged account: ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)

    Read the article

  • Linq To SQL: Behaviour for table field which is NotNull and having Default value or binding

    - by kaushalparik27
    I found something interesting while wandering over the community which I would like to share. The post is wholly about this: the DBML does not consider a table field's "Default Value or Binding" setting when the field is NotNull. I mean, a field which cannot be null but has a default value set needs IsDbGenerated = true set explicitly in the DBML file. Consider this situation: there is a simple tblEmployee table with the below structure: The fields are simple. EmployeeID is a Primary Key with Identity Specification = True and Identity Seed = 1 to autogenerate a numeric value for this field. EmployeeName and EmailAddress are stored in the remaining two fields. And the last one is "DateAdded" with a DateTime datatype which doesn't allow NULL but has a Default Value/Binding of "GetDate()". That means if we don't pass any value to this field then SQL Server will insert the current date into the "DateAdded" field. So, I started with a new website, added a DBML file and dropped the said table onto it to generate the LINQ to SQL context class. Finally, I wrote a simple code snippet to insert data into the tblEmployee table; BUT I am not passing any value to the "DateAdded" field, because I am relying on SQL Server's "Default Value or Binding (GetDate())" setting for this field and expect that SQL Server will insert the current date.        using (TestDatabaseDataContext context = new TestDatabaseDataContext())        {            tblEmployee tblEmpObjet = new tblEmployee();            tblEmpObjet.EmployeeName = "KaushaL";            tblEmpObjet.EmployeeEmailAddress = "[email protected]";            context.tblEmployees.InsertOnSubmit(tblEmpObjet);            context.SubmitChanges();        } Here comes the twist, when the application gives me the below error:  This is something I was not expecting! The error clearly shows that LINQ is passing a NULL value to the "DateAdded" field, while according to my understanding it should respect SQL Server's "Default Value or Binding" setting for this field. A bit of googling and I found something very interesting related to this problem. When we make a field a Primary Key with the "Identity Specification" property set to true, DBML sets one important property, "IsDbGenerated=true", for this field. BUT, when we set the "Default Value or Binding" property for some field, we need to explicitly tell DBML/LINQ that this field has a default binding on the DB side that needs to be respected if we don't pass any value. So, the solution is: you need to explicitly set "IsDbGenerated=true" for such a field to tell LINQ that the field has a default value or binding on the SQL Server side, so it shouldn't worry if no value is passed for it. You can select the field and set this property from the Properties window in the DBML designer file or write the property in the DBML.designer.cs file directly. I have attached a working example with the required table script with this post here. I hope this will be helpful for someone hunting for the same. Happy Discovery!
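For context, the fix ultimately lands in the column mapping of the generated entity class. The snippet below is only a rough sketch of what the DateAdded property could look like once IsDbGenerated is set – the storage field name, the AutoSync/UpdateCheck choices and the omission of the usual property-changed notifications are simplifications, not the exact designer output:

// Inside the generated (or a hand-written) tblEmployee entity class
[global::System.Data.Linq.Mapping.Column(Storage = "_DateAdded",
    DbType = "DateTime NOT NULL",
    IsDbGenerated = true,                                   // DB supplies the value via the GetDate() default
    AutoSync = System.Data.Linq.Mapping.AutoSync.OnInsert,  // read the generated value back after insert
    UpdateCheck = System.Data.Linq.Mapping.UpdateCheck.Never)]
public System.DateTime DateAdded
{
    get { return this._DateAdded; }
    set { this._DateAdded = value; }
}

private System.DateTime _DateAdded;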

    Read the article

  • International Radio Operators Alphabet in F# & Silverlight – Part 2

    - by MarkPearl
    So the brunt of my my very complex F# code has been done. Now it’s just putting the Silverlight stuff in. The first thing I did was add a new project to my solution. I gave it a name and VS2010 did the rest of the magic in creating the .Web project etc. In this instance because I want to take the MVVM approach and make use of commanding I have decided to make the frontend a Silverlight4 project. I now need move my F# code into a proper Silverlight Library. Warning – when you create the Silverlight Library VS2010 will ask you whether you want it to be based on Silverlight3 or Silverlight4. I originally went for Silverlight4 only to discover when I tried to compile my solution that I was given an error… Error 12 F# runtime for Silverlight version v4.0 is not installed. Please go to http://go.microsoft.com/fwlink/?LinkId=177463 to download and install matching.. After asking around I discovered that the Silverlight4 F# runtime is not available yet. No problem, the suggestion was to change the F# Silverlight Library to a Silverlight3 project however when going to the properties of the project file – even though I changed it to Silverlight3, VS2010 did not like it and kept reverting it to a Silverlight4 project. After a few minutes of scratching my head I simply deleted Silverlight4 F# Library project and created a new F# Silverlight Library project in Silverlight3 and VS2010 was happy. Now that the project structure is set up, rest is fairly simple. You need to add the Silverlight Library as a reference to the C# Silverlight Front End. Then setup your views, since I was following the MVVM pattern I made a Views & ViewModel folder and set up the relevant View and ViewModels. The MainPageViewModel file looks as follows using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Collections.ObjectModel; namespace IROAFrontEnd.ViewModels { public class MainPageViewModel : ViewModelBase { private string _iroaString; private string _inputCharacters; public string InputCharacters { get { return _inputCharacters; } set { if (_inputCharacters != value) { _inputCharacters = value; OnPropertyChanged("InputCharacters"); } } } public string IROAString { get { return _iroaString; } set { if (_iroaString != value) { _iroaString = value; OnPropertyChanged("IROAString"); } } } public ICommand MySpecialCommand { get { return new MyCommand(this); } } public class MyCommand : ICommand { readonly MainPageViewModel _myViewModel; public MyCommand(MainPageViewModel myViewModel) { _myViewModel = myViewModel; } public event EventHandler CanExecuteChanged; public bool CanExecute(object parameter) { return true; } public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } } } } One of the features I like in Silverlight4 is the new commanding. You will notice in my I have put the code under the command execute to reference to my F# module. At the moment this could be cleaned up even more, but will suffice for now.. 
public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } I then needed to set the view up. If we have a look at the MainPageView.xaml the xaml code will look like the following…. Nothing to fancy, but battleship grey for now… take careful note of the binding of the command in the button to MySpecialCommand which was created in the ViewModel. <UserControl x:Class="IROAFrontEnd.Views.MainPageView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <TextBox Grid.Row="0" Text="{Binding InputCharacters, Mode=TwoWay}"/> <Button Grid.Row="1" Command="{Binding MySpecialCommand}"> <TextBlock Text="Generate"/> </Button> <TextBlock Grid.Row="2" Text="{Binding IROAString}"/> </Grid> </UserControl> Finally in the App.xaml.cs file we need to set the View and link it to the ViewModel. private void Application_Startup(object sender, StartupEventArgs e) { var myView = new MainPageView(); var myViewModel = new MainPageViewModel(); myView.DataContext = myViewModel; this.RootVisual = myView; }   Once this is done – hey presto – it worked. I typed in some “Test Input” and clicked the generate button and the correct Radio Operators Alphabet was generated. And that’s the end of my first very basic F# Silverlight application.

    Read the article

  • Having fun with Reflection

    - by Nick Harrison
    I was once asked in a technical interview what I could tell them about Reflection. My response, while a little tongue in cheek, was that "I can tell you it is one of my favorite topics to talk about." I did get a laugh out of that and it was a great ice breaker. Reflection may not be the answer for everything, but it often can be, or maybe even should be. I have posted in the past about my favorite CopyTo method. It can come in several forms and is often very useful. I explain it further and expand on the basic idea here. The basic idea is to allow reflection to loop through the properties of two objects and synchronize the ones that are in common. I love this approach for data binding and passing data across the layers in an application. Recently I have been working on a project leveraging Data Transfer Objects to pass data through WCF calls. We won't go into how the architecture got this way, but in essence there is a partial duplicate inheritance hierarchy where there is a related Domain Object for each Data Transfer Object. The matching objects do not share a common ancestor or common interface, but they will have the same properties in common. Bypassing the problems with this approach, let's talk about how Reflection and our friendly CopyTo could make the most of this bad situation without having to change too much. One of the problems is keeping the two sets of objects in synch. For this particular project, the DO has all of the functionality and the DTO is used to simply transfer data back and forth. Both sets of objects have parallel hierarchies with the same properties being defined at the corresponding levels. So we end up with BaseDO, BaseDTO, GenericDO, GenericDTO, ProcessAreaDO, ProcessAreaDTO, SpecializedProcessAreaDO, SpecializedProcessAreaDTO, TableDO, TableDTO, and so on. Without using Reflection and a CopyTo function, tremendous care and effort must be taken to keep the corresponding objects in synch. New properties can be added at any level in the inheritance and must be kept in synch at all subsequent layers. For this project we have come up with a clever approach of calling a base GetDto or UpdateDto, making sure that the same method at each level of inheritance is called. Each level is responsible for updating the properties at that level. This is a lot of work, and not keeping it in synch can create all manner of problems, some of which are very difficult to track down. The other problem is the type of code that these methods tend to wind up with. You end up with code like this: Transferable dto = new Transferable(); base.GetDto(dto); dto.OfficeCode = GetDtoNullSafe(officeCode); dto.AccessIndicator = GetDtoNullSafe(accessIndicator); dto.CaseStatus = GetDtoNullSafe(caseStatus); dto.CaseStatusReason = GetDtoNullSafe(caseStatusReason); dto.LevelOfService = GetDtoNullSafe(levelOfService); dto.ReferralComments = referralComments; dto.Designation = GetDtoNullSafe(designation); dto.IsGoodCauseClaimed = GetDtoNullSafe(isGoodCauseClaimed); dto.GoodCauseClaimDate = goodCauseClaimDate;       One obvious problem is that this is tedious code. It is error-prone code. Adding helper functions like GetDtoNullSafe helps out immensely, but there is still an easier way. We can bypass the tedious code, bypass the complex inheritance tricks, and reduce all of this to a single method in the base class. 
TransferObject dto = new TransferObject(); CopyTo (this, dto); return dto; In the case of this one project, such a change eliminated the need for 20% of the total code base and a whole class of unit test cases that made sure that all of the properties were in synch. The impact of such a change also needs to include the on going time savings and the improvements in quality that can arise from them.    Developers who are not worried about keeping the properties in synch across mirrored object hierarchies are freed to worry about more important things like implementing business requirements.
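    The post links to the author's full CopyTo write-up rather than showing the helper itself; as a rough idea of the shape of such a helper, here is a minimal reflection-based sketch (the name-and-type matching rules and the absence of null/collection handling are simplifying assumptions, not the author's actual implementation):

using System;
using System.Reflection;

public static class ObjectMapper
{
    // Copies the values of readable source properties to writable target properties
    // that share the same name and a compatible type.
    public static void CopyTo(object source, object target)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (target == null) throw new ArgumentNullException("target");

        PropertyInfo[] targetProperties =
            target.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance);

        foreach (PropertyInfo targetProperty in targetProperties)
        {
            if (!targetProperty.CanWrite) continue;
            if (targetProperty.GetIndexParameters().Length > 0) continue; // skip indexers

            PropertyInfo sourceProperty =
                source.GetType().GetProperty(targetProperty.Name, BindingFlags.Public | BindingFlags.Instance);

            if (sourceProperty == null || !sourceProperty.CanRead) continue;
            if (!targetProperty.PropertyType.IsAssignableFrom(sourceProperty.PropertyType)) continue;

            targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
        }
    }
}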

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
    I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planing an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely mis leading you I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a post card please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereby known as GIS. I use it a lot in our BIEE demos where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements, it then returned a URL to a cached copy of that map. Easy to then have BIP render that image. Thats still very doable. You need to install a couple of packages and then load the mapviewer java APIs into the database. Then you can write your function to the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call, this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind constructing the URL was pretty simple. I'm not going to get into the content of the URL too much, for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver', that was just a dummy value for testing at runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need as mapviewer has its own server path, in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL. 
A little concatenation and substringing later I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> Thats the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache, BIPs XSLT engine was attempting to process them rather than just pass them. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin"name="pXML">%3Cmap_request%20title=%22Hello%20World%22 %20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter but I felt that if I were going to do anything more serious than Hello World like plotting points of interest on the map. I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenated would be easier. Now I had the initial server string and the request all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted, it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures. In fact, almost the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1 URL, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for. Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter would have more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter because I pulled all of that together in a consistent way so that you don’t have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here’s an example of instantiating a TwitterContext now: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next: another constructor call, but with the SearchUrl parameter set to the new URL as follows: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/")) One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example: twitterCtx.SearchUrl = "https://api.twitter.com/1/"; var trends = (from trend in twitterCtx.Trends where trend.Type == TrendType.Daily && trend.Date == DateTime.Now.AddDays(-2).Date select trend) .ToList(); Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above. 
These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and  WordPress.  This capability lets you use LINQ to Twitter with any of these services.  This is a testament to the success of the Twitter API and it’s popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo
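    As a concrete sketch of the “dedicated context for Search” work-around mentioned above (the auth object is a placeholder for whatever authorization instance you already pass to TwitterContext, and the Trend query is the one from the post):

// One context pointed at the new v1 URL for Trend (and other) queries...
using (var trendCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))
// ...and a second one that keeps the old Search URL so Search queries still work.
using (var searchCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
{
    var trends = (from trend in trendCtx.Trends
                  where trend.Type == TrendType.Daily &&
                        trend.Date == DateTime.Now.AddDays(-2).Date
                  select trend)
                  .ToList();

    // Search queries would go through searchCtx here, exactly as before the URL change.
}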

    Read the article

  • Trigger Happy

    - by Tim Dexter
    It's been a while, I know; we'll say no more, OK? I'll just write… In the latest BIP 11.1.1.6 release and, if I'm really honest, the release before this (we'll call it dot 5 for brevity), the boys and gals in the engine room have been real busy enhancing BIP with some new functionality. Those of you that use the scheduling engine in OBIEE may already know and use the 'conditional scheduling' feature. This allows you to be more intelligent about what reports get run and sent to folks on a scheduled basis. You create a 'trigger' analysis (answer) that is executed at schedule time prior to the main report. When the schedule rolls around, the trigger is run; if it returns rows, then the main report is run and delivered. If there are no rows returned, then the main report is not run. Useful, right? Your users are not bombarded with 20 reports in their inbox every week that they need to wade through. They get a handful that they know they need to look at. If you ensure you use conditional formatting in the report then they can find the anomalous data in the reports very quickly and move on with the rest of their day. You could even think of OBIEE as a virtual team member, scouring the data on your behalf 24/7 and letting you know when it's found an issue. BI Publisher, wanting the team t-shirt and the khaki pants, has followed suit. You can now set up 'triggers' for it to execute before it runs the main report. Just like its big brother, if the scheduled report trigger returns rows of data, it then executes the main report. Otherwise, the report is skipped until the next schedule time rolls around. Sound familiar? BIP differs a little, in that you only need to construct a query to act as the trigger rather than a complete report. Let's assume we have a monthly wage-by-department report on a schedule. We only want to send the report to managers if their departmental wages reach and/or exceed a certain amount. The toughest part about this is coming up with the SQL to test the business rule you want to implement. For my example, it's not that tough: select d.department_name, sum(e.salary) as wage_total from employees e, departments d where d.department_id = e.department_id group by d.department_name having sum(e.salary) > 230000 We're looking for departments where the wage cost is greater than 230,000 Dexter Dollars! With a bit of messing I found out you can parameterize the query. Users can then set a value at schedule time if they need to. Creating the trigger is straightforward enough. You can create multiple triggers for users to select at schedule time. Notice I also used a parameter in the query, :wamount. Note the matching parameter in the tree on the left. You also don't need to return multiple columns; one is fine – the key is whether there are rows returned. You can build the rest of your report as usual. At scheduling time the Schedule tab has a bit more on it. If your users want to set the trigger, they check the Use Trigger box. The page will then pop fields to pick the appropriate trigger they want to use, even a trigger on another data model if needed. Note it will also ask for the parameter value associated with the trigger. At this point you should note that the data model does not make a distinction between trigger and data model (extract) parameters, so users will see the parameters on the General and Schedule tabs. If perchance you do need to have trigger-only parameters, you can just hide them from the report using the Parameters popup in the report designer – just un-check the 'Show' box. I have tested the opposite case where you do not want main report parameters seen in the trigger section. BIP handles that for you! Once the report hits its allotted schedule time, the trigger is executed. Based on the results, the report will either run or be 'skipped.' Now you have a smarter scheduler that will only deliver reports when folks need to see them and take action on the contents. More official info here for developers and here for users.

    Read the article

  • Using Delegates in C# (Part 1)

    - by rajbk
    This post provides a very basic introduction of delegates in C#. Part 2 of this post can be read here. A delegate is a class that is derived from System.Delegate.  It contains a list of one or more methods called an invocation list. When a delegate instance is “invoked” with the arguments as defined in the signature of the delegate, each of the methods in the invocation list gets invoked with the arguments. The code below shows example with static and instance methods respectively: Static Methods 1: using System; 2: using System.Linq; 3: using System.Collections.Generic; 4: 5: public delegate void SayName(string name); 6: 7: public class Program 8: { 9: [STAThread] 10: static void Main(string[] args) 11: { 12: SayName englishDelegate = new SayName(SayNameInEnglish); 13: SayName frenchDelegate = new SayName(SayNameInFrench); 14: SayName combinedDelegate =(SayName)Delegate.Combine(englishDelegate, frenchDelegate); 15: 16: combinedDelegate.Invoke("Tom"); 17: Console.ReadLine(); 18: } 19: 20: static void SayNameInFrench(string name) { 21: Console.WriteLine("J'ai m'appelle " + name); 22: } 23: 24: static void SayNameInEnglish(string name) { 25: Console.WriteLine("My name is " + name); 26: } 27: } We have declared a delegate of type SayName with return type of void and taking an input parameter of name of type string. On line 12, we create a new instance of this delegate which refers to a static method - SayNameInEnglish.  SayNameInEnglish has the same return type and parameter list as the delegate declaration.  Once a delegate is instantiated, the instance will always refer to the same target. Delegates are immutable. On line 13, we create a new instance of the delegate but point to a different static method. As you may recall, a delegate instance encapsulates an invocation list. You create an invocation list by combining delegates using the Delegate.Combine method (there is an easier syntax as you will see later). When two non null delegate instances are combined, their invocation lists get combined to form a new invocation list. This is done in line 14.  On line 16, we invoke the delegate with the Invoke method and pass in the required string parameter. Since the delegate has an invocation list with two entries, each of the method in the invocation list is invoked. If an unhandled exception occurs during the invocation of one of these methods, the exception gets bubbled up to the line where the invocation was made (line 16). If a delegate is null and you try to invoke it, you will get a System.NullReferenceException. We see the following output when the method is run: My name is TomJ'ai m'apelle Tom Instance Methods The code below outputs the same results as before. The only difference here is we are creating delegates that point to a target object (an instance of Translator) and instance methods which have the same signature as the delegate type. The target object can never be null. We also use the short cut syntax += to combine the delegates instead of Delegate.Combine. 
1: public delegate void SayName(string name); 2: 3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: Translator translator = new Translator(); 9: SayName combinedDelegate = new SayName(translator.SayNameInEnglish); 10: combinedDelegate += new SayName(translator.SayNameInFrench); 11:  12: combinedDelegate.Invoke("Tom"); 13: Console.ReadLine(); 14: } 15: } 16: 17: public class Translator { 18: public void SayNameInFrench(string name) { 19: Console.WriteLine("J'ai m'appelle " + name); 20: } 21: 22: public void SayNameInEnglish(string name) { 23: Console.WriteLine("My name is " + name); 24: } 25: } A delegate can be removed from a combination of delegates by using the –= operator. Removing a delegate from an empty list or removing a delegate that does not exist in a non empty list will not result in an exception. Delegates are invoked synchronously using the Invoke method. We can also invoke them asynchronously using the BeginInvoke and EndInvoke methods which are compiler generated.
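    As a quick illustration of the removal syntax mentioned above, reusing the SayName delegate and Translator class from the examples (a small sketch, not from the original post):

Translator translator = new Translator();
SayName combinedDelegate = new SayName(translator.SayNameInEnglish);
combinedDelegate += new SayName(translator.SayNameInFrench);

combinedDelegate("Tom");    // invocation list has two entries - both methods run

// Remove the French greeting; delegate equality is based on target + method,
// so a freshly created instance pointing at the same method matches the entry.
combinedDelegate -= new SayName(translator.SayNameInFrench);

combinedDelegate("Tom");    // now only SayNameInEnglish runs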

    Read the article

  • Functional Adaptation

    - by Charles Courchaine
    In real life and OO programming we’re often faced with using adapters, DVI to VGA, 1/4” to 1/8” audio connections, 110V to 220V, wrapping an incompatible interface with a new one, and so on.  Where the adapter pattern is generally considered for interfaces and classes a similar technique can be applied to method signatures.  To be fair, this adaptation is generally used to reduce the number of parameters but I’m sure there are other clever possibilities to be had.  As Jan questioned in the last post, how can we use a common method to execute an action if the action has a differing number of parameters, going back to the greeting example it was suggested having an AddName method that takes a first and last name as parameters.  This is exactly what we’ll address in this post. Let’s set the stage with some review and some code changes.  First, our method that handles the setup/tear-down infrastructure for our WCF service: 1: private static TResult ExecuteGreetingFunc<TResult>(Func<IGreeting, TResult> theGreetingFunc) 2: { 3: IGreeting aGreetingService = null; 4: try 5: { 6: aGreetingService = GetGreetingChannel(); 7: return theGreetingFunc(aGreetingService); 8: } 9: finally 10: { 11: CloseWCFChannel((IChannel)aGreetingService); 12: } 13: } Our original AddName method: 1: private static string AddName(string theName) 2: { 3: return ExecuteGreetingFunc<string>(theGreetingService => theGreetingService.AddName(theName)); 4: } Our new AddName method: 1: private static int AddName(string firstName, string lastName) 2: { 3: return ExecuteGreetingFunc<int>(theGreetingService => theGreetingService.AddName(firstName, lastName)); 4: } Let’s change the AddName method, just a little bit more for this example and have it take the greeting service as a parameter. 1: private static int AddName(IGreeting greetingService, string firstName, string lastName) 2: { 3: return greetingService.AddName(firstName, lastName); 4: } The new signature of AddName using the Func delegate is now Func<IGreeting, string, string, int>, which can’t be used with ExecuteGreetingFunc as is because it expects Func<IGreeting, TResult>.  Somehow we have to eliminate the two string parameters before we can use this with our existing method.  This is where we need to adapt AddName to match what ExecuteGreetingFunc expects, and we’ll do so in the following progression. 1: Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 2: Func<IGreeting, string, int> -> Func<IGreeting, int>   For the first step, we’ll create a method using the lambda syntax that will “eliminate” the last name parameter: 1: string lastNameToAdd = "Smith"; 2: //Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 3: Func<IGreeting, string, int> addName = (greetingService, firstName) => AddName(greetingService, firstName, lastNameToAdd); The new addName method gets us one step close to the signature we need.  
Let’s say we’re going to call this in a loop to add several names, we’ll take the final step from Func<IGreeting, string, int> -> Func<IGreeting, int> in line as a lambda passed to ExecuteGreetingFunc like so: 1: List<string> firstNames = new List<string>() { "Bob", "John" }; 2: int aID; 3: foreach (string firstName in firstNames) 4: { 5: //Func<IGreeting, string, int> -> Func<IGreeting, int> 6: aID = ExecuteGreetingFunc<int>(greetingService => addName(greetingService, firstName)); 7: Console.WriteLine(GetGreeting(aID)); 8: } If for some reason you needed to break out the lambda on line 6 you could replace it with 1: aID = ExecuteGreetingFunc<int>(ApplyAddName(addName, firstName)); and use this method: 1: private static Func<IGreeting, int> ApplyAddName(Func<IGreeting, string, int> addName, string lastName) 2: { 3: return greetingService => addName(greetingService, lastName); 4: } Splitting out a lambda into its own method is useful both in this style of coding as well as LINQ queries to improve the debugging experience.  It is not strictly necessary to break apart the steps & functions as was shown above; the lambda in line 6 (of the foreach example) could include both the last name and first name instead of being composed of two functions.  The process demonstrated above is one of partially applying functions, this could have also been done with Currying (also see Dustin Campbell’s excellent post on Currying for the canonical curried add example).  Matthew Podwysocki also has some good posts explaining both Currying and partial application and a follow up post that further clarifies the difference between Currying and partial application.  In either technique the ultimate goal is to reduce the number of parameters passed to a function.  Currying makes it a single parameter passed at each step, where partial application allows one to use multiple parameters at a time as we’ve done here.  This technique isn’t for everyone or every problem, but can be extremely handy when you need to adapt a call to something you don’t control.
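    For comparison with the partial application shown above, a fully curried form of the same adaptation takes exactly one argument per step until only the IGreeting parameter remains (a sketch that assumes it lives in the same class as the post's AddName and ExecuteGreetingFunc helpers):

// Curried form of Func<IGreeting, string, string, int>: one parameter per step.
Func<string, Func<string, Func<IGreeting, int>>> addNameCurried =
    lastName =>
        firstName =>
            greetingService => AddName(greetingService, firstName, lastName);

// Apply one argument at a time; the final step yields the Func<IGreeting, int>
// that ExecuteGreetingFunc expects.
Func<IGreeting, int> addBobSmith = addNameCurried("Smith")("Bob");
int aID = ExecuteGreetingFunc<int>(addBobSmith);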

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single-dimension, zero-indexed) have their own opcodes and IL syntax. One of their stranger properties is that they have had a kind of built-in covariance since long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance.

    SZ-array covariance

    To demonstrate, I'll tweak the classes I introduced in my previous posts:

        public class IncrementableClass {
            public int Value;
            public virtual void Increment(int incrementBy) { Value += incrementBy; }
        }
        public class IncrementableClassx2 : IncrementableClass {
            public override void Increment(int incrementBy) {
                base.Increment(incrementBy);
                base.Increment(incrementBy);
            }
        }

    In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever an IncrementableClass[] or object[] is required. When an SZ-array is used in this fashion, a run-time type check is performed when you try to insert an object into the array, to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time:

        IncrementableClass[] array = new IncrementableClassx2[1];
        array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException

    These checks are enforced by the various stelem* and ldelem* IL instructions in such a way as to ensure you can't insert an IncrementableClass into an IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction.

    ldelema

    This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and it maintained array type safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt?

        .method public static void CallIncrementArrayValue() {
            // IncrementableClassx2[] arr = new IncrementableClassx2[1]
            ldc.i4.1
            newarr IncrementableClassx2
            // arr[0] = new IncrementableClassx2();
            dup
            ldc.i4.0
            newobj instance void IncrementableClassx2::.ctor()
            stelem.ref
            // IncrementArrayValue<IncrementableClass>(arr, 0)
            // here, we're treating an IncrementableClassx2[] as IncrementableClass[]
            dup
            ldc.i4.0
            call void IncrementArrayValue<class IncrementableClass>(!!0[], int32)
            // ...
            ret
        }

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    And the result:

        Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array.
           at IncrementArrayValue[T](T[] arr, Int32 index)
           at CallIncrementArrayValue()

    Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], so the ldelema instruction fails: it's expecting an IncrementableClass[].

    On features and feature conflicts

    What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at run-time, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so lets us take the address of an array element using a type that is covariant with the actual run-time type of the array:

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            readonly. ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    But what about type safety? In return for ignoring the type check, the resulting controlled-mutability pointer can only be used in the following situations:

    - As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions
    - As the pointer parameter to ldobj or ldind*
    - As the source parameter to cpobj

    In other words, the only operations allowed are those that read the element through the pointer; stind* and similar instructions that would overwrite the array element through it are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although it addresses an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
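    For comparison, here is a minimal C# sketch of the same scenario (the IIncrementable interface and the exact class shapes are assumed from the author's earlier posts, so treat them as illustrative). The C# compiler should emit the readonly. ldelema / constrained. callvirt pattern for this kind of call itself, which is why the covariant call below works from C# even though the equivalent hand-written IL without the readonly. prefix throws:

        using System;

        public interface IIncrementable
        {
            void Increment(int incrementBy);
        }

        public class IncrementableClass : IIncrementable
        {
            public int Value;
            public virtual void Increment(int incrementBy) { Value += incrementBy; }
        }

        public class IncrementableClassx2 : IncrementableClass
        {
            public override void Increment(int incrementBy)
            {
                base.Increment(incrementBy);
                base.Increment(incrementBy);
            }
        }

        static class Program
        {
            // arr[index].Increment(1) is compiled to a (readonly.) ldelema !!T followed by
            // constrained. !!T callvirt, so the exact-type check on ldelema is skipped.
            static void IncrementArrayValue<T>(T[] arr, int index) where T : IIncrementable
            {
                arr[index].Increment(1);
            }

            static void Main()
            {
                // Array covariance: an IncrementableClassx2[] used as an IncrementableClass[].
                IncrementableClass[] arr = new IncrementableClassx2[] { new IncrementableClassx2() };

                IncrementArrayValue<IncrementableClass>(arr, 0);

                Console.WriteLine(arr[0].Value); // 2 - the x2 override ran
            }
        }

    Hand-writing the same method in IL without the readonly. prefix is what produces the ArrayTypeMismatchException shown above.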

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added in the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version compared to the 11.1.1.6 features for iPad support are pagination for tables and adaptive flow layout.

    The pagination for tables is self-explanatory: since the iPad doesn't support scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll through sets of records or jump directly to a specific set. See the image below.

    The adaptive flow layout takes a bit more explanation. On regular desktops, the UI you usually build for ADF Faces screens uses a stretch layout, meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars appear to let you scroll to the areas that are "hidden". On an iPad, however, this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically, you want the page to be sized based on its content, rather than based on the browser window size. In ADF Faces terminology this is done by setting the dimensionsFrom property to "children".

    And here comes the tricky part: in the past (and also today), when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to "parent" by default. The same is true for other layout components you'll add. At this point you might be wondering, "Does this mean I'll need to go to each of the layout components in my page and change the dimensionsFrom property value to children?" ADF Faces to the rescue... To eliminate these tedious manual changes, we introduced a new web.xml parameter, "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS". You basically add the following to your web.xml:

        <context-param>
            <description>
                This parameter controls the default value for component geometry on the page.
                Supported values are:
                    legacy - component attributes use the default values as specified for the attributes
                             in the tag documentation (default value)
                    auto   - component attributes use the correct default value given the value of their
                             parent component. For example, with this setting, the panelStretchLayout
                             will use "auto" as the default value for its "dimensionsFrom" attribute
                             instead of "parent".
            </description>
            <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
            <param-value>auto</param-value>
        </context-param>

    Once you set this parameter, you only need to set the dimensionsFrom attribute on the top-level layout component of your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch and flow layout based on the device accessing your application. For example, I use the following in my page:

        <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"
                               dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children'}">

    This results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below.

    Read the article
