Search Results

Search found 18319 results on 733 pages for 'reference parameters'.


  • How to debug/break in codedom compiled code

    - by Jason Coyne
    I have an application that loads C# source files dynamically and runs them as plugins. When I am running the main application in debug mode, is it possible to debug into the dynamic assembly? Obviously setting breakpoints is problematic, since the source is not part of the original project, but should I be able to step into the code, or break on exceptions thrown from it? Is there a way to get CodeDom to generate PDBs for this, or something similar? Here is the code I am using for dynamic compilation:
        CSharpCodeProvider codeProvider = new CSharpCodeProvider(new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });
        //codeProvider.
        ICodeCompiler icc = codeProvider.CreateCompiler();
        CompilerParameters parameters = new CompilerParameters();
        parameters.GenerateExecutable = false;
        parameters.GenerateInMemory = true;
        parameters.CompilerOptions = string.Format("/lib:\"{0}\"", Application.StartupPath);
        parameters.ReferencedAssemblies.Add("System.dll");
        parameters.ReferencedAssemblies.Add("System.Core.dll");
        CompilerResults results = icc.CompileAssemblyFromSource(parameters, Source);
        DLL.CreateInstance(t.FullName, false, BindingFlags.Default, null, new object[] { engine }, null, null);
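    A minimal sketch of one common approach, reusing the Source variable from the question: compile to disk with IncludeDebugInformation enabled so the compiler emits a PDB next to the assembly, after which the debugger can step into the generated code and break on its exceptions. The output path below is only illustrative.
        // Requires System.CodeDom.Compiler, Microsoft.CSharp, System.Collections.Generic and System.IO.
        var codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string> { { "CompilerVersion", "v3.5" } });
        var parameters = new CompilerParameters();
        parameters.GenerateExecutable = false;
        parameters.GenerateInMemory = false;            // write files to disk so a .pdb can be produced
        parameters.IncludeDebugInformation = true;      // emit debug symbols
        parameters.OutputAssembly = Path.Combine(Path.GetTempPath(), "MyPlugin.dll"); // hypothetical path
        parameters.ReferencedAssemblies.Add("System.dll");
        parameters.ReferencedAssemblies.Add("System.Core.dll");
        CompilerResults results = codeProvider.CompileAssemblyFromSource(parameters, Source);
        // Load via results.CompiledAssembly; with symbols present, a System.Diagnostics.Debugger.Break()
        // inside the plugin (or "break on thrown exceptions" in the IDE) stops in the generated source.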

  • Getting value from pointer

    - by Eric
    Hi, I'm having a problem getting the value from a pointer. I have the following code in C++:
        void* Nodo::readArray(VarHash& var, string varName, int posicion, float& d) {
            //some code before...
            void* res;
            float num = bit.getFloatFromArray(arregloTemp); // THIS FUNCTION RETURNS A FLOAT AND IT'S OK
            cout << "NUMBER " << num << endl;
            d = num;
            res = &num;
            return res;
        }

        int main() {
            float d = 0.0;
            void* res = n.readArray(v, "c", 0, d); // THE VALUES OF THE ARRAY ARE: {65.5, 66.5}
            float* car3 = (float*)res;
            cout << "RESULT_READARRAY " << *car3 << endl;
            cout << "FLOAT REFERENCE: " << d << endl;
        }
    The result of running this code is the following:
        NUMBER 65.5
        RESULT_READARRAY -1.2001   // INCORRECT, IT SHOULD MATCH NUMBER
        FLOAT REFERENCE: 65.5      // CORRECT
        NUMBER 66.5
        RESULT_READARRAY -1.2001   // INCORRECT, IT SHOULD MATCH NUMBER
        FLOAT REFERENCE: 66.5      // CORRECT
    For some reason, the value of the pointer returned by readArray is incorrect. I'm passing a float variable (d) by reference into the same function just to verify that the value is OK, and as you can see, THE FLOAT REFERENCE matches the NUMBER. If I declare the variable num as a static float, the first RESULT_READARRAY will be 65.5, which is correct; however, the next value will be the same instead of 66.5. Here is the result of running the code with a static float variable:
        NUMBER 65.5
        RESULT_READARRAY 65.5      // PERFECT
        FLOAT REFERENCE: 65.5      // PERFECT
        NUMBER 65.5                // THIS IS INCORRECT, IT SHOULD BE 66.5
        RESULT_READARRAY 65.5
        FLOAT REFERENCE: 65.5
    How can I get the correct value returned by readArray()?
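    The symptom is consistent with returning the address of a local: num lives on the stack frame of readArray and is gone by the time main dereferences the pointer, and making it static makes every call share a single slot. A minimal sketch of the usual fixes, with assumed names since the original class isn't shown:
        #include <iostream>

        // Option 1: let the caller own the value via the reference parameter (this already works).
        // Option 2: if a pointer must be returned, point at storage that outlives the call.
        void readValue(float source, float& d, float*& res) {
            float num = source;     // local copy, as in the original code
            d = num;                // out-parameter: still valid after return
            res = new float(num);   // heap copy: also valid after return (caller must delete it)
        }

        int main() {
            float d = 0.0f;
            float* res = nullptr;
            readValue(65.5f, d, res);
            std::cout << "RESULT " << *res << " REFERENCE " << d << std::endl;
            delete res;             // caller owns the heap copy
            return 0;
        }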

  • Do something else if ReaderWriterLockSlim is held

    - by user43838
    Hi everyone, I have implemented ReaderWriterLockSlim, but now I don't want threads to wait at the lock; I want them to do something else if the lock is held. I considered using IsWriteLockHeld, but that doesn't make much sense to me, since if two threads come in and enter the if statement at the same time, one will still end up waiting at the lock. Here is my code:
        ReaderWriterLockSlim rw = GetLoadingLock(parameters);
        rw = GetLoadingLock(parameters);
        try
        {
            rw.EnterWriteLock();
            item = this.retrieveCacheItem(parameters.ToString(), false);
            if (item != null)
            {
                parameters.DataCameFromCache = true;
                // if the data was found in the cache, return it immediately
                return item.data;
            }
            else
            {
                try
                {
                    object loaditem = null;
                    itemsLoading[parameters.ToString()] = true;
                    loaditem = this.retrieveDataFromStore(parameters);
                    return loaditem;
                }
                finally
                {
                    itemsLoading.Remove(parameters.ToString());
                }
            }
        }
        finally
        {
            rw.ExitWriteLock();
        }
    Can anyone please guide me in the right direction with this? Thanks.
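    A sketch of the usual non-blocking approach, assuming the same GetLoadingLock helper from the question: TryEnterWriteLock returns false immediately (or after a chosen timeout) instead of queuing, so the thread can take an alternative path.
        ReaderWriterLockSlim rw = GetLoadingLock(parameters);    // helper assumed from the question
        if (rw.TryEnterWriteLock(TimeSpan.Zero))                 // don't wait if someone already holds it
        {
            try
            {
                // load/refresh the cache entry here
            }
            finally
            {
                rw.ExitWriteLock();
            }
        }
        else
        {
            // lock is held by another thread: e.g. serve stale data or schedule a retry
        }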

  • C# sqlite syntax in ASP.NET?

    - by acidzombie24
    -edit- This is no longer relevant and the question doesn't make sense to me anymore. I think I wanted to know how to create tables, or whether the syntax is the same going from WinForms to ASP.NET. I am very used to SQLite (http://sqlite.phxsoftware.com/) and would like to create a DB in a similar style. How do I do this? It doesn't need to be the same, just similar enough for me to enjoy. An example of the syntax:
        connection = new SQLiteConnection("Data Source=" + sz + ";Version=3");
        command = new SQLiteCommand(connection);
        connection.Open();
        command.CommandText = "CREATE TABLE if not exists miscInfo(key TEXT, " +
                              "value TEXT, UNIQUE (key));";
        command.ExecuteNonQuery();
    The @name is a symbol, and command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc; replaces the symbol with escaped values/text/blobs:
        command.CommandText = "INSERT INTO discData(rootfolderID, time, volumeName, discLabel, " +
                              "userTitle, userDesc) " +
                              "VALUES(@rootfolderID, @time, @volumeName, @discLabel, " +
                              "@userTitle, @userDesc); " +
                              "SELECT last_insert_rowid() AS RecordID;";
        command.Parameters.Add("@rootfolderID", DbType.Int64).Value = d.rootfolderID;
        command.Parameters.Add("@time", DbType.Int64).Value = d.time;
        command.Parameters.Add("@volumeName", DbType.String).Value = d.volumeName;
        command.Parameters.Add("@discLabel", DbType.String).Value = d.discLabel;
        command.Parameters.Add("@userTitle", DbType.String).Value = d.userTitle;
        command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc;
        d.discID = (long)command.ExecuteScalar();
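    For what it's worth, the System.Data.SQLite syntax above carries over to ASP.NET unchanged; the practical difference is only where the database file and connection string live. A small sketch under that assumption, inside an ASP.NET page (the App_Data path and table are made up for illustration):
        string dbPath = Server.MapPath("~/App_Data/site.db");   // hypothetical location
        using (var connection = new SQLiteConnection("Data Source=" + dbPath + ";Version=3"))
        using (var command = new SQLiteCommand(connection))
        {
            connection.Open();
            command.CommandText = "CREATE TABLE IF NOT EXISTS miscInfo(key TEXT, value TEXT, UNIQUE(key));";
            command.ExecuteNonQuery();

            command.CommandText = "INSERT OR REPLACE INTO miscInfo(key, value) VALUES (@key, @value);";
            command.Parameters.Add("@key", DbType.String).Value = "lastRun";
            command.Parameters.Add("@value", DbType.String).Value = DateTime.UtcNow.ToString("o");
            command.ExecuteNonQuery();
        }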

  • using a stored procedure for login in c#

    - by Jin Yim
    Hi all, if I run the stored procedure with two parameter values (admin, admin), I get the following message:
        Session_UID User_Group_Name Sys_User_Name
        NULL Administrators NTMSAdmin
        No rows affected.
        (1 row(s) returned)
        @RETURN_VALUE = 0
        Finished running [dbo].[p_SYS_Login].
    To get the same result in C# I used the following code:
        string strConnection = Settings.Default.ConnectionString;
        using (SqlConnection conn = new SqlConnection(strConnection))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                SqlDataReader rdr = null;
                cmd.Connection = conn;
                cmd.CommandText = "p_SYS_Login";
                cmd.CommandType = CommandType.StoredProcedure;
                SqlParameter paramReturnValue = new SqlParameter();
                paramReturnValue.ParameterName = "@RETURN_VALUE";
                paramReturnValue.SqlDbType = SqlDbType.Int;
                paramReturnValue.SourceColumn = null;
                paramReturnValue.Direction = ParameterDirection.ReturnValue;
                cmd.Parameters.Add(paramReturnValue);
                cmd.Parameters.Add(paramGroupName);
                cmd.Parameters.Add(paramUserName);
                cmd.Parameters.AddWithValue("@Sys_Login", "admin");
                cmd.Parameters.AddWithValue("@Sys_Password", "admin");
                try
                {
                    conn.Open();
                    rdr = cmd.ExecuteReader();
                    string test = (string)cmd.Parameters["@RETURN_VALUE"].Value;
                    while (rdr.Read())
                    {
                        Console.WriteLine("test : " + rdr[0]);
                    }
                }
                catch (Exception ex)
                {
                    string message = ex.Message;
                    string caption = "MAVIS Exception";
                    MessageBoxButtons buttons = MessageBoxButtons.OK;
                    MessageBox.Show(message, caption, buttons, MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1);
                }
                finally
                {
                    cmd.Dispose();
                    conn.Close();
                }
            }
        }
    But I get nothing in the SqlDataReader rdr; is there something I am missing? Thanks.
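    A hedged sketch of two things worth checking: the snippet adds paramGroupName and paramUserName without ever defining them, and @RETURN_VALUE (like any output parameter) is only populated after the reader has been closed, so reading it straight after ExecuteReader() yields nothing useful. Column handling below is illustrative only.
        using (var conn = new SqlConnection(strConnection))
        using (var cmd = new SqlCommand("p_SYS_Login", conn) { CommandType = CommandType.StoredProcedure })
        {
            cmd.Parameters.AddWithValue("@Sys_Login", "admin");
            cmd.Parameters.AddWithValue("@Sys_Password", "admin");

            SqlParameter returnValue = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
            returnValue.Direction = ParameterDirection.ReturnValue;

            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Console.WriteLine("row: {0}", rdr[0]);   // result-set columns returned by the proc
                }
            }   // closing the reader is what populates return/output parameter values

            Console.WriteLine("return value: {0}", (int)returnValue.Value);
        }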

  • Self referencing userdata and garbage collection

    - by drtwox
    Because my userdata objects reference themselves, I need to delete and nil a variable before the garbage collector will collect it. Lua code:
        obj = object:new()
        -- Some time later
        obj:delete() -- Removes the self reference
        obj = nil    -- Ready for collection
    C code:
        typedef struct {
            int self; // Reference to the object
            // Other members and function references removed
        } Object;

        // Called from Lua to create a new object
        static int object_new( lua_State *L ) {
            Object *obj = lua_newuserdata( L, sizeof( Object ) );
            // Create the 'self' reference, userdata is on the stack top
            obj->self = luaL_ref( L, LUA_REGISTRYINDEX );
            // Put the userdata back on the stack before returning
            lua_rawgeti( L, LUA_REGISTRYINDEX, obj->self );
            // The object pointer is also stored outside of Lua for processing in C
            return 1;
        }

        // Called by Lua to delete an object
        static int object_delete( lua_State *L ) {
            Object *obj = lua_touserdata( L, 1 );
            // Remove the object's self reference
            luaL_unref( L, LUA_REGISTRYINDEX, obj->self );
            return 0;
        }
    Is there some way I can set the object to nil in Lua and have the delete() method called automatically? Alternatively, can the delete method nil all variables that reference the object? Can the self reference be made 'weak'?
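    A sketch of the weak-reference idea (C API calls only, reusing the Object struct from the question): references created with luaL_ref in the registry are strong and pin the userdata forever, whereas entries in a weak-valued table do not. Combined with a __gc metamethod on the userdata's metatable for cleanup, a plain obj = nil is then enough. Table and function names are assumptions.
        /* Store back-references in a weak-valued table instead of LUA_REGISTRYINDEX. */
        static void create_weak_ref_table( lua_State *L ) {
            lua_newtable( L );                      /* the table that will hold userdata references */
            lua_newtable( L );                      /* its metatable */
            lua_pushstring( L, "v" );
            lua_setfield( L, -2, "__mode" );        /* weak values: entries don't block collection */
            lua_setmetatable( L, -2 );
            lua_setfield( L, LUA_REGISTRYINDEX, "object_refs" );
        }

        /* __gc metamethod: Lua calls this when the userdata is collected, so cleanup
           happens automatically once no Lua variable references the object anymore. */
        static int object_gc( lua_State *L ) {
            Object *obj = lua_touserdata( L, 1 );
            /* release any C-side resources held by obj here */
            (void)obj;
            return 0;
        }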

  • Not able to call the method asynchronously in the unit test

    - by user43838
    Hi everyone, I am trying to call a method that takes an object called parameters:
        public void LoadingDataLockFunctionalityTest()
        {
            DataCache_Accessor target = DataCacheTest.getNewDataCacheInstance();
            target.itemsLoading.Add("WebFx.Caching.TestDataRetrieverFactorytestsync", true);
            DataParameters parameters = new DataParameters("WebFx.Core",
                "WebFx.Caching.TestDataRetrieverFactory", "testsync");
            parameters.CachingStrategy = CachingStrategy.TimerDontWait;
            parameters.CacheDuration = 0;
            string data = (string)target.performGetForTimerDontWaitStrategy(parameters);
            TestSyncDataRetriever.SimulateLoadingForFiveSeconds = true;
            Thread t1 = new Thread(delegate()
            {
                string s = (string)target.performGetForTimerDontWaitStrategy(parameters);
                Console.WriteLine(s ?? String.Empty);
            });
            t1.Start();
            t1.Join();
            Thread.Sleep(1000);
            ReaderWriterLockSlim rw = DataCache_Accessor.GetLoadingLock(parameters);
            Assert.IsTrue(rw.IsWriteLockHeld);
            Assert.IsNotNull(data);
        }
    My test is failing all the time and I am not able to step through the method. Can someone please point me in the right direction? Thanks.

  • How to get a nicely formatted PHP Web Service response?

    - by Bruno
    I called an API like this:
        $service = new Class_Service();
        $parameters = new GetClasses();
        $parameters->Request = new GetClassesRequest();
        $parameters->Request->SourceCredentials = new SourceCredentials();
        $parameters->Request->SourceCredentials->SourceName = "Name";
        $parameters->Request->SourceCredentials->Password = "Pass";
        $parameters->Request->SourceCredentials->SiteIDs = array( 12 );
        $classes = $service->GetClasses($parameters);
        var_dump($classes);
    And received a response like this:
        object(GetClassesResponse)#7 (1) {
          ["GetClassesResult"]=> object(GetClassesResult)#8 (6) {
            ["Classes"]=> object(stdClass)#9 (1) {
              ["Class"]=> array(25) {
                [0]=> object(Mi_Class)#10 (21) {
                  ["ClassScheduleID"]=> int(15)
                  ["Visits"]=> NULL
                  ["Clients"]=> NULL
                  ["Location"]=> object(Location)#11 (30) {
                    ["BusinessID"]=> NULL
                    ["SiteID"]=> int(12)
                    ["BusinessDescription"]=> NULL
                    ["AdditionalImageURLs"]=> object(stdClass)#12 (0) { }
                    ["FacilitySquareFeet"]=> NULL
    Does a response normally look like this? How do I go about getting the data in a formatted manner?
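    That output is just PHP's default var_dump() of the SOAP response object graph, so it is expected. A small sketch of friendlier ways to inspect or reshape it, assuming PHP 5.4+ for the pretty-print flag:
        <?php
        $classes = $service->GetClasses($parameters);

        // Human-readable dump while debugging:
        echo '<pre>' . print_r($classes, true) . '</pre>';

        // Or convert the object graph into plain arrays / JSON for further processing:
        $asArray = json_decode(json_encode($classes), true);
        echo json_encode($asArray, JSON_PRETTY_PRINT);

        // Individual fields are reached by walking the response structure shown in the dump:
        $first = $classes->GetClassesResult->Classes->Class[0];
        echo $first->ClassScheduleID;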

  • XSLT, process elements one by one

    - by qui
    Hi, I am quite weak at XSLT, so this might seem obvious. Here is some sample XML:
        <term>
          <name>cholecystocolonic fistula</name>
          <definition>blah blah</definition>
          <reference>cholecystocolostomy</reference>
        </term>
    And here is the XSLT I wrote a while ago to process it:
        <xsl:template name="term">
          {
            "dictitle": "<xsl:value-of select="name" disable-output-escaping="yes" />",
            "html": "<xsl:value-of select="definition" disable-output-escaping="yes"/>",
            "referece": "<xsl:value-of select="reference" disable-output-escaping="yes"/>
          }
        </xsl:template>
    Basically I am creating JSON from the XML. The requirements have now changed so that the XML can contain more than one definition tag and reference tag, and they can appear in any order, i.e. definition, reference, reference, definition, reference. How can I update the XSLT to accommodate this? It is probably worth mentioning that because my XSLT processor is .NET-based, I can only use XSLT 1.0 commands. Many thanks!
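    One hedged sketch using plain XSLT 1.0: emit JSON arrays and loop over however many definition and reference children exist, in document order. The array-shaped JSON is an assumption about what the consumer accepts, and it also corrects the "referece" spelling.
        <xsl:template name="term">
          {
            "dictitle": "<xsl:value-of select="name" disable-output-escaping="yes"/>",
            "html": [
              <xsl:for-each select="definition">
                "<xsl:value-of select="." disable-output-escaping="yes"/>"<xsl:if test="position() != last()">,</xsl:if>
              </xsl:for-each>
            ],
            "reference": [
              <xsl:for-each select="reference">
                "<xsl:value-of select="." disable-output-escaping="yes"/>"<xsl:if test="position() != last()">,</xsl:if>
              </xsl:for-each>
            ]
          }
        </xsl:template>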

  • iPad App Cookies Storage?

    - by Aakburns
    Hi, I have an application that sends you to a website that shows a login form. I've read up on cookies from the Apple reference (http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSHTTPCookie_Class/Reference/Reference.html#//apple_ref/occ/instm/NSHTTPCookie/initWithProperties:), but I'm honestly just not understanding it at all. Can someone please explain how to get cookies working for an app? Post sample code? Thanks.
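    A minimal Objective-C sketch of the usual pattern (the URLs are made up): cookies set by the site during login end up in the shared NSHTTPCookieStorage automatically, and can be read back or attached to later requests.
        // Read whatever cookies the login page stored for this site:
        NSURL *url = [NSURL URLWithString:@"https://example.com/"];   // hypothetical URL
        NSArray *cookies = [[NSHTTPCookieStorage sharedHTTPCookieStorage] cookiesForURL:url];
        for (NSHTTPCookie *cookie in cookies) {
            NSLog(@"%@ = %@", [cookie name], [cookie value]);
        }

        // Attach them to a later request manually if needed:
        NSDictionary *headers = [NSHTTPCookie requestHeaderFieldsWithCookies:cookies];
        NSMutableURLRequest *request =
            [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/account"]];
        [request setAllHTTPHeaderFields:headers];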

  • Ajax doesn't work on a remote server

    - by Nuha
    Hello. When I implemented a chat function, I used Ajax to send messages from one file to another. It works well on localhost, but when I upload it to the remote server it doesn't work. Can you tell me why? Does Ajax need special configuration? Ajax code:
        function Ajax_Send(GP, URL, PARAMETERS, RESPONSEFUNCTION) {
            var xmlhttp;
            try { xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); }
            catch (e) {
                try { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); }
                catch (e) {
                    try { xmlhttp = new XMLHttpRequest(); }
                    catch (e) { alert("Your Browser Does Not Support AJAX"); }
                }
            }

            err = "";
            if (GP == undefined) err = "GP ";
            if (URL == undefined) err += "URL ";
            if (PARAMETERS == undefined) err += "PARAMETERS";
            if (err != "") { alert("Missing Identifier(s)\n\n" + err); return false; }

            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4) {
                    if (RESPONSEFUNCTION == "") return false;
                    eval(RESPONSEFUNCTION(xmlhttp.responseText));
                }
            }

            if (GP == "GET") {
                URL += "?" + PARAMETERS;
                xmlhttp.open("GET", URL, true);
                xmlhttp.send(null);
            }

            if (GP = "POST") {
                PARAMETERS = encodeURI(PARAMETERS);
                xmlhttp.open("POST", URL, true);
                xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
                xmlhttp.setRequestHeader("Content-length", PARAMETERS.length);
                xmlhttp.setRequestHeader("Connection", "close");
                xmlhttp.send(PARAMETERS);
            }
        }
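    Ajax itself needs no server-side configuration. A hedged guess at the usual culprit: a URL that works locally, for example an absolute http://localhost/... path, either points at the wrong host or violates the browser's same-origin policy once the page is served from the remote server. Keeping the request relative to the current site avoids both problems (the file name and callback below are made up):
        // Relative URL: the request goes to the same host the page was loaded from.
        Ajax_Send("POST", "chat/send_message.php",
                  "msg=" + encodeURIComponent(message),
                  handleChatResponse);   // hypothetical callback defined elsewhere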

  • How can I improve the recursion capabilities of my ECMAScript implementation?

    - by ChaosPandion
    After some recent tests I have found that my implementation cannot handle very much recursion, although after running a few tests in Firefox I found that this may be more common than I originally thought. I believe the basic problem is that my implementation requires three calls to make a function call. The first call is made to a static method named Call that makes sure the call is being made to a callable object and gets the value of any arguments that are references. The second call is made to a method named Call which is defined in the ICallable interface; this method creates the new execution context and builds the lambda expression if it has not been created. The final call is made to the lambda that the function object encapsulates. Clearly making a function call is quite heavy, but I am sure that with a little bit of tweaking I can make recursion a viable tool when using this implementation.
        public static object Call(ExecutionContext context, object value, object[] args)
        {
            var func = Reference.GetValue(value) as ICallable;
            if (func == null)
            {
                throw new TypeException();
            }
            if (args != null && args.Length > 0)
            {
                for (int i = 0; i < args.Length; i++)
                {
                    args[i] = Reference.GetValue(args[i]);
                }
            }
            var reference = value as Reference;
            if (reference != null)
            {
                if (reference.IsProperty)
                {
                    return func.Call(reference.Value, args);
                }
                else
                {
                    return func.Call(((EnviromentRecord)reference.Value).ImplicitThisValue(), args);
                }
            }
            return func.Call(Undefined.Value, args);
        }

        public object Call(object thisObject, object[] arguments)
        {
            var lexicalEnviroment = Scope.NewDeclarativeEnviroment();
            var variableEnviroment = Scope.NewDeclarativeEnviroment();
            var thisBinding = thisObject ?? Engine.GlobalEnviroment.GlobalObject;
            var newContext = new ExecutionContext(Engine, lexicalEnviroment, variableEnviroment, thisBinding);
            Engine.EnterContext(newContext);
            var result = Function.Value(newContext, arguments);
            Engine.LeaveContext();
            return result;
        }

  • output parameter into label

    - by MyHeadHurts
    Instead of returning my output parameter value from the stored procedure to my label, it returns the default value I set the output parameter to. Why can't I put my output parameter into my label text?
        Dim reader As SqlDataReader
        cmd.Parameters.AddWithValue("@tour", "2365")
        cmd.Parameters.Add("@tourname", SqlDbType.VarChar)
        cmd.Parameters("@tourname").Direction = ParameterDirection.Output
        cmd.CommandText = "test"
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Connection = conn
        conn.Open()
        reader = cmd.ExecuteReader()
        Dim myTable As DataTable = New DataTable()
        myTable.Load(reader)
        DropDownList1.DataSource = myTable
        DropDownList1.DataTextField = "ddate7"
        DropDownList1.DataBind()
        Label1.Text = cmd.Parameters("@tourname").ToString
        conn.Close()
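    A hedged sketch of the usual fixes here: give the VarChar output parameter a Size, remember that output values are only populated once the reader has been closed, and read the parameter's .Value rather than calling ToString on the parameter object itself. The size of 100 is an assumption.
        cmd.Parameters.AddWithValue("@tour", "2365")
        Dim tourName As SqlParameter = cmd.Parameters.Add("@tourname", SqlDbType.VarChar, 100) ' needs a size
        tourName.Direction = ParameterDirection.Output
        cmd.CommandText = "test"
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Connection = conn
        conn.Open()

        Dim myTable As New DataTable()
        Using reader As SqlDataReader = cmd.ExecuteReader()
            myTable.Load(reader)
        End Using ' closing the reader is what populates the output parameter

        DropDownList1.DataSource = myTable
        DropDownList1.DataTextField = "ddate7"
        DropDownList1.DataBind()
        Label1.Text = If(tourName.Value Is DBNull.Value, String.Empty, tourName.Value.ToString())
        conn.Close()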

  • asp.net listbox

    - by lodun
    Why doesn't this code work? When I run it, VWD 2008 Express shows me this error: Object reference not set to an instance of an object. Line 73: kom.Parameters.Add("@subcategories", SqlDbType.Text).Value = s_categoreis.SelectedItem.ToString(); This is my .ascx file:
        <asp:ListBox ID="categories" runat="server" Height="380px" CssClass="kat" AutoPostBack="true"
            DataSourceID="SqlDataSource1" DataTextField="Categories" DataValueField="ID"
            onselectedindexchanged="kategorije_SelectedIndexChanged"></asp:ListBox>
        <asp:Button ID="Button1" CssClass="my" runat="server" Text="click" onclick="Button1_Click" />
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <asp:ListBox ID="s_categoreis" CssClass="pod" Height="150px" Enabled="true" runat="server"></asp:ListBox>
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="categories" EventName="SelectedIndexChanged" />
            </Triggers>
        </asp:UpdatePanel>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:estudent_piooConnectionString %>"
            SelectCommand="SELECT [ID], [Categories] FROM [categories]">
        </asp:SqlDataSource>
    And this is my .ascx.cs:
        SqlConnection veza;
        SqlCommand kom = new SqlCommand();
        SqlParameter par1 = new SqlParameter();
        SqlParameter par2 = new SqlParameter();
        SqlParameter par3 = new SqlParameter();
        SqlParameter par4 = new SqlParameter();
        SqlParameter par5 = new SqlParameter();
        SqlParameter par6 = new SqlParameter();
        SqlParameter par7 = new SqlParameter();
        SqlParameter par8 = new SqlParameter();
        SqlParameter par9 = new SqlParameter();

        protected void Page_Load(object sender, EventArgs e)
        {
            Listapod_kategorije(1);
        }

        protected void kategorije_SelectedIndexChanged(object sender, EventArgs e)
        {
            Listapod_kategorije(Convert.ToInt32(kategorije.SelectedValue));
        }

        private void Listapod_kategorije(int broj)
        {
            SqlDataSource ds = new SqlDataSource();
            ds.ConnectionString = ConfigurationManager.ConnectionStrings["estudent_piooConnectionString"].ConnectionString;
            ds.SelectCommand = "Select * from pod_kategorije where kat_id=" + broj;
            pod_kategorije.DataSource = ds;
            pod_kategorije.DataTextField = "pkategorija";
            pod_kategorije.DataValueField = "ID";
            pod_kategorije.DataBind();
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            Guid jk = new Guid();
            object datum = DateTime.Now;
            veza = new SqlConnection(@"server=85.94.76.170\PADME; database=estudent_pioo;uid=pioo;pwd=1234567");
            Random broj = new Random();
            int b_kor = broj.Next(1, 1000);
            kom.Parameters.Add("@text", SqlDbType.Text).Value = str;
            kom.Parameters.Add("@user", SqlDbType.UniqueIdentifier).Value = jk;
            kom.Parameters.Add("@date", SqlDbType.DateTime).Value = datum;
            kom.Parameters.Add("@visits", SqlDbType.Int).Value = 0;
            kom.Parameters.Add("@answers", SqlDbType.Int).Value = 0;
            kom.Parameters.Add("@username", SqlDbType.Text).Value = "unknown_" + b_kor.ToString();
            kom.Parameters.Add("@categories", SqlDbType.Text).Value = categories.SelectedItem.ToString();
            kom.Parameters.Add("@sub_categories", SqlDbType.Text).Value = s_categoreis.SelectedItem.ToString();
            veza.Open();
            kom.ExecuteNonQuery();
            veza.Close();
            Response.Redirect("default.aspx");
        }
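    A hedged sketch of the two most likely causes of that NullReferenceException, with names reused from the question: SelectedItem is null when nothing is selected in s_categoreis (so calling ToString() on it throws), and re-binding the list on every Page_Load wipes the selection on postback. Guarding the access and binding only on the first load usually addresses both.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)            // don't re-bind on postback, or the selection is lost
            {
                Listapod_kategorije(1);
            }
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            if (categories.SelectedItem == null || s_categoreis.SelectedItem == null)
            {
                // nothing selected yet: bail out (or show a validation message) instead of throwing
                return;
            }
            kom.Parameters.Add("@categories", SqlDbType.Text).Value = categories.SelectedItem.Text;
            kom.Parameters.Add("@sub_categories", SqlDbType.Text).Value = s_categoreis.SelectedItem.Text;
            // ... rest of the insert logic as in the question
        }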

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano,  at this URL could be found the original article:  http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. Thinking about the enterprise cloud, come to mind many possible configurations and new opportunities in enterprise environments. Various customers needs that serve as guides to this new trend are often very different, but almost always united by two main objectives: Elasticity of infrastructure both Hardware and Software Investments related to the progressive needs of the current infrastructure Characteristics of innovation and economy. A concrete use case that I worked on recently demanded the fulfillment of two basic requirements of economy and innovation.The client had the need to manage a variety of data cache, which can process complex queries and parallel computational operations, maintaining the caches in a consistent state on different server instances, on which the application was installed.In addition, the customer was looking for a solution that would allow him to manage the likely situations in load peak during certain times of the year.For this reason, the customer requires a replication site, on which convey part of the requests during periods of peak; the desire was, however, to prevent the immobilization of investments in owned hardware-software architectures; so, to respond to this need, it was requested to seek a solution based on Cloud technologies and architectures already offered by the market. Coherence can already now address the requirements of large cache between different nodes in the cluster, providing further technology to search and parallel computing, with the simultaneous use of all hardware infrastructure resources. Moreover, thanks to the functionality of "Push Replication", which can replicate and update the information contained in the cache, even to a site hosted in the cloud, it is satisfied the need to make resilient infrastructure that can be based also on nodes temporarily housed in the Cloud architectures. There are different types of configurations that can be realized using the functionality "Push-Replication" of Coherence. Configurations can be either: Active - Passive  Hub and Spoke Active - Active Multi Master Centralized Replication Whereas the architecture of this particular project consists of two sites (Site 1 and Site Cloud), between which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive Configuration type (Hub and Spoke). If, however, the requirement should change over time, it will be particularly easy to change this configuration in an Active-Active configuration type. Although very simple, the small sample in this post, inspired by the specific project is effective, to better understand the features and capabilities of Coherence and its configurations. Let's create two distinct coherence cluster, located at miles apart, on two different domain contexts, one of them "hosted" at home (on-premise) and the other one hosted by any cloud provider on the network (or just the same laptop to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so a simple client can insert data only into the Site 1. On both sites will be subscribed a listener, who listens to the variations of specific objects within the various caches. 
To implement these features, you need 4 simple classes: CachedResponse.java Represents the POJO class that will be inserted into the cache, and fulfills the task of containing useful information about the hypothetical links navigation ResponseSimulatorHelper.java Represents a link simulator, which has the task of randomly creating objects of type CachedResponse that will be added into the caches CacheCommands.java Represents the model of our example, because it is responsible for receiving instructions from the controller and performing basic operations against the cache, such as insert, delete, update, listening, objects within the cache Shell.java It is our controller, which give commands to be executed within the cache of the two Sites So, summarily, we execute the java class "Shell", asking it to put into the cache 100 objects of type "CachedResponse" through the java class "CacheCommands", then the simulator "ResponseSimulatorHelper" will randomly create new instances of objects "CachedResponse ". Finally, the Shell class will listen to for events occurring within the cache on the Site Cloud, while insertions and deletions are performed on Site 1. Now, we realize the two configurations of two respective sites / cluster: Site 1 and Site Cloud.For the Site 1 we define a cache of type "distributed" with features of "read and write", using the cache class store for the "push replication", a functionality offered by the project "incubator" of Oracle Coherence.For the "Site Cloud" we expect even the definition of “distributed” cache type with tcp proxy feature enabled, so it can receive updates from Site 1.  Coherence Cache Config XML file for "storage node" on "Site 1" site1-prod-cache-config.xml Coherence Cache Config XML file for "storage node" on "Site Cloud" site2-prod-cache-config.xml For two clients "Shell" which will connect respectively to the two clusters we have provided two easy access configurations.  Coherence Cache Config XML file for Shell on "Site 1" site1-shell-prod-cache-config.xml Coherence Cache Config XML file for Shell on "Site Cloud" site2-shell-prod-cache-config.xml Now, we just have to get everything and run our tests. 
To start at least one "storage" node (which holds the data) for the "Cloud Site", we can run the standard class  provided OOTB by Oracle Coherence com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud To start at least one "storage" node (which holds the data) for the "Site 1", we can perform again the standard class provided by Coherence  com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1 Then, we start the first client "Shell" for the "Cloud Site", launching the java class it.javac.Shell  using these parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud Finally, we start the second client "Shell" for the "Site 1", re-launching a new instance of class  it.javac.Shell  using  the following parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1  And now, let’s execute some tests to validate and better understand our configuration. TEST 1The purpose of this test is to load the objects into the "Site 1" cache and seeing how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access the "Site 1", let’s write and run the command: load test/100 Within the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: size passive-cache Expected result If all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster where they will have been posted into a respective cache, which we named "passive-cache". TEST 2The purpose of this test is to listen to deleting and adding events happening on the "Site 1" and that are replicated within the cache on "Cloud Site". 
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.   The Problem Error Compiling transformation: Metadata file 'dotless.Core' could not be found In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010. On a side note: The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCSS yet, go check it out now!  In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS which enables rapid changes and a lot of developer-flexibility as you evolve your CSS and UI. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, its about the T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly.  Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build?  Oh, Microsoft Did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus its a breaking change.  So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: GAC your assemblies and use Namespace Reference or Fully Qualified Type Name Use a hard-coded Fully Qualified UNC path Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name.  Use or Define a Windows Environment Variable to build a Fully Qualified UNC path. Use a Visual Studio Macro to build a Fully Qualified UNC path. Option #1 & 2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding aint that hard! 
The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment.  However, if you want to go that route, simply use the following assembly directive style: <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #> Lets go Public! Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio.  Think of it like a VS-only GAC.  This is likely the best place for something like dotLessCSS and is my preferred solution.  However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location.   Normally this is located at:  C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies Once you have copied your assembly there, you use the type name or namespace syntax again: <#@ assembly name="dotless.Core" #> Save the Environment! Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo-code, frameworks, and products where you don't have control over the local system.  The syntax for including a environment variable in your assembly directive looks like the following, just as you would expect: <#@ assembly name="%mypath%\dotless.Core.dll" #> “mypath” is a Windows environment variable you setup that points to some fully qualified UNC path on your system.  In the right situation this can be a great solution such as one where you use a msi installer for deployment, or where you have a pre-existing environment variable you can re-use. OMG Macros! Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying-up your dev environment or GAC with extra deployments.  An example looks like this: <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #> In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution.   This is just one of the many macros you can use.  If you are familiar with creating Pre/Post-build Event scripts, you can use its dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build.   However, its still not compatible with Visual Studio 2008, so if you have a T4 Template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually.   I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)”  statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008. Conclusion As you can see, we went from 3 options with Visual Studio 2008, to 5 options (plus one problem) with Visual Studio 2010.  As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new found power. Hopefully this all made sense and was helpful to you.  If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.  
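As a quick recap of the directive styles discussed above, here is a minimal sketch of a .tt header using the macro-based option (#5); the output extension, assembly path under /lib, and imported namespace are purely illustrative:
    <#@ template debug="true" hostspecific="true" language="C#" #>
    <#@ output extension=".css" #>
    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
    <#@ import namespace="dotless.Core" #>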
Happy T4 templating, and “May the fourth be with you!”

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.  The real complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple different development teams, multiple business teams, and a highly complex business model (and processes to go along with it).  While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications.  Why?  It's all about time and money.  If you are unable to manage your complex applications in an efficient manner, the cost of managing it will increase dramatically as will the time to get things done (time to market).  On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are. This article is intended to discuss a number of key areas to think about when designing complex applications on ATG.  Some of this can get fairly technical, so it may help to get some background first.  You can get enough of the required background information from this post.  After reading that, come back here and follow along. Application Design Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries.  To get started, let's assume that we are talking about an application like that.  One that has these properties: Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies The organization is fairly loosely-coupled - single brand, but different businesses across different countries There is some common functionality across all sites in all countries There is some common functionality across different sites within the same country Sites within a single country may have some unique functionality - relative to other sites in the same country Complex product catalog (mostly in terms of bundles, eligibility, and compatibility) At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work... Code / configuration - assemble into modules When it comes to defining your modules for a complex application, there are a number of goals: Divide functionality between the modules in a way that maps to your business Group common functionality 'further down in the stack of modules' Provide a good balance between shared resources and autonomy for countries / sites Now I'll describe a high level approach to how you could accomplish those goals...  Let's start from the bottom and work our way up.  At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff.  You want to make sure that you are leveraging all the modules that make sense in order to get the most value from ATG as possible - and less stuff you'll have to write yourself.  
On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module described as follows: Sits directly on top of ATG modules Used by all applications across all countries and sites - this is the foundation for everyone Contains everything that is common across all countries / all sites Once established and settled, will change less frequently than other 'higher' modules Encapsulates as many enterprise-wide integrations as possible Will provide means of code sharing therefore less development / testing - faster time to market Contains a 'reference' web application (described below) The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense).  We'll define those modules as follows: Sits on top of the corporate foundation module Contains what is unique to all sites in a given country Responsible for managing any resource bundles for this country (to handle multiple languages) Overrides / replaces corporate integration points with any country-specific ones Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site as follows: Sits on top of the country it resides in module Contains what is unique for a given site within a given country Will mostly contain configuration, but could also define some unique functionality as well Contains one or more web applications The graphic below should help to indicate how these modules fit together: Web applications As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules.  Web applications are also contained within ATG modules, however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging.  One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site.  Here's a description of the 'reference' web application: Contains minimal / sample reference styling as this will mostly be addressed at the site level web app Focus on functionality - ensure that core functionality is revealed via this web application Each individual site can use this as a starting point There may be multiple types of web apps (i.e. B2C, B2B, etc) There are some techniques to share web application assets - i.e. multiple web applications, defined in the web.xml, and it's worth investigating, but is out of scope here. Reference infrastructure In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites.  It's more likely that different countries (or regions) could have their own solution for infrastructure.  In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment.  Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as it's baseline cost and the incremental cost to scale up with volume.  Having some consistency in terms of infrastructure will save time and money as new countries / sites come online.  Here are some properties of the reference infrastructure: Standardized approach to setup of hardware Type and number of servers Defines application server, operating system, database, etc... 
- including vendor and specific versions Consistent naming conventions Provides a consistent base of terminology and understanding across environments Defines which ATG services run on which servers Production Staging BCC / Preview Each site can change as required to meet scale requirements Governance / organization It should be no surprise that the complex application we're talking about is backed by an equally complex organization.  One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization.  Here are some ideas and goals to work towards: Establish a committee to make enterprise-wide decisions that affect all sites Representation should be evenly distributed Should have a clear communication procedure Focus on high level business goals Evaluation of feature / function gaps and how that relates to ATG release schedule / roadmap Determine when to upgrade & ensure value will be realized Determine how to manage various levels of modules Who is responsible for maintaining corporate / country / site layers Determine a procedure for controlling what goes in the corporate foundation module Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions Create a innovation team Quickly develop new features, perform proof of concepts All teams can benefit from their findings Summary At this point, it should be clear why the topics above (design, governance, organization, etc) are critical to being able to efficiently manage a complex application.  To summarize, it's all about competitive advantage...  You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers.  You can reduce cost by reducing development time, time allocated to testing (don't have to test the corporate foundation module over and over again - do it once), and optimizing operations.  With an efficient design, you can improve your time to market and your business will be more flexible  and agile.  Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity) and this will be rewarded - you're now a leader. In addition to the above, you'll realize soft benefits as well.  Your staff will be operating in a culture based on sharing.  You'll want to reward efforts to improve and enhance the foundation as this will benefit everyone.  This culture will inspire innovation, which can only lend itself to your competitive advantage.

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, as you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the amount of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.Photo: Erik Araujo, stock.xchng

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these perform poorly and it is then when developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index has brought a radical improvement in query execution times.  The execution times went down from 4 seconds to 2 milliseconds! E-commerce solution based on Oracle ATG – Endeca In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800.000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive steps in the price calculation discount process were the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below; Starting point Initial query was: String filter = "levelId = :iLevelId AND  salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+ "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+ "AND areaId = :iAreaId AND endDate >=  :iEndDate AND startDate <= :iStartDate"; Map<String, Object> params = new HashMap<String, Object>(10); // Fill all parameters. params.put("iLevelId", xxxx); // Executing filter. Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter); With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds. First round of optimizations The first optimisation step was the creation of separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows: globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX" ) , false, null); After adding these indexes the query execution time was reduced to between 450 ms and 1s. However, these execution times were still not good enough.  Second round of optimizations In this optimisation phase a Coherence query explain plan was used to identify how many entires each index reduced the results set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".  
    Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order of the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to use a different query, changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant cases:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. This did mean, however, that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the MultiExtractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones used by the LessEqualsFilter and GreaterEqualsFilter. We were now running a scenario with an 8-attribute composite filter and 2 single-attribute filters.
    The composite index created was as follows:

        ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
        MultiExtractor me = new MultiExtractor(ve);
        // Fill composite parameters.
        String iSalesCompanyId = xxxx;
        ...
        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, iSalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index the order of the attributes inside the ValueExtractor was not relevant.
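    For reference, the "query explain plan" used in the second round of optimizations can be obtained programmatically with the QueryRecorder aggregator. The following is a minimal sketch, assuming Coherence 3.7.1 or later and reusing the globalDiscountsFilter and cache names from the post (everything else is illustrative, not the author's original code):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.aggregator.QueryRecorder;

        // Run the filter through QueryRecorder to get an explain plan instead of results;
        // the returned record shows, per index, how many entries it eliminated and the cost.
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Object explainPlan = globalDiscountsCache.aggregate(
                globalDiscountsFilter,
                new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
        System.out.println(explainPlan);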

    Read the article

  • Android Camera in Portrait on SurfaceView

    - by Prasanna
    Hello, I tried several things to try to get the camera preview to show up in portrait on a SurfaceView. Nothing worked. I am testing on a Droid that has 2.0.1. I tried:

    1) forcing the layout to be portrait with:

        this.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);

    2) setting the camera parameters:

        Camera.Parameters parameters = camera.getParameters();
        parameters.set("orientation", "portrait");
        parameters.setRotation(90);
        camera.setParameters(parameters);

    Is there something else I can try? If this is a bug in Android or the phone, how can I make sure that this is the case so that I have proof to inform the client? Thanks, Prasanna
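    A hedged sketch of one commonly suggested approach: on Android 2.2 (API level 8) and later, Camera.setDisplayOrientation() rotates the preview independently of the capture parameters. It does not exist on 2.0.1, which is consistent with the behavior described above, so this only helps once the device is on a newer release. The camera field and method placement below are illustrative, not from the question:

        import android.hardware.Camera;
        import android.view.SurfaceHolder;
        import java.io.IOException;

        // In the SurfaceHolder.Callback of the SurfaceView, with the activity locked to portrait.
        public void surfaceCreated(SurfaceHolder holder) {
            camera = Camera.open();
            camera.setDisplayOrientation(90);      // rotate the preview for portrait (API 8+)
            try {
                camera.setPreviewDisplay(holder);  // attach the preview to the SurfaceView
            } catch (IOException e) {
                camera.release();
                camera = null;
                return;
            }
            camera.startPreview();
        }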

    Read the article

  • Add/remove Cake named URL parameter for a link

    - by sibidiba
    Using CakePHP 1.3 there are named parameters in the URL like .../name:value/... These are used, for example, by pagination links: .../page:2/key:date/sort:desc/... How do I generate links with HtmlHelper::link(), adding or deleting such named parameters from the current URL? Basically I want to create links that add/remove/modify the category:ID named parameter in the current URL, without touching anything else in it: the anchor, the other named parameters, or the GET parameters. Or how can I pass named parameters to HtmlHelper::link()?
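    A hedged, untested sketch of one way this is often handled in CakePHP 1.3 (the category key comes from the question; $this->params and the $categoryId variable are assumptions for illustration): rebuild the URL array from the current request's named parameters, add or unset the one you care about, and pass the array to HtmlHelper::link().

        // In a view: keep the current controller/action and named parameters, override only "category".
        $url = $this->params['named'];
        $url['controller'] = $this->params['controller'];
        $url['action']     = $this->params['action'];
        $url['category']   = $categoryId;                  // add or replace category:ID
        echo $this->Html->link('Filter by this category', $url);

        // Same idea, but removing the category parameter.
        unset($url['category']);
        echo $this->Html->link('All categories', $url);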

    Read the article

  • How do I pass W3 validation for Google checkout url?

    - by Dinesh
    When I validate the page with the W3C validator, I get a few errors for the code below:

        <input type="image" name="Google Checkout" alt="Fast checkout through Google" src="https://sandbox.google.com/checkout/buttons/checkout.gif?merchant_id=xxxxxxxxx&w=168&h=44&style=white&variant=text&loc=en_US" />

    The errors are as follows:
    1. cannot generate system identifier for general entity "w"
    2. reference to entity "w" for which no system identifier could be generated
    3. general entity "h" not defined and no default entity
    4. reference to entity "h" for which no system identifier could be generated
    5. general entity "style" not defined and no default entity
    6. reference to entity "style" for which no system identifier could be generated
    7. general entity "variant" not defined and no default entity
    8. reference to entity "variant" for which no system identifier could be generated
    9. general entity "loc" not defined and no default entity
    10. reference to entity "loc" for which no system identifier could be generated

    These are the only errors and they all come from the URL; is there a way to pass W3C validation for this URL?
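    All of these errors stem from raw ampersands inside an attribute value: in (X)HTML an & begins an entity reference, so each & in the query string has to be written as &amp;. A sketch of the corrected markup (merchant_id masked as in the question); the browser still sends plain & characters when it requests the image:

        <input type="image" name="Google Checkout" alt="Fast checkout through Google"
               src="https://sandbox.google.com/checkout/buttons/checkout.gif?merchant_id=xxxxxxxxx&amp;w=168&amp;h=44&amp;style=white&amp;variant=text&amp;loc=en_US" />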

    Read the article

  • ODP.NET Code Example Critque or best practices

    - by andrewWinn
    I currently have a DataAccess Layer in VB.NET. I am not too happy with my implementation of both my ExecuteQuery (as DataSet) and ExecuteNonQuery functions. Does anyone have any code that I could see? My code just doesn't look clean. Any thoughts or critiques on it would be appreciated also.

        Using odpConn As OracleConnection = New OracleConnection(_myConnString)
            odpConn.Open()
            If _beginTransaction Then
                txn = odpConn.BeginTransaction(IsolationLevel.Serializable)
            End If
            Try
                Using odpCmd As OracleCommand = odpConn.CreateCommand()
                    odpCmd.CommandType = CommandType.Text
                    odpCmd.CommandText = sSql
                    For i = 0 To parameters.Parameters.Count - 1
                        Dim prm As New OracleParameter
                        prm = DirectCast(parameters.Parameters(i), ICloneable).Clone
                        odpCmd.Parameters.Add(prm)
                    Next
                    If (odpConn.State = ConnectionState.Closed) Then
                        odpConn.Open()
                    End If
                    iToReturn = odpCmd.ExecuteNonQuery()
                    If _beginTransaction Then
                        txn.Commit()
                    End If
                End Using
            Catch
                txn.Rollback()
            End Try
        End Using
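    A hedged sketch of one way to tighten this pattern, written in C# since the thread mixes both .NET languages (the method shape, parameter names and the decision to always use a transaction are illustrative assumptions, not the asker's requirements): let Using/using blocks own disposal, keep the transaction in the same scope, and rethrow after rollback instead of silently swallowing the exception as the empty Catch above does.

        using System.Data;
        using Oracle.DataAccess.Client;

        public int ExecuteNonQuery(string connString, string sql, params OracleParameter[] parameters)
        {
            using (OracleConnection conn = new OracleConnection(connString))
            {
                conn.Open();
                using (OracleTransaction txn = conn.BeginTransaction(IsolationLevel.Serializable))
                using (OracleCommand cmd = conn.CreateCommand())
                {
                    // ODP.NET enlists commands in the connection's active local transaction automatically.
                    cmd.CommandType = CommandType.Text;
                    cmd.CommandText = sql;
                    cmd.BindByName = true;                 // bind :named parameters by name, not position
                    foreach (OracleParameter p in parameters)
                    {
                        cmd.Parameters.Add(p);             // caller supplies fresh parameter instances
                    }
                    try
                    {
                        int affected = cmd.ExecuteNonQuery();
                        txn.Commit();
                        return affected;
                    }
                    catch
                    {
                        txn.Rollback();
                        throw;                             // do not swallow the original error
                    }
                }
            }
        }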

    Read the article

  • How to select the value of the xsi:type attribute in SQL Server?

    - by kralizek
    Considering this XML document:

        DECLARE @X XML (DOCUMENT search.SearchParameters) =
        '<parameters xmlns="http://www.educations.com/Search/Parameters.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <parameter xsi:type="category" categoryID="38" />
        </parameters>';

    I'd like to access the value of the xsi:type attribute. According to this blog post, the xsi:type attribute is special and can't be accessed with the usual keywords/functions. How can I do it? PS: I tried

        WITH XMLNAMESPACES (
            'http://www.educations.com/Search/Parameters.xsd' as p,
            'http://www.w3.org/2001/XMLSchema-instance' as xsi
        )
        SELECT @X.value('(/p:parameters/p:parameter/@xsi:type)[1]','nvarchar(max)')

    but it didn't work.
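    A hedged sketch of one workaround that is often suggested (untested against the search.SearchParameters schema collection; the variable name is illustrative): cast the typed XML to plain, untyped xml first. Without the schema collection attached, xsi:type is treated as an ordinary attribute and the original XQuery expression works.

        DECLARE @untyped xml = CONVERT(xml, @X);  -- drop the schema collection typing

        WITH XMLNAMESPACES (
            'http://www.educations.com/Search/Parameters.xsd' AS p,
            'http://www.w3.org/2001/XMLSchema-instance' AS xsi
        )
        SELECT @untyped.value('(/p:parameters/p:parameter/@xsi:type)[1]', 'nvarchar(max)') AS parameterType;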

    Read the article

  • Accessing object property as string and setting its value

    - by ludicco
    Hello there, I have an object in C# of the class Account; each account has an owner, a reference, etc. One way I can access an account's properties is through accessors like account.Reference, but I would like to be able to access it using dynamic string selectors like account["PropertyName"], just like in JavaScript. So I would have account["Reference"], which would return the value, but I also would like to be able to assign a new value after that, like account["Reference"] = "124ds4EE2s". I've noticed I can use DataBinder.Eval(account, "Reference") to get a property based on a string, but with this I can't assign a value to the property. Any idea on how I could do that? Thanks a lot
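    A hedged sketch of one way to get both reads and writes (the Account class and its Reference/Owner properties come from the question; the indexer itself is an illustration, not the asker's code): add a string indexer to the class that uses reflection, so account["Reference"] works for get and set.

        using System;
        using System.Reflection;

        public class Account
        {
            public string Owner { get; set; }        // properties from the question
            public string Reference { get; set; }

            // JavaScript-style string access to the public instance properties.
            public object this[string propertyName]
            {
                get
                {
                    PropertyInfo property = GetType().GetProperty(propertyName);
                    if (property == null)
                        throw new ArgumentException("Unknown property: " + propertyName);
                    return property.GetValue(this, null);     // read via reflection
                }
                set
                {
                    PropertyInfo property = GetType().GetProperty(propertyName);
                    if (property == null)
                        throw new ArgumentException("Unknown property: " + propertyName);
                    property.SetValue(this, value, null);     // write via reflection
                }
            }
        }

        // Usage:
        //   account["Reference"] = "124ds4EE2s";
        //   string reference = (string)account["Reference"];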

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >