Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

  • Unable to change the value of the variable

    - by Legend
    I'm using a discrete event simulator called ns-2 that was built using Tcl and C++. I was trying to write some code in Tcl:

        set ns [new Simulator]
        set state 0
        $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
        $ns at 1.0 "changeVal"
        $ns at 2.0 "puts \"At 2.0 values of state is: $state\""
        proc changeVal {} {
            global state
            global ns
            $ns at-now "set state [expr $state+1]"
            puts "Changed value of state to $state"
        }
        $ns run

    Here's the output:

        At 0.0 value of state is: 0
        Changed value of state to 0
        At 2.0 values of state is: 0

    The value of state does not seem to change. I am not sure if I am doing something wrong in using Tcl. Does anyone have an idea as to what might be going wrong here?

    EDIT: Thanks for the help. Actually, ns-2 is something over which I do not have much control (unless I recompile the simulator itself). I tried out the suggestions and here's the output. For this code:

        set ns [new Simulator]
        set state 0
        $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
        $ns at 1.0 "changeVal"
        $ns at 9.0 "puts \"At 2.0 values of state is: $state\""
        proc changeVal {} {
            global ns
            set ::state [expr {$::state+1}]
            $ns at-now "puts \"At [$ns now] changed value of state to $::state\""
        }
        $ns run

    the output is:

        At 0.0 value of state is: 0
        At 1 changed value of state to 1
        At 2.0 values of state is: 0

    And for this code:

        set ns [new Simulator]
        set state 0
        $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
        $ns at 1.0 "changeVal"
        $ns at 9.0 "puts \"At 2.0 values of state is: $state\""
        proc changeVal {} {
            global ns
            set ::state [expr {$::state+1}]
            $ns at 1.0 {puts "At 1.0 values of state is: $::state"}
        }
        $ns run

    the output is:

        At 0.0 value of state is: 0
        At 1.0 values of state is: 1
        At 2.0 values of state is: 0

    It doesn't seem to work... Not sure if it's a problem with ns-2 or my code.

  • Redirect to default action in Struts 2

    - by topher-j
    I have an action with an empty string for name defined in the root namespace, and I want to redirect to that action from another action if a certain result is found, but it doesn't seem to work. Here's the default action:

        <action name="" class="com.example.actions.HomeAction">
            <result name="success" type="freemarker">freemarker/home.ftl</result>
        </action>

    And I'm defining the redirect in the global-results for the package:

        <global-results>
            <result name="sendToRouting" type="redirectAction">
                <param name="actionName"></param>
                <param name="namespace">/</param>
            </result>
        </global-results>

    I've tried taking out the actionName parameter, but that doesn't work. If I put a name in for the HomeAction and reference it by name in the global-results, it works, so I'm assuming the problem is the lack of an action name for the redirect. Any thoughts?

  • How to implement "business rules" in Rails?

    - by Zabba
    What is the way to implement "business rules" in Rails? Let us say I have a car and want to sell it:

        car = Cars.find(24)
        car.sell

    The car.sell method will check a few things:

    - Does current_user own the car? Check: car.user_id == current_user.id
    - Is the car listed for sale in the sales catalog? Check: car.catalogs.ids.include? car.id

    If all is o.k. then the car is marked as sold. I was thinking of creating a class called Rules:

        class Rules
          def initialize(user, car)
            @user = user
            @car = car
          end

          def can_sell_car?
            @car.user_id == @user.id && @car.catalogs.ids.include?(@car.id)
          end
        end

    And using it like this:

        class Car
          def sell
            if Rules.new(current_user, self).can_sell_car?
              ..sell the car...
            else
              @error_message = "Cannot sell this car"
              nil
            end
          end
        end

    As for getting the current_user, I was thinking of storing it in a global variable. I think that whenever a controller action is called, it's always a "fresh" call, right? If so then storing the current user as a global variable should not introduce any risks (like some other user being able to access another user's details). Any insights are appreciated!

    UPDATE: So, the global variable route is out! Thanks to PeterWong for pointing out that global variables persist! I'm now thinking of using this way:

        class Rules
          def self.can_sell_car?(current_user, car)
            ......checks....
          end
        end

    And then calling Rules.can_sell_car?(current_user, @car) from the controller action. Any thoughts on this new way?

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x reload-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I just changed one toggle I can do it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question):

        (setq skeleton-pair t)
        (setq skeleton-pair-on-word t)
        (global-set-key (kbd "[") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "(") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "{") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "<") 'skeleton-pair-insert-maybe)

    Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented all of it out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e, by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting out, if such a thing exists... Thanks :)

  • How can I prevent default_environment variables from getting set by Capistrano's sudo action?

    - by Logan Koester
    My deploy.rb sets some environment variables to use the regular user's local Ruby rather than the system-wide one:

        set :default_environment, {
          :PATH => '/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games',
          :RUBY_VERSION => 'ruby-1.9.1-p378',
          :GEM_HOME => '/home/myapp/.rvm/gems/ruby-1.9.1-p378',
          :GEM_PATH => '/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global'
        }

    Naturally, when a task is using sudo, I would expect the system-wide Ruby to be used instead. But it seems the environment variables are being set anyway, which is obviously invalid for the root user and returns an error:

        executing "sudo -p 'sudo password: ' /etc/init.d/god stop"
        servers: ["myapp.com"]
        [myapp.com] executing command
        command finished
        failed: "env PATH=/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games RUBY_VERSION=ruby-1.9.1-p378 GEM_HOME=/home/myapp/.rvm/gems/ruby-1.9.1-p378 GEM_PATH=/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/god stop'" on myapp.com

    It makes no difference whether I use Capistrano's sudo "system call" or the regular run "sudo system call". How can I avoid this?

  • How to access hidden template in unnamed namespace?

    - by Johannes Schaub - litb
    Here is a tricky situation, and I wonder what ways there are to solve it:

        namespace {
            template <class T> struct Template { /* ... */ };
        }

        typedef Template<int> Template;

    Sadly, the Template typedef interferes with the Template template in the unnamed namespace. When you try to use Template<float> in the global scope, the compiler raises an ambiguity error between the template name and the typedef name. You don't have control over either the template name or the typedef name. Now I want to know whether it is possible to:

    1. Create an object of the typedefed type Template (i.e. Template<int>) in the global namespace.
    2. Create an object of the type Template<float> in the global namespace.

    You are not allowed to add anything to the unnamed namespace. Everything should be done in the global namespace. This is out of curiosity, because I was wondering what tricks there are for solving such an ambiguity. It's not a practical problem I hit during daily programming.

  • How do you detect when the Flowplayer has started playing the video?

    - by Alex
    Hi. I'm using the Flowplayer Flash video player to play MP4 videos inside an AnythingSlider. I need to detect when the user has clicked the start button on the video so I can stop the slideshow and allow the user to view the video. I've tried using this code just to get an alert box, but it doesn't do anything (the code is at the end of the page, before the closing </body> tag):

        <script type="text/javascript">
            $f("player", "global/js/flowplayer/flowplayer-3.1.5.swf", {
                onStart: function(clip) {
                    alert('player started');
                }
            });
        </script>

    The Flowplayer is implemented as part of the html5media library, which automatically detects whether the browser can handle the <video> tag; if not, it replaces the anchor tag below with the Flowplayer:

        <video id="vid" width="372" height="209" poster="global/vid/bbq-poster.jpg" controls preload>
            <source id="videoSource" src="global/vid/BBQ_Festival.mp4"
                    type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'
                    onerror="fallback(this.parentNode)"></source>
            <a href="global/vid/BBQ_Festival.mp4" style="display:block;width:372px;height:209px; padding-left:5px; " id="player"> </a>
        </video>

    So the $f() function call is in a JavaScript block at the end of the page, to give the player a chance to load correctly. What am I doing wrong? Thanks.

  • Sharing runtime variables between files

    - by nightcracker
    I have a project with a few files that all include the header global.hpp. Those files want to share and update information that is relevant for the whole program during runtime (that data is gathered progressively as the program runs, but the fields of data are known at compile time). Now my idea was to use a struct like this:

    global.hpp:

        #include <string>

        #ifndef _GLOBAL_SESSION_STRUCT
        #define _GLOBAL_SESSION_STRUCT
        struct session_struct {
            std::string username;
            std::string password;
            std::string hostname;
            unsigned short port;
            // more data fields as needed
        };
        #endif

        extern struct session_struct session;

    main.cpp:

        #include "global.hpp"

        struct session_struct session;

        int main(int argc, char* argv[]) {
            session.username = "user";
            session.password = "secret";
            session.hostname = "example.com";
            session.port = 80;
            // other stuff, etc
            return 0;
        }

    Now every file that includes global.hpp can just read & write the fields of the session struct and easily share information. Is this the correct way to do this?

    NOTE: For this specific project no threading is used. But please (for future projects and other people reading) clarify in your answer how this (or your proposed) solution works when threaded. Also, for this example/project session variables are shared, but this should also apply to any other form of shared variables.
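    The pattern in the question works as long as exactly one translation unit provides the definition. Below is a minimal sketch of the same idea, with the extern declaration kept inside the include guard and a mutex added for the threaded case; the guard name, the mutex, and the trimmed field list are illustrative assumptions, not taken from the original project:

        // global.hpp
        #ifndef GLOBAL_SESSION_STRUCT_HPP
        #define GLOBAL_SESSION_STRUCT_HPP

        #include <mutex>
        #include <string>

        struct session_struct {
            std::string username;
            std::string password;
            std::string hostname;
            unsigned short port;
        };

        // Declared here, defined exactly once in one .cpp file.
        extern session_struct session;
        // Guards concurrent access if threads are introduced later.
        extern std::mutex session_mutex;

        #endif

        // main.cpp
        #include "global.hpp"

        session_struct session;        // the single definition
        std::mutex session_mutex;

        int main() {
            std::lock_guard<std::mutex> lock(session_mutex);  // harmless when single-threaded
            session.username = "user";
            session.port = 80;
            return 0;
        }

    In a single-threaded program the mutex is unnecessary; once several threads read and write the struct, every access should take the lock (or the fields should become atomics, or the data should be passed explicitly instead of shared globally).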

  • In Visual Studio (2008) is there a way to have a custom dependent file on another custom file?

    - by rball
    Instead of a *.cs code-behind or code-beside, I'd like to have a *.js file. I'm developing an MVC application and have no need for a code-beside because I have controllers, but in certain cases it'd be nice to have a JavaScript code-beside, or some way to associate the file with the page it's being used on. I suppose I could just name them similarly, but I want to show the association if possible, so there's no question about what the file is for. Typically what I'm talking about is what Visual Studio does now with your Global.asax file; you get a plus sign to the left:

        + Global.asax

    Once you expand it you'll get:

        - Global.asax
              Global.asax.cs

    I'd like the same thing to happen:

        + Home.spark

        - Home.spark
              Home.spark.js

    Updated: My existing csproj file has a path to the actual file; not sure if that's screwing it up. I've currently got:

        <ItemGroup>
          <Content Include="Views\User\Profile.spark.js">
            <DependentUpon>Views\User\Profile.spark</DependentUpon>
          </Content>
        </ItemGroup>
        <ItemGroup>
          <Content Include="Views\User\Profile.spark" />
        </ItemGroup>

    and it's simply just showing the files beside each other.

  • Why is Log4Net not creating log file in production?

    - by uriDium
    I am using VS2005, a website project, a web deployment project, and Log4Net. I can use logging when I am developing locally: I can see the log files and everything is fine. When I build my website (using the web deployment project), I use the "deploy as a single DLL" option. When I then check the locations where my log files should be, I cannot see any files. Is there a way to troubleshoot this? I don't think adding the debug value to the App Settings will help, because I don't have a console since it is a website.

    EDIT: I don't want the 150 rep to go to waste, so one last time. I compared the internal trace from my dev environment to the trace from production. My dev environment trace shows the call to the XML configurator where the production one does not. I have code in the global.asax Application_Start() method. I put debug code in there and it is getting called in dev but not in production. I think this is where the web deployment project is causing some issues. Does the global.asax get compiled into the single DLL? When I do a build in the deployment directory I see a global.compiled file. Must this go into the bin folder in production? Or is the global.asax code in the single DLL? Having both in the bin folder, or just the DLL, didn't change anything.

  • Strange code behaviour?

    - by goldenmean
    Hi, I have some C code with a structure declaration that has an array of int[576] declared in it. For some reason I had to remove this array from the structure, so I replaced it with a pointer (int *ptr;), declared a global array of the same type somewhere else in the code, and initialized the pointer by assigning the global array to it. That way I did not have to change the way I was accessing this array from other parts of my code. The code works fine / gives the desired output when the array is declared inside the structure, but it gives junk output when I declare it as a pointer in the structure and assign a global array to that pointer as part of the pointer initialization. All this code is being run on MS-VC 6.0 / Windows / Intel x86. I tried the following things:

    1) Suspected structure padding/alignment, but could not get any leads. If structure alignment could be the culprit at all, how can I narrow it down and confirm it?

    2) I have made sure that in both cases the array is initialized to some default values, say 0, before its first use, and that it is not being used before initialization.

    3) I tried using a global array as well as malloc-based memory for this newly declared array. Same result: junk output.

    Am I missing something? How can I zero in on the problem? Any pointers would be helpful. Thanks, -AD.
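    One classic way this array-to-pointer swap goes wrong, independent of padding, is code that still treats the member as in-place storage: sizeof on the struct, memset/memcpy over the whole struct, struct assignment, or writing the struct to a file all behave differently once the 576 ints are replaced by a pointer. A small self-contained sketch of that failure mode follows; the struct and field names here are invented for illustration and are not taken from the original code:

        #include <cstdio>
        #include <cstring>

        static int backing[576];          // global storage replacing the old in-place array

        struct Frame {                    // hypothetical struct standing in for the real one
            int header;
            int *samples;                 // was: int samples[576];
        };

        int main() {
            Frame f;
            f.samples = backing;          // pointer now refers to the global array

            // Old-style "clear the whole struct" code: with the embedded array this
            // zeroed the samples in place; now it zeroes the POINTER itself, so any
            // later access through f.samples hits a null or wrong address.
            std::memset(&f, 0, sizeof f);

            std::printf("sizeof(Frame) = %zu, samples = %p\n",
                        sizeof(Frame), static_cast<void *>(f.samples));
            return 0;
        }

    If the original code (or any library the struct is handed to) does this kind of whole-struct memset/memcpy, serialization, or struct copy, then re-attaching the pointer after those operations, or keeping the array inside the struct, would explain and fix the junk output.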

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. This is the basics of the original code:

        for i in range(0, 2000):
            s = socket(AF_INET, SOCK_STREAM)
            result = s.connect_ex((TargetIP, i))
            if(result == 0) :
                c = "Port %d: OPEN\n" % (i,)
            s.close()

    This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project so it's nothing too extreme, but I've run the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong or what?

        import threading
        from socket import *
        import time

        a = 0
        b = 0
        c = ""
        d = ""

        def ScanLow():
            global a
            global c
            for i in range(0, 1000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0) :
                    c = "Port %d: OPEN\n" % (i,)
                s.close()
                a += 1

        def ScanHigh():
            global b
            global d
            for i in range(1001, 2000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0) :
                    d = "Port %d: OPEN\n" % (i,)
                s.close()
                b += 1

        Target = raw_input("Enter Host To Scan:")
        TargetIP = gethostbyname(Target)
        print "Start Scan On Host ", TargetIP
        Start = time.time()
        threading.Thread(target = ScanLow).start()
        threading.Thread(target = ScanHigh).start()

        e = a + b
        while e < 2000:
            f = raw_input()

        End = time.time() - Start
        print c
        print d
        print End
        g = raw_input()

  • Can't call an object method. PHP reports variable undefined

    - by user1285697
    This is the weirdest bug! It is probably something silly, but I have no idea how to fix it. If anyone could help, I would be most grateful! I have three files: one is called items.php, another is called tableFunctions.php, and the third is called mysql.php. I use two global objects called 'mysql' and 'tableFunctions'. They are stored in the files 'mysql.php' and 'tableFunctions.php', respectively. In each file, I create an instance of its object, assigning it to the global variable $_mysql or $_table, like this:

    In the file mysql.php:

        global $_mysql;
        $_mysql = new mysql();

    In the file tableFunctions.php:

        global $_table;
        $_table = new tableFunctions();

    Here's how it is supposed to work: The items.php file includes the tableFunctions.php file... which in turn needs the mysql.php file, so it includes it too. In the items.php file, I call the method getTable(), which is contained in the object tableFunctions (and in the variable $_table), like this:

        $t = $_table->getTable('items');

    The getTable function calls the method arrayFromResult(), which is contained within the object mysql (and in the variable $_mysql), like this:

        $result = $_mysql->arrayFromResult($r);

    That's where I get the error. PHP says that the variable '$_mysql' is undefined, but I defined it in the 'mysql.php' file (see above). I also included mysql.php with the following code:

        include_once 'mysql.php';

    I have no idea what is wrong! If anyone can help, that would be much appreciated. The source files can be downloaded with the following link: https://www.dropbox.com/sh/bjj2gyjsybym89r/YLxqyNvQdn

  • Using Intent extras with AlarmManager

    - by Ashwin
    I want to know if this code will work (I cannot try it out right now; moreover, I have a few doubts that have to be cleared):

        Intent intent = new Intent(context, AlarmReceiver.class);
        intent.putExtra("user", global.getUsername());
        intent.putExtra("password", global.getPassword());
        PendingIntent sender = PendingIntent.getBroadcast(context, 192837, intent,
                PendingIntent.FLAG_UPDATE_CURRENT);

        // Get the AlarmManager service
        Log.v("inside log_run", "new service started");
        AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE);
        am.setRepeating(AlarmManager.RTC_WAKEUP, IMMEDIATELY, 60000, sender);
        finish();

    As you can see, this code starts an AlarmManager with setRepeating(). If you look at the intent (actually the PendingIntent) passed on to the BroadcastReceiver, two extras are passed along. These are global variables that live as long as the application is running. But this AlarmManager is meant to run in the background (that is, the application will be alive only for the first few calls of the AlarmManager to the BroadcastReceiver).

    My question: Will AlarmManager make a copy of the global variables (the username and password) and maintain this copy to be passed along with the intent? Because these values will be used in the BroadcastReceiver.

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initalization time: %ld usecs/connect\n", (numcharlookups 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Login as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initalization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm.  The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work. Release 8  Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8.  In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and the with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business.  Take a look yourself at what’s available now. What are people saying?Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.”  In an interview with Lilley for a blog post by Dennis Howlett  (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture.  In closing, Howlett’s said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content.  From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs.  When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. 
No training and no explanation required.” Finally, in a story by Computer Weekly  about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.”  I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.  We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and are now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."  Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke. 
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas.   We closed with an excellent reception hosted by ADP Payroll services at Bistango. Want to read more?Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site.  You can also find out where we’ll be next at the Events page on UsableApps.

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that this debate regards whether and how the EU should recognize standards and specifications from certain fora and consortia based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make: a)"Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based". b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants". c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)". d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards." e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on". f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation." g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful." First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and lack of perspective. I will back this up by three facts, and leave it there. 1. Since the 1980s, global industry fora and consortia, such as IETF, W3C and OASIS have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante. - Practically no ICT system can be built without using fora and consortia standards (FCS). - Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate. 
- FCS are of high relevance for achieving and promoting interoperability and driving innovation. 2. FCS are complementary to the formally recognized standards organizations including the ESOs. - No work will be taken away from the ESOs should the EU recognize certain FCS. - Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say. - ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS. - ETSI, also an ESO, seems to clearly recognize these facts. 3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia. - W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host and in 2003, ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board includes Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumni of the same Advisory Board. - OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, who is an Independent Consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave. - IETF: The oversight of its activities is by the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo whose affiliation is the University Carlos III of Madrid, Spain as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture. - Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality. - A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU registered companies or not. CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking from a publicly funded organization. 
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default FilterAttribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your Exceptions, it's pretty sweet! Global Filters The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute());     } HandleErrorAttributes The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using Filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute         {             ExceptionType = typeof(DbException),             // DbError.cshtml is a view in the Shared folder.             View = "DbError",             Order = 2         });         filters.Add(new HandleErrorAttribute());     }Error Views All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error can not have a specified model! This is because they already have a model, and that is System.Web.Mvc.HandleErrorInfo @model System.Web.Mvc.HandleErrorInfo           @{     ViewBag.Title = "DbError"; } <h2>A Database Error Has Occurred</h2> @if (Model != null) {     <p>@Model.Exception.GetType().Name<br />     thrown in @Model.ControllerName @Model.ActionName</p> }Errors Outside of the MVC Pipeline The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline. 
<system.web>  <customErrors mode="On" defaultRedirect="~/error">     <error statusCode="404" redirect="~/error/notfound"></error>  </customErrors>Sample Controllers public class ExampleController : Controller {     public ActionResult Exception()     {         throw new ArgumentNullException();     }     public ActionResult Db()     {         // Inherits from DbException         throw new MyDbException();     } } public class ErrorController : Controller {     public ActionResult Index()     {         return View();     }     public ActionResult NotFound()     {         return View();     } } Putting It All Together If we have all the code above included in our MVC 3 project, here is how the following scenario's will play out: 1.       A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view. 2.       A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view. 3.       Go to a non-existent page. You will be redirect to the Error controller's NotFound action by the CustomErrors configuration for HTTP StatusCode 404. But don't take my word for it, download the sample project and try it yourself. Three Important Lessons Learned For the most part this is all pretty straight forward, but there are a few gotcha's that you should remember to watch out for: 1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: The HandleError Info model. Do not try to set the model on your error page or pass in a different object through a controller action, it will just blow up and cause a second exception after your first exception! 2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's ok! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then when you render that error view from your action, you can only use the HandlerErrorInfo model or ViewData dictionary to populate your page. 3) The HandleErrorAttribute obeys if CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with customer errors. In Summary The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these perform poorly and it is then when developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index has brought a radical improvement in query execution times.  The execution times went down from 4 seconds to 2 milliseconds! E-commerce solution based on Oracle ATG – Endeca In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800.000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive steps in the price calculation discount process were the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below; Starting point Initial query was: String filter = "levelId = :iLevelId AND  salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+ "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+ "AND areaId = :iAreaId AND endDate >=  :iEndDate AND startDate <= :iStartDate"; Map<String, Object> params = new HashMap<String, Object>(10); // Fill all parameters. params.put("iLevelId", xxxx); // Executing filter. Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter); With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds. First round of optimizations The first optimisation step was the creation of separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows: globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX" ) , false, null); After adding these indexes the query execution time was reduced to between 450 ms and 1s. However, these execution times were still not good enough.  Second round of optimizations In this optimisation phase a Coherence query explain plan was used to identify how many entires each index reduced the results set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".  
Parameters associated to object attributes with high-cardinality should appear at the beginning of the filter, or more specifically, the attributes that filters out the highest of number records should be placed at the beginning. But examining corporate global discount data we realized that depending on the values of the parameters used in the query the “good” order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query that were used in its relevant cases: String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+ "AND manufacturer = :iManufacturer AND endDate >=  :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId  AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; Using the appropriate query depending on the value of brand and family parameters the query execution time dropped to between 100 ms and 150 ms. But these these execution times were still not good enough and the solution was cumbersome. Third and last round of optimizations The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supporte in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes.  All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running in an scenario with an 8-attributes composite filter and 2 single attribute filters. 
The composite index created was as follows:

ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
MultiExtractor me = new MultiExtractor(ve);
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
globalDiscountsCache.addIndex(me, false, null);

And the final query was:

ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
MultiExtractor me = new MultiExtractor(ve);

// Fill composite parameters.
String SalesCompanyId = xxxx;
...

AndFilter composite = new AndFilter(new EqualsFilter(me,
        Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
        new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor array was not relevant.

    Read the article

  • Error when Eclipse started and now my package explorer is empty!

    - by carpenteri
    Friends, Just a quick introduction, I'm currently learning Java, using a combination of the Head First Java book and Eclipse. Everything was going well until tonight! When I started up Eclipse tonight, I saw an error message which I didn't pay attention to (I know! I know!) and acknowledged after which the project explorer was empty where it used to contain my Head First project! After a quick "google" I found the workspace.metadata.log and the errors are shown below. The version of Eclipse I am using is: 20100218-1602 and the only plugin that I use is egit. Any help would be much appreciated. Thanks !SESSION 2010-06-08 19:24:33.841 ----------------------------------------------- eclipse.buildId=unknown java.version=1.5.0_22 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_GB Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.ui.workbench 4 2 2010-06-08 19:24:36.475 !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench". !STACK 1 org.eclipse.ui.WorkbenchException: Content is not allowed in prolog. at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:121) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) ... 6 more !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:41.442 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'OpenTypeHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 6 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:41.442 !MESSAGE Problems reading information from XML 'OpenTypeHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:50.435 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'QualifiedTypeNameHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 
25 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:50.435 !MESSAGE Problems reading information from XML 'QualifiedTypeNameHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311)

    Read the article

  • CCNet and .Net 4.0

    - by Pino
    Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format We have just upgraded one of our projects to .Net 4.0 and have re-configured our build server (Cruise Control .Net). Anyone help with the above error? I have included more trace below. > debug: > > [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\web.config'. > [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.admin\web.config'. > [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\bin\Intelligencia.UrlRewriter.dll'. > [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\bin\UrlRewritingNet.UrlRewriter.dll'. > [exec] Microsoft (R) Build Engine Version 4.0.30319.1 > [exec] [Microsoft .NET Framework, Version 4.0.30319.1] > [exec] Copyright (C) Microsoft Corporation 2007. All rights > reserved. > [exec] > [exec] Build started 11/05/2010 12:02:16. > [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" > on node 1 (default targets). > [exec] ValidateSolutionConfiguration: > [exec] Building solution configuration "Debug|Any CPU". > [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" > (1) is building > "C:\Builds\SilverChip\Working\sc.lib\sc.lib.csproj" > (2) on node 1 (default targets). > [exec] CoreCompile: > [exec] Skipping target "CoreCompile" because all output files > are up-to-date with respect to the > input files. > [exec] _CopyAppConfigFile: > [exec] Skipping target "_CopyAppConfigFile" because all > output files are up-to-date with > respect to the input files. > [exec] CopyFilesToOutputDirectory: > [exec] sc.lib -> C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll > [exec] Done Building Project "C:\Builds\SilverChip\Working\sc.lib\sc.lib.csproj" > (default targets). > [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" > (1) is building > "C:\Builds\SilverChip\Working\sc_admin.metaproj" > (3) on node 1 (default targets). > [exec] Build: > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll" > to "sc.admin\\Bin\sc.lib.dll". > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\SubSonic.Core.dll" > to "sc.admin\\Bin\SubSonic.Core.dll". > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.pdb" > to "sc.admin\\Bin\sc.lib.pdb". > [exec] C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe > -v /sc.admin -p sc.admin\ -u -f -d PrecompiledWeb\sc.admin\ > [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or > assembly 'System.Data' or one of its > dependencies. An attempt was made to > load a program with an incorrect > format. > [C:\Builds\SilverChip\Working\sc_admin.metaproj] > [exec] Done Building Project "C:\Builds\SilverChip\Working\sc_admin.metaproj" > (default targets) -- FAILED. > [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" > (1) is building > "C:\Builds\SilverChip\Working\sc_website.metaproj" > (4) on node 1 (default targets). > [exec] Build: > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll" > to "sc.website\\Bin\sc.lib.dll". > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\SubSonic.Core.dll" > to > "sc.website\\Bin\SubSonic.Core.dll". > [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.pdb" > to "sc.website\\Bin\sc.lib.pdb". 
> [exec] C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe > -v /sc.website -p sc.website\ -u -f -d PrecompiledWeb\sc.website\ > [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or > assembly 'System.Data' or one of its > dependencies. An attempt was made to > load a program with an incorrect > format. > [C:\Builds\SilverChip\Working\sc_website.metaproj] > [exec] Done Building Project "C:\Builds\SilverChip\Working\sc_website.metaproj" > (default targets) -- FAILED. > [exec] Done Building Project "C:\Builds\SilverChip\Working\Silverchip.sln" > (default targets) -- FAILED. > [exec] > [exec] Build FAILED. > [exec] > [exec] "C:\Builds\SilverChip\Working\Silverchip.sln" > (default target) (1) -> > [exec] "C:\Builds\SilverChip\Working\sc_admin.metaproj" > (default target) (3) -> > [exec] (Build target) -> > [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file > or assembly 'System.Data' or one of > its dependencies. An attempt was made > to load a program with an incorrect > format. > [C:\Builds\SilverChip\Working\sc_admin.metaproj] > [exec] > [exec] > [exec] "C:\Builds\SilverChip\Working\Silverchip.sln" > (default target) (1) -> > [exec] "C:\Builds\SilverChip\Working\sc_website.metaproj" > (default target) (4) -> > [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file > or assembly 'System.Data' or one of > its dependencies. An attempt was made > to load a program with an incorrect > format. > [C:\Builds\SilverChip\Working\sc_website.metaproj] > [exec] > [exec] 0 Warning(s) > [exec] 2 Error(s) > [exec]

    Read the article

  • Know more about shared pool subpool

    - by Liu Maclean(???)
    ????T.askmaclean.com???Shared Pool?SubPool?????,????????_kghdsidx_count ? subpool ??subpool????( ???duration)???: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> set linesize 200 pagesize 1400 SQL> show parameter kgh NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ _kghdsidx_count                      integer                          7 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11783.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11783.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036110 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x60036110 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f938 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f938 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049160 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049160 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052988 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052988 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c1b0 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c1b0 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659d8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659d8 HEAP DUMP heap name="sga heap(7,0)"  desc=0x6006f200 FIVE LARGEST SUB HEAPS for heap name="sga heap(7,0)"   desc=0x6006f200 SQL> alter system set "_kghdsidx_count"=6 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  859832320 bytes Fixed Size                  2100104 bytes Variable Size             746587256 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11908.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11908.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360f0 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x600360f0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f918 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f918 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049140 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049140 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052968 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052968 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c190 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c190 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659b8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659b8 SQL> SQL> alter system set "_kghdsidx_count"=2 scope=spfile; System altered. SQL> SQL> startup force; ORACLE instance started. 
Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12003.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12003.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d SQL> alter system set cpu_count=16 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL>  oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12065.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12065.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d8 SQL> show parameter sga_target NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ sga_target                           big integer                      0 SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL>  oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12148.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12148.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(1,1)"  desc=0x60037ee8 HEAP DUMP heap name="sga heap(1,2)"  desc=0x60039740 HEAP DUMP heap name="sga heap(1,3)"  desc=0x6003af98 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 HEAP DUMP heap name="sga heap(2,1)"  desc=0x60041710 HEAP DUMP heap name="sga heap(2,2)"  desc=0x60042f68 _enable_shared_pool_durations:?????????10g????shared pool duration??,?????sga_target?0?????false; ???10.2.0.5??cursor_space_for_time???true??????false,???10.2.0.5??cursor_space_for_time????? SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile; System altered. 
SQL> SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12233.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options\ [oracle@vrh8 dbs]$ grep "sga heap"   /s01/admin/G10R25/udump/g10r25_ora_12233.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 ??:1._kghdsidx_count ??? shared pool subpool???, _kghdsidx_count???????7 ??? 7? shared pool subpool 2.??????? subpool???4? sub partition ?: sga heap(1,0) sga heap(1,1) sga heap(1,2) sga heap(1,3) ????? cpu??? ?????_kghdsidx_count, ???? ?10g ?AUTO SGA ??? shared pool duration???, duration ??4?: Session duration Instance duration (never freed) Execution duration (freed fastest) Free memory ??? shared pool duration???? ?10gR1?Shared Pool?shrink??????????,?????????????Buffer Cache???????????granule,????Buffer Cache?granule????granule header?Metadata(???buffer header??RAC??Lock Elements)????,?????????????????????shared pool????????duration(?????)?chunk??????granule?,????????????granule??10gR2????Buffer Cache Granule????????granule header?buffer?Metadata(buffer header?LE)????,??shared pool???duration?chunk????????granule,??????buffer cache?shared pool??????????????10gr2?streams pool?????????(???????streams pool duration????) reference : http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1.

Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.

EF and inheritance

The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table per class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or used as little as possible.

Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once.

Ok, this is bad. The EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance.

Bypassing inheritance with Interfaces

The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity would be.

public enum EntityTypes{ Unknown = -1, Dog = 0, Cat }
public interface IEntity
{
    int EntityID { get; }
    string Name { get; }
    EntityTypes EntityType { get; }
}
public partial class Dog : IEntity
{
    // implement EntityID and Name which could actually be fields
    // from your EF model
    public EntityTypes EntityType { get { return EntityTypes.Dog; } }
}

Using this IEntity, you can then work with undefined associations in other classes:

// lets take a class that you defined in your model.
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

Another way is to build an EntitySQL query piece by piece, like we all did with SQL.

protected List<Dog> GetSomeDogs( string name, int age )
{
    string query = "select value dog from Entities.DogSet where 1 = 1 ";
    if( !String.IsNullOrEmpty(name) )
        query = query + " and dog.Name = @Name ";
    if( age > 0 )
        query = query + " and dog.Age = @Age ";

    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
    if( !String.IsNullOrEmpty(name) )
        oQuery.Parameters.Add( new ObjectParameter( "Name", name ) );
    if( age > 0 )
        oQuery.Parameters.Add( new ObjectParameter( "Age", age ) );
    return oQuery.ToList();
}

Here the problems are:
- there is no syntax checking during compilation
- each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search.
- no one likes to concatenate strings!

Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results:

protected List<Dog> GetSomeDogs( string name, int age, string city )
{
    string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City ";
    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
    oQuery.Parameters.Add( new ObjectParameter( "City", city ) );

    List<Dog> dogs = oQuery.ToList();
    if( !String.IsNullOrEmpty(name) )
        dogs = dogs.Where( it => it.Name == name ).ToList();
    if( age > 0 )
        dogs = dogs.Where( it => it.Age == age ).ToList();
    return dogs;
}

It is particularly useful when you start by displaying all the data and then allow for filtering.

Problems:
- Could lead to serious data transfer if you are not careful about your subset.
- You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
- Use lambda-based query building when you don't care about pre-compiling your queries.
- Use fully-defined pre-compiled Linq queries when your object structure is not too complex.
- Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits).
- Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking It depends on the queries. Some are much more succeptible to tracking than other. I don't have a fast an easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking object. The first one is that the object is cached, so subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntity object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanism could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happens. Also remember the piece above about having the associations connected automatically for your? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link to between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Lets say you have you Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call that very often. Second, each different pages will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first request for the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • Problem converting FBX file into XNB

    - by Dado
    I created a Monogame Content Project to convert assets into XNB. For an FBX file without a texture there is no problem: the file is correctly converted and when I load the XNB into my project everything is ok. The problem occurs when the FBX file has a texture map associated with it: in this case both the FBX and PNG files are converted to XNB, but when I try to load these XNB files into my project the following problem occurs: "ContentLoadException: Could not load Models/maze1 asset as a non-content file!" Note: maze1 is the XNB file that was converted from FBX. How can I solve this problem? Thank you in advance

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components.

Three-Tier Client/Server Model Components
Client Component
Server Component
Database Component

The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web enabled device that is connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and an example of this can be seen through the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

The Server Components of the network return data based on specific client requests back to the requesting client. Server Components also inherit the attributes of a Client Component in that they are a device on the network and that they can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets the request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret.

The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed.

The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another.

Three-Tier Application Architecture Model
Presentation Layer/Logic
Business Layer/Logic
Data Layer/Logic

The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute the requests across multiple servers. When a user comes in through the load balanced device they are redirected to a specific server based on a few factors.
Common Load Balancing Factors
Current Server Availability
Current Server Response Time
Current Server Priority

(A minimal, illustrative sketch of factor-based server selection appears at the end of this article.)

The Business Layer and corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before it is returned to the end-user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected.

The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in modern applications data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take on one of several forms.

Common Database Formats
XML File
Pipe Delimited File
Tab Delimited File
Comma Delimited File (CSV)
Plain Text File
Microsoft Access
Microsoft SQL Server
MySql
Oracle
Sybase

The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically allows for a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to worry about keeping all the data in memory. This prevents the overhead that could be created when an application must retain all data in memory.

As you can see, the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network.

Network Components and Application Layers Interaction
Database components will store all data needed for the Data Access Layer to manipulate and return to the Business Layer
Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer
Client Component hosts the Presentation Layer that interprets the data and presents it to the user
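To make the load-balancing factors listed above concrete, here is a minimal sketch of how a load-balancing device might pick a target server. The class and field names are purely illustrative assumptions, not taken from any specific product: unavailable servers are skipped, the highest-priority server is preferred, and response time breaks ties.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class BackendServer {
    final String host;
    final boolean available;     // Current Server Availability
    final long responseTimeMs;   // Current Server Response Time
    final int priority;          // Current Server Priority (lower value = higher priority)

    BackendServer(String host, boolean available, long responseTimeMs, int priority) {
        this.host = host;
        this.available = available;
        this.responseTimeMs = responseTimeMs;
        this.priority = priority;
    }
}

class LoadBalancer {
    // Pick the target server for the next request: unavailable servers are filtered out,
    // then servers are compared by priority and, on a tie, by observed response time.
    static Optional<BackendServer> choose(List<BackendServer> servers) {
        return servers.stream()
                .filter(s -> s.available)
                .min(Comparator.comparingInt((BackendServer s) -> s.priority)
                        .thenComparingLong(s -> s.responseTimeMs));
    }
}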

    Read the article

< Previous Page | 169 170 171 172 173 174 175 176 177 178 179 180  | Next Page >