Search Results

Search found 5586 results on 224 pages for 'global illumination'.


  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through Infiniband. Virtual Machines were mapped to Solaris Zones and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology —a.k.a. Project Crossbow— can be used to control outbound traffic from Solaris Zones. Solaris Zones from both the Intranet and Extranet use an Infiniband network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LU exported by the Storage Appliance.  Non-global zones are installed on these iSCSI LU. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones, or even worse, to the global zones as all the zones are reachable from this node. One solution consists in using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the Infiniband, a flow can be created for TCP traffic on port 22, thereby a flow for the ssh traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place. Here are the instructions to create a Crossbow flow as used in Schema 1 : (GZ)# zoneadm -z zonename halt ...halts the Solaris Zone. (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter  ...creates a flow on the IB partition "iblink" used by the zone to connect to the Infiniband.  This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'.  The flow is created on port 22, for the TCP traffic with a zero maximum bandwidth.  The name given to the flow is "sshFilter". (GZ)# zoneadm -z zonename boot  ...restarts the Solaris zone now that the flow is in place.Solaris Zones and Solaris Network Virtualization enable SSH access control on Infiniband (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the Infiniband switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris such as IPFilter and Role Based Access Control, and that can be used to tackle security challenges.
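    For reference, here is the command sequence from the case study gathered into one block. The zone name is a placeholder, and the IB partition name ("iblink") must be replaced with the link found by intersecting 'zonecfg -z zonename info net' and 'dladm show-part'; the show-flow verification step is an addition for convenience, not part of the original instructions.

        # Run from the global zone (GZ)
        zoneadm -z myzone halt                       # stop the zone first

        # Cap TCP port 22 (SSH) at zero bandwidth on the zone's IB link
        flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter

        flowadm show-flow -l iblink                  # confirm the flow is in place
        zoneadm -z myzone boot                       # restart the zone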

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, configured 6in4 tunnel server and IPv6 forwarding on the Linode: # uname -a Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default link/sit 106.185.41.115 peer 1.2.3.4 inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever 13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever // Docker containers are bridged to docker0 On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker container: # uname -a Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default link/sit 0.0.0.0 peer 106.185.41.115 inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether *** brd ff:ff:ff:ff:ff:ff .. snipped .. inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1 # Add route to docker container # ip -6 route .. snipped unrelated routes fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472 fd00::/64 dev sit-argo proto kernel metric 256 fd00:0:1::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sit-argo proto kernel metric 256 (Note that tunnel MTU on my local box is different from the server, this is intentional for testing) After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that I couldn't get reply any more. If I run ip -6 route in a few seconds after expiration, expiration time of the host route will be a negative number: fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec And output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things: Set MTU of tunnel on my local box to be the same as the server (1472). The route won't have expiration time in both ip -6 route and ip route get in this case. Instead of adding a host route, add a route with network mask (even /127 works). 
In this case, ip -6 route shows the route without an expiration time, while ip route get shows one, but it is refreshed correctly after expiration. Although the problem is easy to work around, I am curious why it happens. Is there an error in my configuration, or is this a kernel bug?
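    As a sketch, the two workarounds above look like this on the local box (interface and addresses taken from the question):

        # Workaround 1: match the local tunnel MTU to the server's (1472),
        # so the host route no longer carries expiring PMTU state.
        ip link set dev sit-argo mtu 1472

        # Workaround 2: install a /127 prefix route instead of a plain host route;
        # its expiration is then refreshed correctly.
        ip route replace fc00::1875:8606:d8c1:8a9d/127 via fd00::1 dev sit-argo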

    Read the article

  • EJB3 JNDI Lookup Failure in JEE application client

    - by Hank
    I'm trying to access an EJB3 from a JEE client-application, but keep getting nothing but lookup failures. My JEE Application 'CoreServer' is exposing a number of beans with remote interfaces. I have no problem accessing them from a Web Application deployed on the same Glassfish v3.0.1. Now I'm trying to access it from a client-application: public class Main { public static void main(String[] args) { CampaignControllerRemote bean = null; try { InitialContext ctx = new InitialContext(); bean = (CampaignControllerRemote) ctx.lookup("java:global/CoreServer/CampaignController"); } catch (Exception e) { System.out.println(e.getMessage()); } if (bean != null) { Campaign campaign = bean.get(361); if (campaign != null) { System.out.println("Got "+ campaign); } } } } When I run deploy it to Glassfish and run it from the appclient, I get this error: Lookup failed for 'java:global/CoreServer/CampaignController' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=localhost,orb'sInitialPort=3700 However, that's exactly the same JNDI-name I use when I lookup the bean from the WebApplication (via SessionContext, not InitialContext - does that matter?). Also, when I deploy 'CoreServer', Glassfish reports: Portable JNDI names for EJB CampaignController : [java:global/CoreServer/CampaignController!mvs.api.CampaignControllerRemote, java:global/CoreServer/CampaignController] Glassfish-specific (Non-portable) JNDI names for EJB CampaignController : [mvs.api.CampaignControllerRemote, mvs.api.CampaignControllerRemote#mvs.api.CampaignControllerRemote] I tried all four names, none worked. Is the appclient unable to access beans with (only) Remote interfaces?
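    One pattern worth noting for GlassFish application clients (a sketch, not a confirmed fix for this case): the appclient container can inject a remote bean into a static field of the main class, which sidesteps the manual InitialContext lookup entirely.

        import javax.ejb.EJB;

        public class Main {

            // Injected by the application client container when the client is launched.
            @EJB
            private static CampaignControllerRemote bean;

            public static void main(String[] args) {
                if (bean != null) {
                    Campaign campaign = bean.get(361);
                    System.out.println("Got " + campaign);
                }
            }
        }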

    Read the article

  • Unable to change the value of the variable

    - by Legend
    I'm using a discrete event simulator called ns-2 that was built using Tcl and C++. I was trying to write some code in TCL: set ns [new Simulator] set state 0 $ns at 0.0 "puts \"At 0.0 value of state is: $state\"" $ns at 1.0 "changeVal" $ns at 2.0 "puts \"At 2.0 values of state is: $state\"" proc changeVal {} { global state global ns $ns at-now "set state [expr $state+1]" puts "Changed value of state to $state" } $ns run Here's the output: At 0.0 value of state is: 0 Changed value of state to 0 At 2.0 values of state is: 0 The value of state does not seem to change. I am not sure if I am doing something wrong in using TCL. Anyone has an idea as to what might be going wrong here? EDIT: Thanks for the help. Actually, ns-2 is something over which I do not have much control (unless I recompile the simulator itself). I tried out the suggestions and here's the output: for the code: set ns [new Simulator] set state 0 $ns at 0.0 "puts \"At 0.0 value of state is: $state\"" $ns at 1.0 "changeVal" $ns at 9.0 "puts \"At 2.0 values of state is: $state\"" proc changeVal {} { global ns set ::state [expr {$::state+1}] $ns at-now "puts \"At [$ns now] changed value of state to $::state\"" } $ns run the output is: At 0.0 value of state is: 0 At 1 changed value of state to 1 At 2.0 values of state is: 0 And for the code: set ns [new Simulator] set state 0 $ns at 0.0 "puts \"At 0.0 value of state is: $state\"" $ns at 1.0 "changeVal" $ns at 9.0 "puts \"At 2.0 values of state is: $state\"" proc changeVal {} { global ns set ::state [expr {$::state+1}] $ns at 1.0 {puts "At 1.0 values of state is: $::state"} } $ns run the output is: At 0.0 value of state is: 0 At 1.0 values of state is: 1 At 2.0 values of state is: 0 Doesn't seem to work... Not sure if its a problem with ns2 or my code...
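    The values never appear to change because the double-quoted event strings are substituted when the "$ns at" lines are executed (at time 0), so the event scheduled for 2.0 already contains the literal 0. Braces defer substitution until the event actually fires; a sketch, assuming ns-2 evaluates scheduled commands at global scope (the poster's third experiment suggests it does):

        set ns [new Simulator]
        set state 0

        # Braces, not quotes: $state is read when each event fires, not when it is scheduled.
        $ns at 0.0 {puts "At [$ns now] value of state is: $state"}
        $ns at 1.0 "changeVal"
        $ns at 2.0 {puts "At [$ns now] value of state is: $state"}

        proc changeVal {} {
            global state
            incr state
            puts "Changed value of state to $state"
        }

        $ns run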

    Read the article

  • Redirect to default action in Struts 2

    - by topher-j
    I have an action with an empty string for name defined in the root namespace, and I want to redirect to that action from another action if a certain result is found, but it doesn't seem to work. Here's the default action <action name="" class="com.example.actions.HomeAction"> <result name="success" type="freemarker">freemarker/home.ftl</result> </action> And I'm defining the redirect in the global-results for the package: <global-results> <result name="sendToRouting" type="redirectAction"> <param name="actionName"></param> <param name="namespace">/</param> </result> </global-results> I've tried taking out the actionName parameter, but that doesn't work. If I put a name in for the HomeAction and reference it by name in the global-results it works, so I'm assuming the problem is lack of action name for the redirect. Any thoughts?
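    Since giving HomeAction a name reportedly works, one sketch (the action and result names are placeholders) is to register a named alias for the same action and point the global result at that alias, while keeping the empty-named mapping for the root URL:

        <!-- Named alias used as a redirect target -->
        <action name="home" class="com.example.actions.HomeAction">
            <result name="success" type="freemarker">freemarker/home.ftl</result>
        </action>

        <!-- Existing default (empty-named) mapping for the root URL -->
        <action name="" class="com.example.actions.HomeAction">
            <result name="success" type="freemarker">freemarker/home.ftl</result>
        </action>

        <global-results>
            <result name="sendToRouting" type="redirectAction">
                <param name="actionName">home</param>
                <param name="namespace">/</param>
            </result>
        </global-results>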

    Read the article

  • How to implement "business rules" in Rails?

    - by Zabba
    What is the way to implement "business rules" in Rails? Let us say I have a car and want to sell it: car = Cars.find(24) car.sell car.sell method will check a few things: does current_user own the car? check: car.user_id == current_user.id is the car listed for sale in the sales catalog? check: car.catalogs.ids.include? car.id if all o.k. then car is marked as sold. I was thinking of creating a class called Rules: class Rules def initialize(user,car) @user = user @car = car end def can_sell_car? @car.user_id == @user.id && @car.catalogs.ids.include? @car.id end end And using it like this: def Car def sell if Rules.new(current_user,self).can_sell_car ..sell the car... else @error_message = "Cannot sell this car" nil end end end As for getting the current_user, I was thinking of storing it in a global variable? I think that whenever a controller action is called, it's always a "fresh" call right? If so then storing the current user as a global variable should not introduce any risks..(like some other user being able to access another user's details) Any insights are appreciated! UPDATE So, the global variable route is out! Thanks to PeterWong for pointing out that global variables persist! I've now thinking of using this way: class Rules def self.can_sell_car?(current_user, car) ......checks.... end end And then calling Rules.can_sell_car?(current_user,@car) from the controller action. Any thoughts on this new way?
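    A sketch of the class-method version from the update, with current_user passed in from the controller rather than read from any shared state (the "sold" column is hypothetical):

        class Rules
          # Stateless check: nothing is remembered between requests.
          def self.can_sell_car?(user, car)
            car.user_id == user.id && car.catalogs.ids.include?(car.id)
          end
        end

        class Car < ActiveRecord::Base
          def sell_by(user)
            return false unless Rules.can_sell_car?(user, self)
            update_attribute(:sold, true)  # hypothetical column marking the sale
          end
        end

        # In the controller action:
        #   @car = Car.find(params[:id])
        #   @car.sell_by(current_user)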

    Read the article

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make the changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x reload-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have the all the toggles re-evaluated without having to specify them, I would prefer. In any case, I would also appreciate if someone told me how to toggle the value of a variable so that if I just changed one toggle I can do it with a command rather than re-start Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question): (setq skeleton-pair t) (setq skeleton-pair-on-word t) (global-set-key (kbd "[") 'skeleton-pair-insert-maybe) (global-set-key (kbd "(") 'skeleton-pair-insert-maybe) (global-set-key (kbd "{") 'skeleton-pair-insert-maybe) (global-set-key (kbd "<") 'skeleton-pair-insert-maybe) Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented all of it out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e by PP works if I am using it to evaluate the toggle first time, but not when I want to undo it). I would like something that would evaluate the commenting out, if such thing exists... Thanks :)
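    For these particular settings, the quickest way to undo them without restarting Emacs is to evaluate the inverse forms (with M-:, or by putting point after each form and pressing C-x C-e); a sketch:

        ;; Turn the skeleton-pair behaviour back off...
        (setq skeleton-pair nil)
        (setq skeleton-pair-on-word nil)
        ;; ...and give the keys back their ordinary self-inserting behaviour.
        (global-set-key (kbd "[") 'self-insert-command)
        (global-set-key (kbd "(") 'self-insert-command)
        (global-set-key (kbd "{") 'self-insert-command)
        (global-set-key (kbd "<") 'self-insert-command)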

    Read the article

  • How can I prevent default_environment variables from getting set by Capistrano's sudo action?

    - by Logan Koester
    My deploy.rb sets some environment variables to use the regular user's local Ruby rather than the system-wide one. set :default_environment, { :PATH => '/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games', :RUBY_VERSION => 'ruby-1.9.1-p378', :GEM_HOME => '/home/myapp/.rvm/gems/ruby-1.9.1-p378', :GEM_PATH => '/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global' } Naturally, when a task is using sudo, I would expect the system-wide ruby to be used instead. But it seems the environment variables are being set anyway, which is obviously invalid for the root user and returns an error: executing "sudo -p 'sudo password: ' /etc/init.d/god stop" servers: ["myapp.com"] [myapp.com] executing command command finished failed: "env PATH=/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games RUBY_VERSION=ruby-1.9.1-p378 GEM_HOME=/home/myapp/.rvm/gems/ruby-1.9.1-p378 GEM_PATH=/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/god stop'" on myapp.com It makes no difference whether I use capistrano's sudo "system call" or the regular run "sudo system call". How can I avoid this?
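    One approach that may work, assuming Capistrano 2 lets a task read and mutate the :default_environment variable it set earlier (an assumption, not verified against this setup): clear it around the sudo call and restore it afterwards.

        # Hypothetical sketch for deploy.rb
        task :stop_god do
          saved = fetch(:default_environment).dup
          fetch(:default_environment).clear            # build this command without the env prefix
          begin
            sudo "/etc/init.d/god stop"
          ensure
            fetch(:default_environment).merge!(saved)  # put the rvm environment back
          end
        end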

    Read the article

  • How to access hidden template in unnamed namespace?

    - by Johannes Schaub - litb
    Here is a tricky situation, and i wonder what ways there are to solve it namespace { template <class T> struct Template { /* ... */ }; } typedef Template<int> Template; Sadly, the Template typedef interferes with the Template template in the unnamed namespace. When you try to do Template<float> in the global scope, the compiler raises an ambiguity error between the template name and the typedef name. You don't have control over either the template name or the typedef-name. Now I want to know whether it is possible to: Create an object of the typedefed type Template (i.e Template<int>) in the global namespace. Create an object of the type Template<float> in the global namespace. You are not allowed to add anything to the unnamed namespace. Everything should be done in the global namespace. This is out of curiosity because i was wondering what tricks there are for solving such an ambiguity. It's not a practical problem i hit during daily programming.

    Read the article

  • How do you detect when the Flowplayer has started playing the video?

    - by Alex
    Hi. I'm using the Flowplayer Flash video player to play MP4 videos inside an AnythingSlider. I need to detect when the user has clicked the start button on the video so to stop the slideshow and allow the user to view the video. I've tried using this code just to get an alert box but it doesn't do anything (the code is at the end of the page, before the closing </body> tag): <script type="text/javascript"> $f("player","global/js/flowplayer/flowplayer-3.1.5.swf",{ onStart: function(clip) { alert('player started'); } }); </script> The Flowplayer is implemented as part of the html5media library, which automatically detects whether the browser can handle the <video> tag; if not, it replaces the anchor tag below with the Flowplayer: <video id="vid" width="372" height="209" poster="global/vid/bbq-poster.jpg" controls preload> <source id="videoSource" src="global/vid/BBQ_Festival.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"' onerror="fallback(this.parentNode)" ></source> <a href="global/vid/BBQ_Festival.mp4" style="display:block;width:372px;height:209px; padding-left:5px; " id="player"> </a> </video> So the $f() function call is in a JavaScript block at the end of the page, to give the player a chance to load correctly. What am I doing wrong? Thanks.

    Read the article

  • In Visual Studio (2008) is there a way to have a custom dependent file on another custom file?

    - by rball
    Instead of a *.cs code behind or beside I'd like to have a *.js file. I'm developing a MVC application an have no need for a code beside because I have controllers, but in certain cases it'd be nice to have a JavaScript code beside or some way to associate the file to the page it's being used on. I suppose I could just name them similarly, but I'm wanting to show the association if possible so there's no question about what the file is for. Typically what I'm talking about is within Visual Studio now under your Global.asax file you will have a plus sign to the left: + Global.asax Once you expand it you'll get - Global.asax Global.asax.cs I'd like the same thing to happen: + Home.spark - Home.spark Home.spark.js Updated: My existing csproj file has a path to the actual file, not sure if that's screwing it up. I've currently got: <ItemGroup> <Content Include="Views\User\Profile.spark.js"> <DependentUpon>Views\User\Profile.spark</DependentUpon> </Content> </ItemGroup> <ItemGroup> <Content Include="Views\User\Profile.spark" /> </ItemGroup> and it's simply just showing the files besides each other.
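    In MSBuild project files, DependentUpon is commonly interpreted relative to the folder of the item that declares it, so one thing worth trying (a sketch, not verified against VS2008) is the bare file name, with both items in the same ItemGroup:

        <ItemGroup>
          <Content Include="Views\User\Profile.spark" />
          <Content Include="Views\User\Profile.spark.js">
            <DependentUpon>Profile.spark</DependentUpon>
          </Content>
        </ItemGroup>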

    Read the article

  • Sharing runtime variables between files

    - by nightcracker
    I have a project with a few files that all include the header global.hpp. Those files want to share and update information that is relevant for the whole program during runtime (that data is gathered progressively during the program runs but the fields of data are known at compile-time). Now my idea was to use a struct like this: global.hpp #include <string> #ifndef _GLOBAL_SESSION_STRUCT #define _GLOBAL_SESSION_STRUCT struct session_struct { std::string username; std::string password; std::string hostname; unsigned short port; // more data fields as needed }; #endif extern struct session_struct session; main.cpp #include "global.hpp" struct session_struct session; int main(int argc, char* argv[]) { session.username = "user"; session.password = "secret"; session.hostname = "example.com"; session.port = 80; // other stuff, etc return 0; } Now every file that includes global.hpp can just read & write the fields of the session struct and easily share information. Is this the correct way to do this? NOTE: For this specific project no threading is used. But please (for future projects and other people reading) clarify in your answer how this (or your proposed) solution works when threaded. Also, for this example/project session variables are shared. But this should also apply to any other form of shared variables.
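    For the single-threaded case the extern struct is fine as long as exactly one translation unit defines it. If a later project is threaded, every read and write of the shared struct needs synchronization; a minimal C++11 sketch, assuming a plain mutex is acceptable:

        // global.hpp (threaded variant, sketch)
        #ifndef GLOBAL_SESSION_HPP
        #define GLOBAL_SESSION_HPP
        #include <mutex>
        #include <string>

        struct session_struct {
            std::string username;
            std::string password;
            std::string hostname;
            unsigned short port;
        };

        extern session_struct session;
        extern std::mutex session_mutex;   // lock this before touching 'session'
        #endif

        // main.cpp defines the objects exactly once:
        //   session_struct session;
        //   std::mutex session_mutex;
        // and every access is wrapped:
        //   { std::lock_guard<std::mutex> lock(session_mutex); session.username = "user"; }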

    Read the article

  • Why is Log4Net not creating log file in production?

    - by uriDium
    I am using VS2005, a website project, a web deployment project and Log4Net. I can use logging when I am developing locally. I can see the log files and everything is fine. When I build my website, (using the web deployment project), I use the deploy as a single DLL option. When I then check the locations of where my log files should be I cannot see any files. Is there a way to troubleshoot this. I don't think adding the debug value to the App Settings will help because I don't have a console because it is a website. EDIT I don't want the 150 rep to go to waste so one last time. I compared the internal trace from my dev environment to the trace from the production. My dev environment trace shows the call the Xml Configurator where the production one does not. I have code in the global.asax on application_start() method. I put debug code in there and it is getting called in dev but not in production. I think this is where the web deployment project is causing some issues. Does the global.asax get compiled into the single DLL? When I do a build in the deployment directory I see a global.compiled file. Must this go into the bin folder in production? Or is the global.asax code in the single DLL? Having both in the bin folder or the just the DLL didn't change anything.
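    One way to narrow this down (a sketch; the paths are placeholders): configure log4net explicitly in Application_Start instead of relying on an assembly-level attribute, since the deployment project may compile things differently, and turn on log4net's internal debugging so its diagnostics go somewhere readable without a console.

        // Global.asax / Global.asax.cs
        protected void Application_Start(object sender, EventArgs e)
        {
            // Point log4net at the deployed Web.config (or wherever the log4net section lives).
            log4net.Config.XmlConfigurator.Configure(
                new System.IO.FileInfo(Server.MapPath("~/Web.config")));
        }

    Adding the appSettings key log4net.Internal.Debug with value "true" and attaching a TextWriterTraceListener under system.diagnostics writes log4net's internal trace to a file, which substitutes for the missing console.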

    Read the article

  • Strange code behaviour?

    - by goldenmean
    Hi, I have C code in which a structure declaration contains an array of int[576]. For some reason, I had to remove this array from the structure, so I replaced it with a pointer, int *ptr;, declared a global array of the same type elsewhere in the code, and initialized the pointer by assigning the global array to it. That way I did not have to change how the array was accessed from other parts of the code. The code works fine and gives the desired output when the array is declared inside the structure, but it gives junk output when I declare a pointer in the structure and assign a global array to that pointer as part of the initialization. All this code is run on MS-VC 6.0/Windows/Intel x86. I tried the following: 1) I suspected structure padding/alignment but could not get any leads. If structure alignment is the culprit, how can I narrow it down and confirm it? 2) I made sure that in both cases the array is initialized to default values, say 0, before its first use, and that it is not used before initialization. 3) I tried a global array as well as malloc-based memory for this newly declared array. Same result: junk output. Am I missing something? How can I zero in on the problem? Any pointers would be helpful. Thanks, -AD.
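    A minimal sketch of the pattern described (array storage moved out of the struct, pointer assigned before first use). A common source of junk output after this change is code that still clears or copies the structure with the old layout in mind, which now wipes out the pointer itself:

        #include <string.h>

        #define BUF_LEN 576

        struct foo {
            int *ptr;                  /* was: int arr[576]; the struct size shrinks */
            /* ... other fields ... */
        };

        static int g_buffer[BUF_LEN];  /* storage now lives outside the struct */
        static struct foo f;

        void init_foo(void)
        {
            f.ptr = g_buffer;                     /* hook up the pointer first        */
            memset(f.ptr, 0, sizeof g_buffer);    /* clear the data, not sizeof f.ptr */
            /* and do NOT memset(&f, 0, sizeof f) after this point -
               that would zero the pointer and later accesses read garbage */
        }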

    Read the article

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. This is the basics for the original code: for i in range(0, 2000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : c = "Port %d: OPEN\n" % (i,) s.close() This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project so it's nothing too extreme, but I've ran the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong or what? import threading from socket import * import time a = 0 b = 0 c = "" d = "" def ScanLow(): global a global c for i in range(0, 1000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : c = "Port %d: OPEN\n" % (i,) s.close() a += 1 def ScanHigh(): global b global d for i in range(1001, 2000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : d = "Port %d: OPEN\n" % (i,) s.close() b += 1 Target = raw_input("Enter Host To Scan:") TargetIP = gethostbyname(Target) print "Start Scan On Host ", TargetIP Start = time.time() threading.Thread(target = ScanLow).start() threading.Thread(target = ScanHigh).start() e = a + b while e < 2000: f = raw_input() End = time.time() - Start print c print d print End g = raw_input()
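    A corrected sketch of the two-thread version (Python 2, matching the original). The main problem in the posted code is that e = a + b is evaluated once, so the while loop never sees the counters advance; joining the threads is simpler, and collecting results in a list avoids keeping only the last open port.

        import time
        import threading
        from socket import socket, AF_INET, SOCK_STREAM, gethostbyname

        open_ports = []   # appended to from both threads

        def scan(target_ip, first, last):
            for port in range(first, last):
                s = socket(AF_INET, SOCK_STREAM)
                s.settimeout(1.0)                      # keep closed ports from stalling the scan
                if s.connect_ex((target_ip, port)) == 0:
                    open_ports.append(port)
                s.close()

        target = raw_input("Enter Host To Scan:")
        target_ip = gethostbyname(target)
        start = time.time()

        low = threading.Thread(target=scan, args=(target_ip, 0, 1000))
        high = threading.Thread(target=scan, args=(target_ip, 1000, 2000))
        low.start(); high.start()
        low.join(); high.join()                        # wait for both scans instead of busy-polling a stale sum

        for port in sorted(open_ports):
            print "Port %d: OPEN" % port
        print "Scan finished in %.1f seconds" % (time.time() - start)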

    Read the article

  • Can't call an object method. PHP reports variable undefined

    - by user1285697
    This is the weirdest bug! It is probably something silly, but I have no idea how to fix it. If anyone could help, I would be most grateful! I have three files, one is called items.php, another is called tableFunctions.php, and the third is called mysql.php. I use two global objects called 'mysql' and 'tableFunctions'. They are stored in the files 'mysql.php', and 'tableFunctions.php', respectively. In each file, I create an instance of its object, assigning it to the global variable $_mysql, or $_table. like this: In the file mysql.php: global $_mysql; $_mysql = new mysql(); In the file tableFunctions.php: global $_table; $_table = new tableFunctions(); Here's how it is supposed to work: The items.php file includes the tableFunctions.php file... Which in turn, needs the mysql.php file, so it includes it too. In the items.php file, I call the method getTable(), which is contained in the object tableFunctions.(and in the variable $_table.) Like this: $t = $_table->getTable('items'); The getTable function calls the method, arrayFromResult(), which is contained within in the object mysql.(and in the variable $_mysql.) Like this: $result = $_mysql->arrayFromResult($r); That's where I get the error. PHP says that the variable '$_mysql' is undefined, but I defined it in the 'mysql.php' file.(see above) I also included mysql.php with the following code: include_once 'mysql.php'; I have no idea what is wrong! If anyone can help that would be much appreciated. The source files can be downloaded with the following link: https://www.dropbox.com/sh/bjj2gyjsybym89r/YLxqyNvQdn
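    In PHP, a function or method body does not see file-level variables unless it declares them with global (or reads them via $GLOBALS), so the likely missing piece is a global declaration inside the tableFunctions method that uses $_mysql. A sketch with a hypothetical method body (the query step is invented for illustration):

        // tableFunctions.php (sketch)
        include_once 'mysql.php';

        class tableFunctions
        {
            public function getTable($name)
            {
                global $_mysql;   // without this line, $_mysql is an undefined local here

                $r = $_mysql->query("SELECT * FROM `$name`");   // hypothetical query step
                return $_mysql->arrayFromResult($r);
            }
        }

        global $_table;
        $_table = new tableFunctions();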

    Read the article

  • using IntentExatras with Alarm Manager

    - by Ashwin
    I want to know if this code will work (I cannot try it out right now; moreover, I have a few doubts that need to be cleared up). Intent intent = new Intent(context, AlarmReceiver.class); intent.putExtra("user",global.getUsername()); intent.putExtra("password",global.getPassword()); PendingIntent sender = PendingIntent.getBroadcast(context, 192837, intent, PendingIntent.FLAG_UPDATE_CURRENT); // Get the AlarmManager service Log.v("inside log_run", "new service started"); AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE); am.setRepeating(AlarmManager.RTC_WAKEUP, IMMEDIATELY,60000,sender); finish(); As you can see, this code starts an AlarmManager with setRepeating(). If you look at the intent (actually the PendingIntent) passed to the BroadcastReceiver, two extras are passed along. These are global variables that live as long as the application is running. But this AlarmManager is meant to run in the background (that is, the application will be alive only for the first few calls of the AlarmManager to the BroadcastReceiver). My question: will AlarmManager make a copy of the global variables (the username and password) and keep that copy to be passed along with the intent? These values will be used in the BroadcastReceiver.
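    putExtra copies the two String values into the Intent's extras Bundle at the moment it is called, and that Bundle travels with the PendingIntent held by the system, so the receiver gets those copies even after the application's own variables are gone (note that FLAG_UPDATE_CURRENT replaces the extras of an existing matching PendingIntent). A sketch of the receiving side:

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.util.Log;

        public class AlarmReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                // These are the copies made when putExtra() was called,
                // not live references to the application's global variables.
                String user = intent.getStringExtra("user");
                String password = intent.getStringExtra("password");
                Log.v("AlarmReceiver", "alarm fired for user " + user);
            }
        }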

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initalization time: %ld usecs/connect\n", (numcharlookups 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Login as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initalization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm.  The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work. Release 8  Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8.  In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and the with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business.  Take a look yourself at what’s available now. What are people saying?Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.”  In an interview with Lilley for a blog post by Dennis Howlett  (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture.  In closing, Howlett’s said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content.  From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs.  When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. 
No training and no explanation required.” Finally, in a story by Computer Weekly  about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.”  I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.  We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and are now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."  Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke. 
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas.   We closed with an excellent reception hosted by ADP Payroll services at Bistango. Want to read more?Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site.  You can also find out where we’ll be next at the Events page on UsableApps.

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that this debate regards whether and how the EU should recognize standards and specifications from certain fora and consortia based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make: a)"Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based". b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants". c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)". d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards." e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on". f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation." g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful." First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and lack of perspective. I will back this up by three facts, and leave it there. 1. Since the 1980s, global industry fora and consortia, such as IETF, W3C and OASIS have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante. - Practically no ICT system can be built without using fora and consortia standards (FCS). - Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate. 
- FCS are of high relevance for achieving and promoting interoperability and driving innovation. 2. FCS are complementary to the formally recognized standards organizations including the ESOs. - No work will be taken away from the ESOs should the EU recognize certain FCS. - Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say. - ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS. - ETSI, also an ESO, seems to clearly recognize these facts. 3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia. - W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host and in 2003, ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board includes Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumni of the same Advisory Board. - OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, who is an Independent Consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave. - IETF: The oversight of its activities is by the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo whose affiliation is the University Carlos III of Madrid, Spain as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture. - Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality. - A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU registered companies or not. CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking from a publicly funded organization. 
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default FilterAttribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your Exceptions, it's pretty sweet! Global Filters The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute());     } HandleErrorAttributes The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using Filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute         {             ExceptionType = typeof(DbException),             // DbError.cshtml is a view in the Shared folder.             View = "DbError",             Order = 2         });         filters.Add(new HandleErrorAttribute());     }Error Views All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error can not have a specified model! This is because they already have a model, and that is System.Web.Mvc.HandleErrorInfo @model System.Web.Mvc.HandleErrorInfo           @{     ViewBag.Title = "DbError"; } <h2>A Database Error Has Occurred</h2> @if (Model != null) {     <p>@Model.Exception.GetType().Name<br />     thrown in @Model.ControllerName @Model.ActionName</p> }Errors Outside of the MVC Pipeline The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline. 
<system.web>
  <customErrors mode="On" defaultRedirect="~/error">
    <error statusCode="404" redirect="~/error/notfound"></error>
  </customErrors>
</system.web>
Sample Controllers
public class ExampleController : Controller
{
    public ActionResult Exception()
    {
        throw new ArgumentNullException();
    }
    public ActionResult Db()
    {
        // Inherits from DbException
        throw new MyDbException();
    }
}
public class ErrorController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
    public ActionResult NotFound()
    {
        return View();
    }
}
Putting It All Together
If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:
1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view.
2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view.
3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.
But don't take my word for it, download the sample project and try it yourself.
Three Important Lessons Learned
For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for:
1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception!
2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, so there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and a route for it. But then, when you render that error view from your action, you can only use the HandleErrorInfo model or the ViewData dictionary to populate your page.
3) The HandleErrorAttribute obeys whether CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors.
In Summary
The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.
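As a concrete illustration of the second lesson above, here is one possible shape of a shared error route and controller. This is a sketch rather than code from the article: the route name and the ViewData key are illustrative, and it assumes the shared Error view in the Shared folder null-checks its HandleErrorInfo model.
public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Route used only by the customErrors redirects in web.config (~/error, ~/error/notfound).
        routes.MapRoute(
            "Errors",                                  // illustrative route name
            "error/{action}",
            new { controller = "Error", action = "Index" }
        );
    }
}
public class ErrorController : Controller
{
    public ActionResult Index()
    {
        // No custom model here: the shared Error view expects HandleErrorInfo,
        // so any extra text goes through ViewData instead.
        ViewData["Message"] = "An unexpected error occurred.";   // illustrative key
        return View("Error");
    }
}
With a route like this, both the HandleErrorAttribute (rendering the view directly) and the web.config redirects (passing through ErrorController) end up on the same Shared/Error view.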

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with the functional specs. Later, performance testing shows that the queries perform poorly, and only then do developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times: the execution times went down from 4 seconds to 2 milliseconds!
E-commerce solution based on Oracle ATG – Endeca
In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. The corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discounts cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.
Starting point
The initial query was:
String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
    "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
    "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
Map<String, Object> params = new HashMap<String, Object>(10);
// Fill all parameters.
params.put("iLevelId", xxxx);
// Executing filter.
Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);
With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.
First round of optimizations
The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:
globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);
After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.
Second round of optimizations
In this optimization phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".
Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was better to use a different query that changed the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case:
String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";
Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.
Third and last round of optimizations
The third and final optimization was to introduce a composite index. This did mean, however, that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used by the LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.
The composite index was created as follows:
ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
MultiExtractor me = new MultiExtractor(ve);
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
globalDiscountsCache.addIndex(me, false, null);
And the final query was:
ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
MultiExtractor me = new MultiExtractor(ve);
// Fill composite parameters.
String SalesCompanyId = xxxx;
...
AndFilter composite = new AndFilter(
    new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
    new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);
Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.
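As a side note, the query explain plan used in the second round of optimizations can also be obtained programmatically. The sketch below is not from the original article: it assumes Coherence 3.7.1 or later, where the QueryRecorder aggregator is available, and the helper name is illustrative.
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.aggregator.QueryRecorder;

public class ExplainDiscountQuery {
    // Runs the filter in EXPLAIN mode: no entries are returned, only an estimate of
    // how effective each index is and the cost of each step of the query.
    public static void explain(NamedCache cache, Filter filter) {
        Object plan = cache.aggregate(filter, new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
        System.out.println(plan); // the returned QueryRecord pretty-prints the plan
    }
}
Called with the globalDiscountsCache and globalDiscountsFilter from the starting-point snippet, this prints roughly the per-index effectiveness and cost figures that guided the reordering described above.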

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you’re tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.
What is the CODE Keyboard?
The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.
Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those who didn’t start using computers until the mid-to-late 1990s have most likely always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.
Setting Up the CODE Keyboard
Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS/2 adapter, and a tool you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they’re huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down; the rubber feet work that well.
After you’ve secured the cable and adjusted it to your liking, there is one more task before you plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some DIP switches inside. The DIP switches are there to change hardware functions for various operating systems and keyboard layouts, and to enable or disable function keys. By toggling the DIP switches you can change the keyboard from QWERTY mode to Dvorak mode or Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 DIP switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire DIP switch configuration chart here. The quick start for Windows users is simple: double-check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key, as typically found on laptop keyboards). After adjusting the DIP switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).
Design, Layout, and Backlighting
The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won’t have to learn a new keyboard layout or spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, and Pause buttons, and in the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the switches that the keys themselves are attached to are mounted on a white-painted steel plate. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice, even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly half a pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath it, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use.
Examining the Keys
This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There is a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to press that key completely down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups), but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key-press sound when the key is fully depressed, but the key mechanism doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you’re not trying to break any words-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have some sort of rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words, you can’t out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB instead, you still get 6-key rollover (and Ctrl, Alt, and Shift don’t count towards the 6), so realistically you still won’t be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.
The Good, the Bad, and the Verdict
We’ve put the CODE keyboard through its paces: we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.
The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gunshot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The DIP switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and keyboard layouts. If you’re investing a chunk of change in a keyboard, it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.
The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.
The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
Whether you’re a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high, and the typing experience is so enjoyable, that you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money, as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard), there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Know more about shared pool subpool

    - by Liu Maclean(???)
    ????T.askmaclean.com???Shared Pool?SubPool?????,????????_kghdsidx_count ? subpool ??subpool????( ???duration)???: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> set linesize 200 pagesize 1400 SQL> show parameter kgh NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ _kghdsidx_count                      integer                          7 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11783.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11783.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036110 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x60036110 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f938 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f938 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049160 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049160 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052988 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052988 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c1b0 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c1b0 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659d8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659d8 HEAP DUMP heap name="sga heap(7,0)"  desc=0x6006f200 FIVE LARGEST SUB HEAPS for heap name="sga heap(7,0)"   desc=0x6006f200 SQL> alter system set "_kghdsidx_count"=6 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  859832320 bytes Fixed Size                  2100104 bytes Variable Size             746587256 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11908.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11908.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360f0 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x600360f0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f918 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f918 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049140 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049140 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052968 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052968 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c190 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c190 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659b8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659b8 SQL> SQL> alter system set "_kghdsidx_count"=2 scope=spfile; System altered. SQL> SQL> startup force; ORACLE instance started. 
Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12003.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12003.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d SQL> alter system set cpu_count=16 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL>  oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12065.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12065.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d8 SQL> show parameter sga_target NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ sga_target                           big integer                      0 SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL>  oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12148.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12148.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(1,1)"  desc=0x60037ee8 HEAP DUMP heap name="sga heap(1,2)"  desc=0x60039740 HEAP DUMP heap name="sga heap(1,3)"  desc=0x6003af98 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 HEAP DUMP heap name="sga heap(2,1)"  desc=0x60041710 HEAP DUMP heap name="sga heap(2,2)"  desc=0x60042f68 _enable_shared_pool_durations:?????????10g????shared pool duration??,?????sga_target?0?????false; ???10.2.0.5??cursor_space_for_time???true??????false,???10.2.0.5??cursor_space_for_time????? SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile; System altered. 
SQL> SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12233.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"   /s01/admin/G10R25/udump/g10r25_ora_12233.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8
Conclusions: 1. _kghdsidx_count determines the number of shared pool subpools; with _kghdsidx_count set to 7, the heap dump shows 7 "sga heap(n,0)" subpools. 2. When durations are enabled, each subpool is further split into 4 sub-partitions (durations): sga heap(1,0), sga heap(1,1), sga heap(1,2), sga heap(1,3). By default the number of subpools is derived from the CPU count, without _kghdsidx_count having to be set explicitly. In 10g, with automatic SGA management, the shared pool uses durations, of which there are 4: session duration, instance duration (never freed), execution duration (freed fastest), and free memory. The durations exist to make shrinking the shared pool practical. In 10gR1 a granule could only be handed back to the buffer cache once it was completely free, and because chunks with different lifetimes were mixed inside the same granule, entire granules rarely became free; buffer cache granules also required the granule header and metadata (buffer headers and, in RAC, Lock Elements) to be contiguous within the granule. In 10gR2 chunks of the same duration are allocated from the same granule, and a buffer cache granule no longer needs its granule header and buffer metadata (buffer headers and LEs) to be contiguous, which makes transferring granules between the buffer cache and the shared pool much easier. From 10gR2 onwards the streams pool is managed in a similar way (with its own streams pool durations). Reference: http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html
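As an alternative to dumping the heap, the number of subpools and durations can also be inferred from the shared pool LRU statistics. This is a sketch from memory rather than part of the original post: the x$kghlu fixed table has roughly one row per subpool/duration combination, and the column names used here (kghluidx for the subpool index, kghludur for the duration) are assumptions that should be verified against your own release.
-- Run as SYS; column names assumed from memory, verify on your release.
select kghluidx as subpool#, kghludur as duration#
  from x$kghlu
 order by 1, 2;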

    Read the article

  • Problems with DNS propagation 10 days after a change was made

    - by runlevel6
    The engineering team I work with has been in the process of moving equipment from one datacenter to another. Ten days ago we moved one of our name servers authoritative for our client's domains (ns1.faithhiway.com) and updated its IP address with its respective DNS provider (register.com) to point to the new datacenter. All tests done show that this name server is correctly running at its new location and when queried, returning the correct response for any domains it is responsible for. The problem is that well after 72 hours had gone by we were still seeing more DNS activity at its old IP address than at the new. The good news is that we kept a name server responding on the old IP address for the time being so we are not seeing any issues with the domains our nameserver is responsible for but the goal is to retire that as soon as possible. As you can see from WhatsMyDNS.net, a decent amount of propagation has occurred over the last 10 days since we made this change, but still there are some locations reporting our original IP. Considering that the TTL is only 3600 with the name servers responsible for this domain, it does not make any sense to myself or the other engineers working with me that we are having this issue. Now if I run a DNS check using one of the Register.com DNS servers (direct nameservers for faithhiway.com), I get the following (correct) result: # dig @dns01.gpn.register.com ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @dns01.gpn.register.com. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43232 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 5 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. ;; ADDITIONAL SECTION: dns01.gpn.register.com. 3600 IN A 98.124.192.1 dns02.gpn.register.com. 3600 IN A 98.124.197.1 dns03.gpn.register.com. 3600 IN A 98.124.193.1 dns04.gpn.register.com. 3600 IN A 69.64.145.225 dns05.gpn.register.com. 3600 IN A 98.124.196.1 ;; Query time: 50 msec ;; SERVER: 98.124.192.1#53(98.124.192.1) ;; WHEN: Thu Jan 27 15:16:57 2011 ;; MSG SIZE rcvd: 269 Just as a reference, here are the results when the same query is checked against a variety of Public DNS servers: Google: # dig @8.8.8.8 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @8.8.8.8. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12773 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 997 IN A 206.127.2.71 ;; Query time: 29 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 27 15:17:31 2011 ;; MSG SIZE rcvd: 52 Level 3: # dig @4.2.2.1 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @4.2.2.1. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46505 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 
2623 IN A 206.127.2.71 ;; Query time: 7 msec ;; SERVER: 4.2.2.1#53(4.2.2.1) ;; WHEN: Thu Jan 27 15:18:35 2011 ;; MSG SIZE rcvd: 52 Verizon: # dig @151.197.0.38 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @151.197.0.38. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32658 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; Query time: 81 msec ;; SERVER: 151.197.0.38#53(151.197.0.38) ;; WHEN: Thu Jan 27 15:19:15 2011 ;; MSG SIZE rcvd: 52 Cisco: # dig @64.102.255.44 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @64.102.255.44. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39689 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. ;; Query time: 105 msec ;; SERVER: 64.102.255.44#53(64.102.255.44) ;; WHEN: Thu Jan 27 15:20:05 2011 ;; MSG SIZE rcvd: 165 OpenDNS: # dig @208.67.222.222 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @208.67.222.222. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12328 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169507 IN A 207.200.19.162 ;; Query time: 6 msec ;; SERVER: 208.67.222.222#53(208.67.222.222) ;; WHEN: Thu Jan 27 15:19:29 2011 ;; MSG SIZE rcvd: 52 SpeakEasy: # dig @66.93.87.2 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @66.93.87.2. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9342 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169323 IN A 207.200.19.162 ;; Query time: 69 msec ;; SERVER: 66.93.87.2#53(66.93.87.2) ;; WHEN: Thu Jan 27 15:19:51 2011 ;; MSG SIZE rcvd: 52 As you can see above, the majority of queries are returning the correct result. But a few (OpenDNS and SpeakEasy in the examples above) are still showing the old IP address. Considering the length of time that has gone by, it seems obvious to me that either we have made a mistake and not thoroughly handled the DNS changes on our end (likely) or there is a problem with either the DNS provider for this domain (Register) or with some of the DNS servers out in the wild (rather unlikely). Any advice on how I can proceed with this? UPDATE (January 31, 2011): First of all, I apologize for the length of both the original question and this update. I contemplated removing some of the excess from the original post but just in case this problem and its solution are helpful to someone else in the future I'm just going to leave everything as it is. 
Anyway, I've been doing some more research into this problem, and have discovered the following interesting occurrence. While running a check on the glue records for faithhiway.com always resolve correctly, if I go and check a client domain (where ns1.faithhiway.com is authoritative), I get a strange response. It looks like the root servers are returning nsX.faithhiway.com as their old IP addresses still (under Additional Section). Because we have a server still there responding to DNS queries, the trace finishes and returns the correct IP addresses as the final step (again, under Additional Section). The example below uses one of the domains that we use that uses ns1.faithhiway.com as its authoritative DNS server. # dig +trace +nosearch +all +norecurse ignitemail.com ; <<>> DiG 9.2.4 <<>> +trace +nosearch +all +norecurse ignitemail.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46856 ;; flags: qr ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;. IN NS ;; ANSWER SECTION: . 7986 IN NS a.root-servers.net. . 7986 IN NS b.root-servers.net. . 7986 IN NS c.root-servers.net. . 7986 IN NS d.root-servers.net. . 7986 IN NS e.root-servers.net. . 7986 IN NS f.root-servers.net. . 7986 IN NS g.root-servers.net. . 7986 IN NS h.root-servers.net. . 7986 IN NS i.root-servers.net. . 7986 IN NS j.root-servers.net. . 7986 IN NS k.root-servers.net. . 7986 IN NS l.root-servers.net. . 7986 IN NS m.root-servers.net. ;; Query time: 39 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 228 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16325 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. com. 172800 IN NS j.gtld-servers.net. ;; ADDITIONAL SECTION: a.gtld-servers.net. 172800 IN A 192.5.6.30 a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30 b.gtld-servers.net. 172800 IN A 192.33.14.30 b.gtld-servers.net. 172800 IN AAAA 2001:503:231d::2:30 c.gtld-servers.net. 172800 IN A 192.26.92.30 d.gtld-servers.net. 172800 IN A 192.31.80.30 e.gtld-servers.net. 172800 IN A 192.12.94.30 f.gtld-servers.net. 172800 IN A 192.35.51.30 g.gtld-servers.net. 172800 IN A 192.42.93.30 h.gtld-servers.net. 172800 IN A 192.54.112.30 i.gtld-servers.net. 172800 IN A 192.43.172.30 j.gtld-servers.net. 172800 IN A 192.48.79.30 k.gtld-servers.net. 172800 IN A 192.52.178.30 l.gtld-servers.net. 172800 IN A 192.41.162.30 ;; Query time: 64 msec ;; SERVER: 198.41.0.4#53(a.root-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 504 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12860 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: ignitemail.com. 172800 IN NS ns1.faithhiway.com. ignitemail.com. 172800 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 172800 IN A 207.200.19.162 ns2.faithhiway.com. 
172800 IN A 207.200.50.142 ;; Query time: 152 msec ;; SERVER: 192.54.112.30#53(h.gtld-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 111 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43016 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; ANSWER SECTION: ignitemail.com. 3600 IN A 206.127.2.64 ;; AUTHORITY SECTION: ignitemail.com. 3600 IN NS ns1.faithhiway.com. ignitemail.com. 3600 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 3600 IN A 206.127.2.71 ns2.faithhiway.com. 3600 IN A 206.127.2.72 ;; Query time: 25 msec ;; SERVER: 206.127.2.71#53(ns1.faithhiway.com) ;; WHEN: Mon Jan 31 09:22:18 2011 ;; MSG SIZE rcvd: 127 I really think this is a problem we have somewhere in our setup, but whether it is ignorance of something with DNS on my or my fellow engineer's end or just a dumb mistake we made, I have yet to find it.
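One additional check, assuming the stale answers really do come from the .com registry glue (host) records rather than from the faithhiway.com zone itself, is to ask a gTLD server directly and look at what it hands out:
# dig +norecurse @a.gtld-servers.net ns1.faithhiway.com A
If the additional section there still shows 207.200.19.162, the nameserver host (glue) records need to be updated through the registrar as well, not just the A records inside the zone; resolvers that cached the old glue, such as OpenDNS above, will keep returning it until that cached data expires.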

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >