Search Results

Search found 2071 results on 83 pages for 'parrot owner'.


  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
    POSTMORTEM
    Anticlimax: only thing that died was my VPN (openvpn) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting back ntp everywhere. If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second - he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1 second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem.

    Just today, Sat June 30th - starting soon after the start of the day GMT - we've had a handful of blades in different datacentres, as managed by different teams, all go dark - not responding to pings, screen blank. They're all running Debian Squeeze - with everything from the stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510 and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said:

        [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
        [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21.

    THE WORKAROUND
    Ok people, here's how I worked around it:
    1. Disabled ntp: /etc/init.d/ntp stop
    2. Created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco, see blog posts in comments)
    3. Ran fixtime.pl without an argument to see that there was a leap second set
    4. Ran fixtime.pl with an argument to remove the leap second
    NOTE: depends on adjtimex. I've put a copy of the squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex - it will run without dependencies on a squeeze 64 bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously if you don't have squeeze 64 bit... find your own. I'm going to start ntp again tomorrow. As an anonymous user suggested, an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leap second counter.
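    For readers who just want the gist of the workaround without fixtime.pl, a minimal sketch of the same idea is below. It assumes a Debian squeeze-style init script and the Debian adjtimex utility; the date step is the generic "set the time yourself" variant mentioned above, not the author's exact procedure.

        # Stop ntpd so it cannot re-arm the leap second flag
        /etc/init.d/ntp stop

        # Inspect the kernel timex state; STA_INS (0x0010) set in "status"
        # means a leap second insertion is still queued
        adjtimex --print | grep status

        # Setting the clock clears the queued leap second (the "set the
        # time yourself" alternative); here we simply set it to its
        # current value
        date -s "$(date)"

        # Re-enable ntp once the leap second moment has passed
        /etc/init.d/ntp start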

    Read the article

  • Windows 2008 R2 TS printer security - can't take ownership

    - by Ian
    I have a Windows 2008 R2 server with the Terminal Server role installed. I'm seeing a problem with an ordinary user who is a member of the local printer operators group on the server. If the user opens a cmd window using ‘run as administrator’ they can run printmanager.msc without needing to enter their password again. In printmanager they can change the ownership of redirected (Easy Print) printers without problems. If, from the same cmd window, they use subinacl to try and change the ownership of the queue to themselves, they get access denied:

        >subinacl.exe /printer "_#MyPrinter (2 redirected)" /setowner="MyDom\MyUsr"
        Elapsed Time: 00 00:00:00
        Done: 1, Modified 0, Failed 1, Syntax errors 0
        Last Done   : _#MyPrinter (2 redirected)
        Last Failed : _#MyPrinter (2 redirected) - OpenPrinter Error : 5 Access denied

    So: same context, same action, but one works and one doesn't. Any ideas about this odd behaviour? I'm using subinacl x86 on an x64 server as I can't find anything more up to date. I've tried icacls and others but couldn't get them to do anything with printers.

    EDIT: added after Greg's comments regarding setacl below. If I log into the TS server as Testusr, open Admin Tools > Printer Admin (as administrator) and then type mydomain\testusr and testusr's password, then I can change the ownership of the printer queue and set testusr as the owner. However, if I open cmd as administrator and, again, type mydomain\testusr and the user's password, when I try to change the ownership of my redirected printer I get the following:

        C:\>setacl -on "Bullzip PDF Printer (12 redireccionado)" -ot prn -actn setowner -ownr n:mydom\testusr
        WARNING: Privilege 'Back up files and directories' could not be enabled. SetACL's powers are restricted.
        WARNING: Privilege 'Restore files and directories' could not be enabled. SetACL's powers are restricted.
        INFORMATION: Processing ACL of: <Bullzip PDF Printer (12 redireccionado)>
        ERROR: Enabling the privilege SeTakeOwnershipPrivilege failed with: No todos los privilegios o grupos a los que se hace referencia son asignados al llamador. [meaning: not all referenced privileges or groups are assigned to the caller]
        SetACL finished with error(s):
        SetACL error message: A privilege could not be enabled

    Maybe I'm getting something wrong, but if the built-in Windows tool can do it with just membership of the 'print operators' group, then setacl should be able to as well, no? However setacl seems to depend on other privileges, which in reality are not required to do this.

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp.

        config.vm.provision "puppet" do |puppet|
          puppet.manifests_path = "puppet_files/manifests"
          puppet.module_path    = "puppet_files/modules"
          puppet.manifest_file  = "site.pp"
          puppet.options        = "--verbose --debug"
        end

    Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this:

        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }
        import "hierasetup.pp"
        include jboss

    But I get this error about the deprecation of "import":

        Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation
        (at grammar.ra:610:in `_reduce_190')

    According to the referenced URL, under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further, this puppet doc on main manifests says: "Recommended: If you're using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments."

    It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the puppet.manifest_file = "site.pp" line and point to the parent directory instead, in which all the *.pp files there will be executed. However, removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead:

        puppet provisioner:
        * The configured Puppet manifest is missing. Please specify a path to an existing manifest:
          /some/path/puppet_files/manifests/default.pp

    So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories, in conjunction with Puppet's deprecation of "import"?

    Update: Thanks to Shane, the issue with #2 (Vagrant's code not being caught up to allow pointing to Puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169
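    One way to get a whole directory of manifests applied, assuming a Vagrant release new enough to support Puppet directory environments (roughly the era of the fix referenced above), is to switch the provisioner to an environment layout. The snippet below is only a sketch; the environment name "vagrant" and the puppet_files/environments directory layout are assumptions, not details from the question.

        # Vagrantfile sketch: point the provisioner at a Puppet directory
        # environment instead of a single manifest file. Every *.pp under
        # puppet_files/environments/vagrant/manifests/ is then read as part
        # of the main manifest, with no need for "import".
        config.vm.provision "puppet" do |puppet|
          puppet.environment_path = "puppet_files/environments"
          puppet.environment      = "vagrant"
          puppet.module_path      = "puppet_files/modules"
          puppet.options          = "--verbose --debug"
        end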

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it.

    * Originally I created /dev/md0 but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    Question is, how to make the device active again (using mdadm, I presume)?

    (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0        /opt           ext4    defaults        0       0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?)

    This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
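    As a compact recap of the resolution, plus the usual follow-up so the array keeps its name across reboots, a sketch along these lines is what one would typically run. The update-initramfs step is an assumption for Ubuntu/Debian and is not mentioned in the question itself.

        # If the array shows up as inactive, stop the half-assembled device
        # and let mdadm reassemble it from the on-disk superblocks
        mdadm --stop /dev/md_d0
        mdadm --assemble --scan
        mount /opt

        # Record the array in mdadm.conf so it is assembled as /dev/md0 at boot
        mdadm --examine --scan >> /etc/mdadm/mdadm.conf

        # On Ubuntu/Debian the config is also baked into the initramfs
        update-initramfs -u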

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients.

    Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior also seem to be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone to do.

    This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660
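    A quick way to compare what Samba is actually applying with what ends up on disk after an OSX client creates a folder is sketched below. The share path comes from the config above; the folder name is purely hypothetical, and the FreeBSD getfacl step assumes the zfsacl VFS module is adding NFSv4 ACL entries.

        # Show the effective configuration Samba has loaded for the share
        testparm -s

        # After creating "NewFolderFromMac" from an OSX client, inspect what
        # actually landed on disk, including any NFSv4 ACL entries
        ls -ld /mnt/zfs0/NewFolderFromMac
        getfacl /mnt/zfs0/NewFolderFromMac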

    Read the article

  • Some questions regarding Hostname

    - by user481913
    I just bought a new VPS hosting plan and I have a few questions. I hope someone here can clear up the doubts for me.

    1) Is it necessary to have a real domain for a VPS hostname? I suppose I can just use a non-real domain like anydomain.com and something like 'server' for the computer name, so I'll end up with something like server.anydomain.com as the VPS's hostname. I want to do this for the sake of putting in a hostname to configure the VPS and get it going. So, since this non-real domain name does not need to be publicly accessible, I don't need to register or own it and can instead access the server by its IP address. Is that correct? But I suppose that this also depends on whether my web host allows that?

    2) I would also like to run some real sites with real domain names on this VPS, so can I just configure the zone file on the primary nameserver, make entries for these domains and point an A record at the VPS's IP to make them publicly accessible over the internet? For example, for my 1st domain I could make an entry like this:

        $TTL 86400
        mydomain1.com.      IN  SOA    ns1.mywebhost.com. admin.mydomain1.com. (
                                       2004011522 ; Serial no., based on date
                                       21600      ; Refresh after 6 hours
                                       3600       ; Retry after 1 hour
                                       604800     ; Expire after 7 days
                                       3600       ; Minimum TTL of 1 hour
                                       )
        server              IN  A      200._._._
        ns1.mywebhost.com.  IN  A      216._._._
        ns2.mywebhost.com.  IN  A      205._._._
        @                   IN  NS     ns1.mywebhost.com.
        @                   IN  NS     ns2.mywebhost.com.
        @                   IN  MX 10  server
        www                 IN  CNAME  server
        server              IN  CNAME  @

    (So this particular line tells the nameserver to point the URL mydomain1.com to server.anydomain.com at the particular IP address in the A record.... is that right?)

    Similarly for my 2nd domain I could have a similar entry:

        $TTL 86400
        mydomain2.com.      IN  SOA    ns1.mywebhost.com. admin.mydomain2.com. (
                                       ..... and so on .....
                                       )

    Is that correct?

    3) Suppose for my VPS hostname I ignorantly chose a domain that someone else already owns. However, I think that it won't affect the public accessibility of the real domain or website, since only the real owner of the domain has the right to provide the nameserver addresses in the TLD registries through his domain registrar. Is that correct?

    4) Can I change my VPS's hostname later? Would this create any complications?
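    For questions 1 and 2, a quick way to set the hostname and then confirm the public records resolve is sketched below. It assumes a Debian/Ubuntu-style VPS, and mydomain1.com, server.anydomain.com and the ns1/ns2 names are just the placeholders used above.

        # Set the VPS hostname; this name never has to resolve publicly
        hostname server.anydomain.com
        echo "server.anydomain.com" > /etc/hostname

        # Once the zone is published, check that the A, NS and MX records
        # are visible from outside
        dig +short A  mydomain1.com @ns1.mywebhost.com
        dig +short NS mydomain1.com
        dig +short MX mydomain1.com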

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible, that is why nothing is posted about it.

    Okay, so there will be a lot here, so please bear with me. I've been working on this now for a few hours (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where there are several other existing plugins and checks running and working. Now I want to add another server there to check, so I made the following modifications.

    First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all others. I have chowned it to nagios:nagios and in the listing it shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects that shows the following:

        # 'check_equallogic' command definition
        define command{
                command_name    check_equallogic
                command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
                }

    Following this, I created a file named equallogic.cfg in the objects directory and it contains (more or less):

        define host{
                use             linux-server    ; Inherit default values from a template
                host_name       172.16.50.11    ; The name we're giving to this device
                alias           EqualLogic      ; A longer name associated with the device
                address         172.16.50.11    ; IP address of the device
                contact_groups  admins
                }

        # Check EqualLogic Information
        define service{
                use                     generic-service
                host_name               172.16.50.11
                service_description     General Information
                check_command           check_equallogic!public!info
                }

    After ensuring that permissions are okay for all files, I restart the Nagios service with no errors. When I go into the WebGUI, I get the following error AFTER the check runs:

        (Return code of 127 is out of bounds - plugin may be missing)

    Extra, probably unrelated problem: furthermore, when I log into the EqualLogic server, under Audit logs I get the following error:

        Level: AUDIT
        Time: 26/05/2014 3:59:13 PM
        Member: ps4100-1
        Subsystem: agent
        Event ID: 22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason why I am mentioning it is because I want to make sure that it is only a MIB issue for the SNMP. If it is, then ignore this area. I am entirely unsure of what to do here.
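    Return code 127 normally means the command itself could not be run, so a useful first step is to execute the plugin exactly as Nagios would. A sketch is below; the arguments mirror the service definition above, and the dependency list is only a guess at what a typical check_equallogic script needs.

        # Run the check by hand as the nagios user, with the same arguments
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
        echo $?

        # 127 usually points at a missing interpreter or missing binary:
        head -n1 /usr/local/nagios/libexec/check_equallogic.sh   # check the shebang
        which snmpget snmpwalk bc                                # typical dependencies
        # note: the command_line above calls $USER1$/check_equallogic while the
        # file on disk is check_equallogic.sh, which could also explain a 127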

    Read the article

  • Puppet and launchd services?

    - by Joel Westberg
    We have a production environment configured with Puppet, and want to be able to set up a similar environment on our development machines: a mix of Red Hats, Ubuntus and OSX. As might be expected, OSX is the odd man out here, and sadly, I'm having a lot of trouble with getting this to work. My first attempt was using macports, using the following declaration:

        package { 'rabbitmq-server':
          ensure   => installed,
          provider => macports,
        }

    but this, sadly, generates the following error:

        Error: /Stage[main]/Rabbitmq/Package[rabbitmq-server]: Could not evaluate: Execution of '/opt/local/bin/port -q installed rabbitmq-server' returned 1: usage: cut -b list [-n] [file ...]
               cut -c list [file ...]
               cut -f list [-s] [-d delim] [file ...]
            while executing
        "exec dscl -q . -read /Users/$env(SUDO_USER) NFSHomeDirectory | cut -d ' ' -f 2"
            (procedure "mportinit" line 95)
            invoked from within
        "mportinit ui_options global_options global_variations"

    Next up, I figured I'd give homebrew a try. There is no package provider available by default, but puppet-homebrew seemed promising. Here, I got much farther, and actually managed to get the install to work.

        package { 'rabbitmq':
          ensure   => installed,
          provider => brew,
        }

        file { "plist":
          path   => "/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist",
          source => "/usr/local/opt/rabbitmq/homebrew.mxcl.rabbitmq.plist",
          ensure => present,
          owner  => root,
          group  => wheel,
          mode   => 0644,
        }

        service { "homebrew.mxcl.rabbitmq":
          enable   => true,
          ensure   => running,
          provider => "launchd",
          require  => [ File["/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist"] ],
        }

    Here, I don't get any error. But RabbitMQ doesn't start either (as it does if I do a manual load with launchctl):

        [... snip ...]
        Debug: Executing '/bin/launchctl list'
        Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist'
        Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /var/db/launchd.db/com.apple.launchd/overrides.plist'
        Debug: /Schedule[weekly]: Skipping device resources because running on a host
        Debug: /Schedule[puppet]: Skipping device resources because running on a host
        Debug: Finishing transaction 2248294820
        Debug: Storing state
        Debug: Stored state in 0.01 seconds
        Finished catalog run in 25.90 seconds

    What am I doing wrong?
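    When Puppet reports success but the daemon isn't running, it can help to compare what launchd itself thinks with what Puppet's launchd provider queried. A sketch of manual checks follows; nothing here is specific to the puppet-homebrew module, and the plist path is the one used above.

        # Validate the plist that Puppet copied into place
        plutil -lint /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist

        # See whether launchd knows about the job at all, and its last exit status
        sudo launchctl list | grep homebrew.mxcl.rabbitmq

        # Load it by hand the same way the provider would, watching for errors
        sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist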

    Read the article

  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab:

        http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following check on the status:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 is just running a free tier setup, so isn't that well spec'd. Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now - for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). This, though, disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading this could be the issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;    # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
          server_name git.mydom.co.uk;   # e.g., server_name source.example.com;
          server_tokens off;             # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # Or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look ok... I'm about at the end of my knowledge now!
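    Two quick checks usually narrow this down: whether the ENOMEM is simply lack of RAM on the free tier, and whether nginx is actually serving the vhost. A sketch follows; the 1 GB swap size and the security-group reminder are assumptions, not details from the question.

        # Give the small instance some swap so rake/bundler stop hitting ENOMEM
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile

        # Confirm the nginx config parses and something is listening on port 80
        sudo nginx -t
        sudo service nginx restart
        sudo netstat -tlnp | grep :80

        # Test from the instance itself; if this works but the browser doesn't,
        # check that the EC2 security group allows inbound port 80
        curl -I http://git.mydom.co.uk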

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes - and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results.

    Anyone got any ideas?

    Thanks, M
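    For completeness, the command-line equivalents of the stop/reset steps described above look roughly like this. It is only a sketch: /Volumes/MusicArchive stands in for the real volume name, and mdutil output differs slightly between 10.5 and 10.6.

        # Check the current indexing status for the volume
        sudo mdutil -s /Volumes/MusicArchive

        # Turn indexing off, erase the existing index store, then re-enable it
        sudo mdutil -i off /Volumes/MusicArchive
        sudo mdutil -E /Volumes/MusicArchive
        sudo mdutil -i on /Volumes/MusicArchive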

    Read the article

  • Nginx not working properly on subdomains [SOLVED]

    - by javipas
    I've been trying to set up a Sugar CRM instance. I've got a domain that has its main site on a server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I would need to set up a second site. To do this I've created the directory structure, and I've created a /etc/nginx/sites-enabled/sugar.domain.com configuration file that has the following:

        server {
            listen 80;
            server_name sugar.domain.com *.domain.com;

            access_log /var/www/sugar/log/access.log;
            error_log  /var/www/sugar/log/error.log info;

            location / {
                root  /var/www/sugar;
                index index.php;
            }

            location ~ .php$ {
                fastcgi_split_path_info ^(.+\.php)(.*)$;
                fastcgi_pass   backend;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param  QUERY_STRING    $query_string;
                fastcgi_param  REQUEST_METHOD  $request_method;
                fastcgi_param  CONTENT_TYPE    $content_type;
                fastcgi_param  CONTENT_LENGTH  $content_length;
                fastcgi_intercept_errors    on;
                fastcgi_ignore_client_abort on;
                fastcgi_read_timeout 180;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
        }

        upstream backend {
            server 127.0.0.1:9000;
        }

    As far as I know, I need the *.domain.com parameter on the "server_name" flag, but something is crashing here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set up the owner:group with a chown -R www-data:www-data /var/www/sugar/

    The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they have to be together necessarily?
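    Two details in configs like the one above often line up with exactly the 403-or-raw-PHP symptom, so a variant is sketched here for comparison: the regex dot is normally escaped, and with root declared only inside location /, the PHP location has no root of its own. This is offered as a general nginx pattern, not as a confirmed fix for this particular setup.

        server {
            listen 80;
            server_name sugar.domain.com;
            root  /var/www/sugar;       # root at server level so every location inherits it
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {         # escaped dot: match ".php", not "any character + php"
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_index index.php;
                fastcgi_pass  backend;
            }
        }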

    Read the article

  • Apache will not stop/start gracefully

    - by ddjammin
    CentOS 6 64-bit running Apache 2.2.15-29.el6.centos. When I try to stop/start or restart httpd I get an error that says it has failed. A tail of the error log is below. I also noticed that an httpd.pid file is not created even though it is configured in the main conf file. If I set SELinux to permissive, it works just fine, but I do not want to run it with SELinux disabled. If I delete the SSLMutex file it will start.

    httpd was running fine until I tried to add the SSL configuration. I copied over the ssl.conf file from a working server into the conf.d folder. I also copied an sslcert folder into the conf folder; it contains the certs, key, CSR and password file. I think the problem has to do with the SELinux context for the sslcert folder that was copied, but I am not certain and not sure how to fix it. Below is the security context for the sslcert folder after executing restorecon -R sslcert:

        # ls -Z
        -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 httpd.conf
        -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 magic
        drwxr-xr-x. root root system_u:object_r:httpd_config_t:s0 sslcert

        # tail -f /var/log/httpd/error_log
        [Thu Oct 17 13:33:19 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Oct 17 13:33:20 2013] [notice] Digest: generating secret for digest authentication ...
        [Thu Oct 17 13:33:20 2013] [notice] Digest: done
        [Thu Oct 17 13:33:20 2013] [warn] pid file /etc/httpd/logs/ssl.pid overwritten -- Unclean shutdown of previous Apache run?
        [Thu Oct 17 13:33:20 2013] [notice] Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
        [Thu Oct 17 21:04:48 2013] [notice] caught SIGTERM, shutting down
        [Thu Oct 17 21:06:42 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
        [Thu Oct 17 21:06:42 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Oct 17 21:06:42 2013] [error] (17)File exists: Cannot create SSLMutex with file `/etc/httpd/logs/ssl_mutex'

    I also saw mention of possible issues with semaphores. Below is the output of the current semaphores while Apache is not running:

        # ipcs -s
        ------ Semaphore Arrays --------
        key        semid      owner      perms      nsems
        0x00000000 0          root       600        1
        0x00000000 65537      root       600        1

    Finally, SELinux reports the following error:

        # sealert -a /var/log/audit/audit.log
        0% donetype=AVC msg=audit(1382034755.118:420400): avc: denied { write } for pid=3393 comm="httpd" name="ssl_mutex" dev=dm-0 ino=9513484 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:httpd_log_t:s0 tclass=file
        **** Invalid AVC allowed in current policy ***
        100% doneERROR: failed to read complete file, 1044649 bytes read out of total 1043317 bytes (/var/log/audit/audit.log)
        found 1 alerts in /var/log/audit/audit.log
        --------------------------------------------------------------------------------
        SELinux is preventing /usr/sbin/httpd from remove_name access on the directory ssl_mutex.
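    The AVC shows ssl_mutex carrying the httpd_log_t label, so the usual remedies are either restoring the default labels under /etc/httpd or building a small local policy module from the denial. A sketch of both follows, assuming default CentOS 6 paths; these are generic SELinux tools, not commands taken from the question.

        # Put the default SELinux labels back on the copied config and the logs directory
        restorecon -Rv /etc/httpd/conf /etc/httpd/conf.d /etc/httpd/logs

        # Or, if a denial remains, generate and load a local policy module from it
        grep ssl_mutex /var/log/audit/audit.log | audit2allow -M httpdsslmutex
        semodule -i httpdsslmutex.pp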

    Read the article

  • Segfaulting Java process

    - by zenmonkey
    I've a java process that is working on some large data set in memory. I've seen it crash with a SIGSEGV signal sometimes, so i was wondering some potential causes and fixes could do. Caues: - JVM bug - Native library bug (e.g pthreads etc) - JNI bug in user code Fixes: - Upgrade to new JVM In my particular case, this is the output form the log file (pruned) A fatal error has been detected by the Java Runtime Environment: # SIGSEGV (0xb) at pc=0x00002aaaaacd1b94, pid=32116, tid=1086544208 # JRE version: 6.0_14-b08 Java VM: Java HotSpot(TM) 64-Bit Server VM (14.0-b16 mixed mode linux-amd64 ) Problematic frame: C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154 # If you would like to submit a bug report, please visit: http://java.sun.com/webapps/bugreport/crash.jsp # --------------- T H R E A D --------------- Current thread (0x00002aacaad41000): WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00002aabc40008c0 Registers: RAX=0x0000000000000000, RBX=0x0000000000000000, RCX=0x0000000000000000, RDX=0x0000000000000002 RSP=0x0000000040c34cc0, RBP=0x0000000040c34d80, RSI=0x0000000000000001, RDI=0x00002aabc40008c0 R8 =0x00002aacaad42528, R9 =0x0000000000000000, R10=0x0000000040c34cd8, R11=0x0000000000000202 R12=0x0000000000000001, R13=0x0000000040c34d40, R14=0xffffffffffffff92, R15=0x00002aacaad42550 RIP=0x00002aaaaacd1b94, EFL=0x0000000000010246, CSGSFS=0x000000000000e033, ERR=0x0000000000000006 TRAPNO=0x000000000000000e Top of Stack: (sp=0x0000000040c34cc0) 0x0000000040c34cc0: 0000000000000000 00002aabc40008c0 0x0000000040c34cd0: 00002aacaad42528 0000000000000000 0x0000000040c34ce0: 0000000002fae0e0 0000000000000000 0x0000000040c34cf0: 00002aaaaacd1750 0000000040c34cc0 0x0000000040c34d00: 00002aacaad42528 0000000000000000 0x0000000040c34d10: 00002aacaad42528 00002aacaad42500 0x0000000040c34d20: 0000000000000032 00002aaaabadf876 0x0000000040c34d30: fffffffdaad40e80 0000000040c34d40 0x0000000040c34d40: 000000004bbb7166 0000000015f07098 0x0000000040c34d50: 0000000040c34d80 00138cd32df59cce 0x0000000040c34d60: 431bde82d7b634db 00002aacaad429c0 0x0000000040c34d70: 0000000000000032 00002aacaad429c0 0x0000000040c34d80: 0000000040c34e00 00002aaaabadda6d 0x0000000040c34d90: 0000000040c34da0 00002aacaad42500 0x0000000040c34da0: 00002aacaad429c0 00002aaa00000002 0x0000000040c34db0: 0000000000000001 0000000000000002 0x0000000040c34dc0: 0000000040c34dd0 00002aaaabb6f613 0x0000000040c34dd0: 0000000040c34e00 00002aacaad41000 0x0000000040c34de0: 0000000000000032 00002aacaad429c0 0x0000000040c34df0: 00002aacaad41000 0000000000001000 0x0000000040c34e00: 0000000040c34e60 00002aaaabbc39fb 0x0000000040c34e10: 0000000040c34e40 00002aaaabab868f 0x0000000040c34e20: 00002aacaad41000 00002aacaad42aa0 0x0000000040c34e30: 00002aacaad42aa0 00002aaaabe10630 0x0000000040c34e40: 00002aaaabe10630 00002aacaad42aa0 0x0000000040c34e50: 00002aacaad429c0 00002aacaad41000 0x0000000040c34e60: 0000000040c35130 00002aaaabadff9f 0x0000000040c34e70: 0000000000000000 0000000000000000 0x0000000040c34e80: 0000000000000000 0000000000000000 0x0000000040c34e90: 0000000000000000 0000000000000000 0x0000000040c34ea0: 0000000000000000 0000000000000000 0x0000000040c34eb0: 0000000000000000 0000000000000000 Instructions: (pc=0x00002aaaaacd1b94) 0x00002aaaaacd1b84: 88 22 00 00 48 8b 7c 24 08 be 01 00 00 00 31 c0 0x00002aaaaacd1b94: f0 0f b1 37 0f 85 e8 00 00 00 8b 57 2c 48 8b 47 Stack: [0x0000000040b35000,0x0000000040c36000], sp=0x0000000040c34cc0, 
free space=1023k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154 V [libjvm.so+0x594a6d] V [libjvm.so+0x67a9fb] V [libjvm.so+0x596f9f] --------------- P R O C E S S --------------- Java Threads: ( = current thread ) 0x00002aacaad3f000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=32140, stack(0x0000000040a34000,0x0000000040b35000)] 0x00002aacaad3c000 JavaThread "CompilerThread1" daemon [_thread_blocked, id=32139, stack(0x0000000040933000,0x0000000040a34000)] 0x00002aacaad37800 JavaThread "CompilerThread0" daemon [_thread_blocked, id=32138, stack(0x0000000040832000,0x0000000040933000)] 0x00002aacaad36800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=32137, stack(0x0000000040731000,0x0000000040832000)] 0x00002aacaab7d800 JavaThread "Finalizer" daemon [_thread_blocked, id=32136, stack(0x0000000040630000,0x0000000040731000)] 0x00002aacaab7b800 JavaThread "Reference Handler" daemon [_thread_blocked, id=32135, stack(0x000000004052f000,0x0000000040630000)] 0x0000000040115800 JavaThread "main" [_thread_blocked, id=32117, stack(0x000000004012b000,0x000000004022c000)] Other Threads: 0x00002aacaab75000 VMThread [stack: 0x000000004042e000,0x000000004052f000] [id=32134] =0x00002aacaad41000 WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141] VM state:at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event]) [0x0000000040112e80] Threads_lock - owner thread: 0x00002aacaab75000 [0x0000000040113380] Heap_lock - owner thread: 0x0000000040115800 Heap PSYoungGen total 1854528K, used 1029248K [0x00002aac025a0000, 0x00002aaca8340000, 0x00002aaca9040000) eden space 1029248K, 100% used [0x00002aac025a0000,0x00002aac412c0000,0x00002aac412c0000) from space 825280K, 0% used [0x00002aac412c0000,0x00002aac412c0000,0x00002aac738b0000) to space 812800K, 0% used [0x00002aac76980000,0x00002aac76980000,0x00002aaca8340000) PSOldGen total 4423680K, used 4423651K [0x00002aaab5040000, 0x00002aabc3040000, 0x00002aac025a0000) object space 4423680K, 99% used [0x00002aaab5040000,0x00002aabc3038fe8,0x00002aabc3040000) PSPermGen total 21248K, used 5848K [0x00002aaaafc40000, 0x00002aaab1100000, 0x00002aaab5040000) object space 21248K, 27% used [0x00002aaaafc40000,0x00002aaab01f61f0,0x00002aaab1100000) Dynamic libraries: 40000000-40009000 r-xp 00000000 08:01 313415 /usr/java/jdk1.6.0_14/bin/java 40108000-4010a000 rwxp 00008000 08:01 313415 /usr/java/jdk1.6.0_14/bin/java 4010a000-4012b000 rwxp 4010a000 00:00 0 [heap] 4012b000-4012e000 ---p 4012b000 00:00 0 4012e000-4022c000 rwxp 4012e000 00:00 0 4022c000-4022d000 ---p 4022c000 00:00 0 4022d000-4032d000 rwxp 4022d000 00:00 0 4032d000-4032e000 ---p 4032d000 00:00 0 4032e000-4042e000 rwxp 4032e000 00:00 0 4042e000-4042f000 ---p 4042e000 00:00 0 4042f000-4052f000 rwxp 4042f000 00:00 0 4052f000-40532000 ---p 4052f000 00:00 0 40532000-40630000 rwxp 40532000 00:00 0 40630000-40633000 ---p 40630000 00:00 0 40633000-40731000 rwxp 40633000 00:00 0 40731000-40734000 ---p 40731000 00:00 0 40734000-40832000 rwxp 40734000 00:00 0 40832000-40835000 ---p 40832000 00:00 0 40835000-40933000 rwxp 40835000 00:00 0 40933000-40936000 ---p 40933000 00:00 0 40936000-40a34000 rwxp 40936000 00:00 0 40a34000-40a37000 ---p 40a34000 00:00 0 40a37000-40b35000 rwxp 40a37000 00:00 0 40b35000-40b36000 ---p 40b35000 00:00 0 40b36000-40c36000 rwxp 40b36000 00:00 0 2aaaaaaab000-2aaaaaac6000 r-xp 00000000 08:01 49198 /lib64/ld-2.7.so 
2aaaaaac6000-2aaaaaac7000 rwxp 2aaaaaac6000 00:00 0 2aaaaaac7000-2aaaaaad0000 r-xs 0006d000 08:10 29851669 /mnt/home/jatten/workspace/common/build/lib/common.jar 2aaaaaad2000-2aaaaaad3000 rwxp 2aaaaaad2000 00:00 0 2aaaaaad3000-2aaaaaae0000 r-xp 00000000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaaae0000-2aaaaabdf000 ---p 0000d000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaabdf000-2aaaaabe2000 rwxp 0000c000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaabe2000-2aaaaac0a000 rwxp 2aaaaabe2000 00:00 0 2aaaaac0a000-2aaaaac0f000 r-xs 0003a000 08:10 30326840 /mnt/home/jatten/workspace/common_ml20010405/build/lib/common_ml.jar 2aaaaac0f000-2aaaaac12000 r-xs 00020000 08:10 29786222 /mnt/home/jatten/pagescorer.jar 2aaaaacc5000-2aaaaacc6000 r-xp 0001a000 08:01 49198 /lib64/ld-2.7.so 2aaaaacc6000-2aaaaacc7000 rwxp 0001b000 08:01 49198 /lib64/ld-2.7.so 2aaaaacc7000-2aaaaacdd000 r-xp 00000000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaacdd000-2aaaaaedc000 ---p 00016000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaedc000-2aaaaaedd000 r-xp 00015000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaedd000-2aaaaaede000 rwxp 00016000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaede000-2aaaaaee2000 rwxp 2aaaaaede000 00:00 0 2aaaaaee2000-2aaaaaee9000 r-xp 00000000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaaee9000-2aaaaafea000 ---p 00007000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaafea000-2aaaaafec000 rwxp 00008000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaafec000-2aaaaafee000 r-xp 00000000 08:01 49240 /lib64/libdl-2.7.so 2aaaaafee000-2aaaab1ee000 ---p 00002000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1ee000-2aaaab1ef000 r-xp 00002000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1ef000-2aaaab1f0000 rwxp 00003000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1f0000-2aaaab1f1000 rwxp 2aaaab1f0000 00:00 0 2aaaab1f1000-2aaaab33e000 r-xp 00000000 08:01 49219 /lib64/libc-2.7.so 2aaaab33e000-2aaaab53e000 ---p 0014d000 08:01 49219 /lib64/libc-2.7.so 2aaaab53e000-2aaaab542000 r-xp 0014d000 08:01 49219 /lib64/libc-2.7.so 2aaaab542000-2aaaab543000 rwxp 00151000 08:01 49219 /lib64/libc-2.7.so 2aaaab543000-2aaaab549000 rwxp 2aaaab543000 00:00 0 2aaaab549000-2aaaabca7000 r-xp 00000000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabca7000-2aaaabda6000 ---p 0075e000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabda6000-2aaaabf1e000 rwxp 0075d000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabf1e000-2aaaabf5c000 rwxp 2aaaabf1e000 00:00 0 2aaaabf67000-2aaaabfe9000 r-xp 00000000 08:01 49263 /lib64/libm-2.7.so 2aaaabfe9000-2aaaac1e8000 ---p 00082000 08:01 49263 /lib64/libm-2.7.so 2aaaac1e8000-2aaaac1e9000 r-xp 00081000 08:01 49263 /lib64/libm-2.7.so 2aaaac1e9000-2aaaac1ea000 rwxp 00082000 08:01 49263 /lib64/libm-2.7.so 2aaaac1ea000-2aaaac1f2000 r-xp 00000000 08:01 49283 /lib64/librt-2.7.so 2aaaac1f2000-2aaaac3f1000 ---p 00008000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f1000-2aaaac3f2000 r-xp 00007000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f2000-2aaaac3f3000 rwxp 00008000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f3000-2aaaac41c000 r-xp 00000000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac41c000-2aaaac51b000 ---p 00029000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac51b000-2aaaac522000 rwxp 00028000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac522000-2aaaac523000 ---p 2aaaac522000 
00:00 0 2aaaac523000-2aaaac524000 rwxp 2aaaac523000 00:00 0 2aaaac52d000-2aaaac542000 r-xp 00000000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac542000-2aaaac741000 ---p 00015000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac741000-2aaaac742000 r-xp 00014000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac742000-2aaaac743000 rwxp 00015000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac743000-2aaaac745000 rwxp 2aaaac743000 00:00 0 2aaaac745000-2aaaac74c000 r-xp 00000000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac74c000-2aaaac84d000 ---p 00007000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac84d000-2aaaac84f000 rwxp 00008000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac84f000-2aaaac850000 rwxp 2aaaac84f000 00:00 0 2aaaac850000-2aaaac858000 rwxs 00000000 08:01 229379 /tmp/hsperfdata_jatten/32116 2aaaac85b000-2aaaac865000 r-xp 00000000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaac865000-2aaaaca64000 ---p 0000a000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca64000-2aaaaca65000 r-xp 00009000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca65000-2aaaaca66000 rwxp 0000a000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca66000-2aaaaca74000 r-xp 00000000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaaca74000-2aaaacb76000 ---p 0000e000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaacb76000-2aaaacb79000 rwxp 00010000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaacb79000-2aaaacdea000 rwxp 2aaaacb79000 00:00 0 2aaaacdea000-2aaaafb7a000 rwxp 2aaaacdea000 00:00 0 2aaaafb7a000-2aaaafb84000 rwxp 2aaaafb7a000 00:00 0 2aaaafb84000-2aaaafc3a000 rwxp 2aaaafb84000 00:00 0 2aaaafc40000-2aaab1100000 rwxp 2aaaafc40000 00:00 0 2aaab1100000-2aaab5040000 rwxp 2aaab1100000 00:00 0 2aaab5040000-2aabc3040000 rwxp 2aaab5040000 00:00 0 2aac025a0000-2aaca8340000 rwxp 2aac025a0000 00:00 0 2aaca8340000-2aaca9040000 rwxp 2aaca8340000 00:00 0 2aaca9040000-2aaca904b000 rwxp 2aaca9040000 00:00 0 2aaca904b000-2aaca906a000 rwxp 2aaca904b000 00:00 0 2aaca906a000-2aaca98da000 rwxp 2aaca906a000 00:00 0 2aaca98da000-2aaca9ad4000 rwxp 2aaca98da000 00:00 0 2aaca9ad4000-2aacaa004000 rwxp 2aaca9ad4000 00:00 0 2aacaa004000-2aacaa00a000 rwxp 2aacaa004000 00:00 0 2aacaa00a000-2aacaa87b000 rwxp 2aacaa00a000 00:00 0 2aacaa87b000-2aacaaa76000 rwxp 2aacaa87b000 00:00 0 2aacaaa76000-2aacaaa81000 rwxp 2aacaaa76000 00:00 0 2aacaaa81000-2aacaaaa0000 rwxp 2aacaaa81000 00:00 0 2aacaaaa0000-2aacaaba0000 rwxp 2aacaaaa0000 00:00 0 2aacaaba0000-2aacaad36000 r-xs 02fb1000 08:01 315318 /usr/java/jdk1.6.0_14/jre/lib/rt.jar 2aacaad36000-2aacaaf36000 rwxp 2aacaad36000 00:00 0 2aacaaf36000-2aacaaf49000 r-xp 00000000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacaaf49000-2aacab04a000 ---p 00013000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacab04a000-2aacab04d000 rwxp 00014000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacab058000-2aacab05c000 r-xp 00000000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab05c000-2aacab25b000 ---p 00004000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25b000-2aacab25c000 r-xp 00003000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25c000-2aacab25d000 rwxp 00004000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25d000-2aacab26e000 r-xp 00000000 08:01 49282 /lib64/libresolv-2.7.so 2aacab26e000-2aacab46e000 ---p 00011000 08:01 49282 /lib64/libresolv-2.7.so 2aacab46e000-2aacab46f000 r-xp 00011000 08:01 49282 /lib64/libresolv-2.7.so 2aacab46f000-2aacab470000 rwxp 00012000 08:01 
49282 /lib64/libresolv-2.7.so 2aacab470000-2aacab572000 rwxp 2aacab470000 00:00 0 2aacab572000-2aacab57e000 r-xs 00081000 08:10 29851828 /mnt/home/jatten/workspace/common/lib/google-collect-1.0.jar 2aacab57e000-2aacab585000 r-xs 000aa000 08:10 29851946 /mnt/home/jatten/workspace/common/lib/mysql-connector-java-5.1.8-bin.jar 2aacab585000-2aacab58d000 r-xs 00028000 08:10 29851949 /mnt/home/jatten/workspace/common/lib/xml-apis.jar 2aacab58d000-2aacab591000 r-xs 0002f000 08:10 29851947 /mnt/home/jatten/workspace/common/lib/commons-beanutils-core-1.8.2.jar 2aacab591000-2aacab59e000 r-xs 0007f000 08:10 29851943 /mnt/home/jatten/workspace/common/lib/commons-collections-3.2.jar 2aacab59e000-2aacab5a3000 r-xs 00026000 08:10 29851942 /mnt/home/jatten/workspace/common/lib/httpcore-4.0.jar 2aacab5a3000-2aacab5a9000 r-xs 00030000 08:10 29851932 /mnt/home/jatten/workspace/common/lib/junit-dep-4.8.1.jar 2aacab5a9000-2aacab5ac000 r-xs 00011000 08:10 29851922 /mnt/home/jatten/workspace/common/lib/servlet.jar 2aacab5ac000-2aacab5ae000 r-xs 00009000 08:10 29851937 /mnt/home/jatten/workspace/common/lib/gsb.jar 2aacab5ae000-2aacab5b5000 r-xs 00059000 08:10 29851930 /mnt/home/jatten/workspace/common/lib/log4j-1.2.15.jar 2aacab5b5000-2aacab6b5000 rwxp 2aacab5b5000 00:00 0 2aacab6b5000-2aacab6b7000 r-xs 00009000 08:10 29851956 /mnt/home/jatten/workspace/common/lib/gsb-src.jar 2aacab6b7000-2aacab7b7000 rwxp 2aacab6b7000 00:00 0 2aacab7b7000-2aacab7cf000 r-xs 00115000 08:10 29851938 /mnt/home/jatten/workspace/common/lib/xercesImpl.jar 2aacab7cf000-2aacab7d1000 r-xs 00009000 08:10 29851957 /mnt/home/jatten/workspace/common/lib/velocity-tools-view-1.0.jar 2aacab7d1000-2aacab7d3000 r-xs 00009000 08:10 29851939 /mnt/home/jatten/workspace/common/lib/commons-cli-1.2.jar 2aacab7d3000-2aacab7d9000 r-xs 00034000 08:10 29851955 /mnt/home/jatten/workspace/common/lib/junit-4.8.1.jar 2aacab7d9000-2aacab7db000 r-xs 0000e000 08:10 29851917 /mnt/home/jatten/workspace/common/lib/jakarta-oro-2.0.8.jar 2aacab7db000-2aacab858000 r-xs 0031d000 08:10 29851916 /mnt/home/jatten/workspace/common/lib/poi-ooxml-schemas-3.6-20091214.jar 2aacab858000-2aacab85c000 r-xs 00028000 08:10 29851936 /mnt/home/jatten/workspace/common/lib/httpcore-nio-4.0.jar 2aacab85c000-2aacab85e000 r-xs 00005000 08:10 29851940 /mnt/home/jatten/workspace/common/lib/commons-beanutils-bean-collections-1.8.2.jar 2aacab85e000-2aacab864000 r-xs 00059000 08:10 29851919 /mnt/home/jatten/workspace/common/lib/mail-1.4.jar 2aacab864000-2aacab866000 r-xs 0000d000 08:10 29851950 /mnt/home/jatten/workspace/common/lib/commons-logging-1.1.1.jar 2aacab866000-2aacab86c000 r-xs 00045000 08:10 29851924 /mnt/home/jatten/workspace/common/lib/commons-httpclient-3.1.jar 2aacab86c000-2aacab877000 r-xs 00074000 08:10 29851931 /mnt/home/jatten/workspace/common/lib/velocity-dep-1.4.jar 2aacab877000-2aacab87f000 r-xs 00051000 08:10 29851954 /mnt/home/jatten/workspace/common/lib/velocity-1.4.jar 2aacab87f000-2aacab884000 r-xs 00034000 08:10 29851958 /mnt/home/jatten/workspace/common/lib/commons-beanutils-1.8.2.jar 2aacab884000-2aacab889000 r-xs 00048000 08:10 29851918 /mnt/home/jatten/workspace/common/lib/dom4j-1.6.1.jar 2aacab889000-2aacab8c6000 r-xs 0024f000 08:10 29851914 /mnt/home/jatten/workspace/common/lib/xmlbeans-2.3.0.jar 2aacab8c6000-2aacab8cb000 r-xs 00033000 08:10 29851929 /mnt/home/jatten/workspace/common/lib/xmemcached-1.2.3.jar 2aacab8cb000-2aacab8cd000 r-xs 00005000 08:10 29851928 /mnt/home/jatten/workspace/common/lib/org.hamcrest.core_1.1.0.v20090501071000.jar 
2aacab8cd000-2aacab8d0000 r-xs 0000a000 08:10 29851944 /mnt/home/jatten/workspace/common/lib/persistence-api-1.0.jar 2aacab8d0000-2aacab8d6000 r-xs 0005f000 08:10 29851926 /mnt/home/jatten/workspace/common/lib/poi-ooxml-3.6-20091214.jar 2aacab8d6000-2aacab8d7000 r-xs 0002b000 08:10 29851951 /mnt/home/jatten/workspace/common/lib/maxmind.jar 2aacab8d7000-2aacab8d8000 r-xs 00002000 08:10 29851935 /mnt/home/jatten/workspace/common/lib/jackson-jaxrs-1.2.0.jar 2aacab8d8000-2aacab8d9000 r-xs 00002000 08:10 29851913 /mnt/home/jatten/workspace/common/lib/slf4j-log4j12-1.5.6.jar 2aacab8d9000-2aacab8dd000 r-xs 00025000 08:10 29851945 /mnt/home/jatten/workspace/common/lib/yanf4j-1.1.1.jar 2aacab8dd000-2aacab8df000 r-xs 00003000 08:10 29851952 /mnt/home/jatten/workspace/common/lib/clickstream-1.0.2.jar 2aacab8df000-2aacab8e1000 r-xs 00004000 08:10 29851953 /mnt/home/jatten/workspace/common/lib/slf4j-api-1.5.6.jar 2aacab8e1000-2aacab8e9000 r-xs 0004d000 08:10 29851920 /mnt/home/jatten/workspace/common/lib/jackson-mapper-asl-1.2.0.jar 2aacab8e9000-2aacab8ed000 r-xs 0001f000 08:10 29851925 /mnt/home/jatten/workspace/common/lib/jackson-core-asl-1.2.0.jar 2aacab8ed000-2aacab8f1000 r-xs 0001b000 08:10 29851912 /mnt/home/jatten/workspace/common/lib/oscache-2.3.jar 2aacab8f1000-2aacab90c000 r-xs 0015d000 08:10 29851927 /mnt/home/jatten/workspace/common/lib/poi-3.6-20091214.jar 2aacab90c000-2aacab911000 r-xs 00040000 08:10 29851831 /mnt/home/jatten/workspace/common/lib/commons-lang-2.5.jar 2aacab911000-2aacab914000 r-xs 00012000 08:10 29851923 /mnt/home/jatten/workspace/common/lib/jgooglesafebrowser-0.1a.2.jar 2aacab914000-2aacab918000 r-xs 00023000 08:10 29851933 /mnt/home/jatten/workspace/common/lib/gson-1.3.jar 2aacab918000-2aacabb18000 rwxp 2aacab918000 00:00 0 2aacabb82000-2aacabd82000 rwxp 2aacabb82000 00:00 0 2aacabe05000-2aacaf204000 rwxp 2aacabe05000 00:00 0 7fffaa12a000-7fffaa141000 rwxp 7fffaa12a000 00:00 0 [stack] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vdso] VM Arguments: jvm_args: -Xmx8000M java_command: com.scorers.ModelImplementingPageScorer -t data/data/golds/adult.all.json -b 18 -s data/models/pagetext.binary. 
adult.april6.all.model -m com.models.MultiClassUpdateableModel -p 30 --goldsilver -v --cat adult --fakeinput -e /mnt/tmp/xyz.15647.pageo bjects.txt -o Launcher Type: SUN_STANDARD Environment Variables: JAVA_HOME=/usr/java/jdk1.6.0_14 PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/jatten/bin LD_LIBRARY_PATH=/usr/java/jdk1.6.0_14/jre/lib/amd64/server:/usr/java/jdk1.6.0_14/jre/lib/amd64:/usr/java/jdk1.6.0_14/jre/../lib/amd64 SHELL=/bin/bash Signal Handlers: SIGSEGV: [libjvm.so+0x6bd980], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGBUS: [libjvm.so+0x6bd980], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGFPE: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGPIPE: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGXFSZ: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGILL: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGUSR2: [libjvm.so+0x597480], sa_mask[0]=0x00000000, sa_flags=0x10000004 SIGHUP: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGINT: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGTERM: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGQUIT: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 --------------- S Y S T E M --------------- OS:Fedora release 8 (Werewolf) uname:Linux 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 libc:glibc 2.7 NPTL 2.7 rlimit: STACK 10240k, CORE 0k, NPROC 61504, NOFILE 1024, AS infinity load average:2.83 2.73 2.78 CPU:total 2 (4 cores per cpu, 1 threads per core) family 6 model 23 stepping 10, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1 Memory: 4k page, physical 7872040k(14540k free), swap 0k(0k free) vm_info: Java HotSpot(TM) 64-Bit Server VM (14.0-b16) for linux-amd64 JRE (1.6.0_14-b08), built on May 21 2009 01:11:11 by "java_re" with gcc 3.2.2 (SuSE Lin ux) [error occurred during error reporting (printing date and time), id 0xb]

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database, and they always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how a Snapshot database works, here is a quick note on the subject. However, please refer to the official description on Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the “source database”. This database operates at the page level. When a Snapshot database is created, it is created on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to the Snapshot database, which makes the sparse file size grow. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, only the pages that change in the source database end up being physically stored in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only – there are a few considerations of CPU, DISK I/O and memory, which will be discussed in future posts. Create Snapshot, Delete Data from Original DB, and Restore Data from Snapshot. First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Now let us delete something from the Original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, it copies the data pages to the Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the Original Source DB. 
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the select statement, and we can see that we have successfully restored the database from the Snapshot database. We can clearly see that this is a very useful feature in case your business ever needs it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. The complete script of the aforementioned operation is as follows, for easy reference: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Add Zune Desktop Player to Windows 7 Media Center

    - by DigitalGeekery
    Are you a Zune owner who prefers the Zune player for media playback? Today we’ll show you how to integrate the Zune player with WMC using Media Center Studio. You’ll need to download Media Center Studio and the Zune Desktop player software. (See download links below.) Also, make sure you have Media Center closed; some of the actions in Media Center Studio cannot be performed while WMC is open. Open Media Center Studio and click on the Start Menu tab at the top of the application. Click the Application button. Here we will create an Entry Point for the Zune player so that we can add it to Media Center. Type in a name for your entry point in the title text box. This is the name that will appear under the tile when added to the Media Center start menu. Next, type in the path to the Zune player. By default this should be C:\Program Files\Zune\Zune.exe. Note: Be sure to use the original path, not a link to the desktop icon. The Active image is the image that will appear on the tile in Media Center. If you wish to change the default image, click the Browse button and select a different image. Select Stop the currently playing media from the When launched do the following: dropdown list. Otherwise, if you open Zune player from WMC while playing another form of media, that media will continue to play in the background. Now we will choose a keystroke to use to exit the Zune player software and return to Media Center. Click on the green plus (+) button. When prompted, press the key you want to use to close the Zune player. Note: This may also work with your Media Center remote. You may want to set a keyboard keystroke as well as a button on your remote to close the program. You may not be able to set certain remote buttons to close the application. We found that the back arrow button worked well. You can also choose a keystroke to kill the program if desired. Be sure to save your work before exiting by clicking the Save button on the Home tab. Next, select the Start Menu tab and click on the arrow next to Entry points to reveal the available entry points. Find the Zune player tile in the Entry points area. We want to drag the tile out onto one of the menu strips on the start menu. We will drag ours onto the Extras Library strip. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. When you’ve dragged the tile over any of the green plus signs, the red “Move” label will turn to a blue “Move to” label. Now you can drop the tile into position. Save your changes and then close Media Center Studio. When you open Media Center, you should see your Zune tile on the start menu. When you select the Zune tile in WMC, Media Center will be minimized and Zune player will be launched. Now you can enjoy your media through the Zune player. When you close Zune player with the previously assigned keystroke or by clicking the “X” at the top right, Windows Media Center will be re-opened. Conclusion We found the Zune player worked with two different Media Center remotes that we tested. It was at times a little tricky to tell where you were when navigating through the Zune software with a remote, but it did work. In addition to managing your music, the Zune player is a nice way to add podcasts to your Media Center setup. We should also mention that you don’t need to actually own a Zune to install and use the Zune player software. Media Center Studio works on both Vista and Windows 7. 
We covered Media Center Studio a bit more in depth in a previous post on customizing the Windows Media Center start menu. Are you new to Zune player? Familiarize yourself a bit more by checking out some of our earlier posts like how to update your Zune player, and experiencing your music a whole new way with Zune for PC. Downloads Zune Desktop Player download Media Center Studio download

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices.  For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to a Sales Representative: Can view all the Opportunities where they are a member of the Opportunity Team Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains: How to discover the data security implemented in Fusion Applications. How to customize data security. Illustrative example. a.) How to discover seeded data security definitions The Security Reference Manuals explain the Function and Data Security implemented on each job role.  Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the “Sales Representative” Job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity. Business Object Policy Description Policy Store Implementation Opportunity A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team Role: Opportunity Territory Resource Duty Privilege: View Opportunity (Data) Resource: Opportunity A Sales Representative can view opportunity where they are an opportunity sales team member with view, edit, or full access Role: Opportunity Sales Representative Duty Privilege: View Opportunity (Data) Resource: Opportunity Description of Columns Column Name Description Policy Description Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant Policy Store Implementation Provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role. b.) How to customize data security Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity. Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role. Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles. Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies if an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity. Solution: Modify the (2) data security policy in the above example as follows: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive. The seeded SQL where clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified and the condition that checks for the confidential flag must be added. 
Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition. End Date the seeded grant. c.) Illustrative Example (Implementing Requirement 2) A data security policy contains the following components: Role Object Instance Set Action Of the above four components, the Role and Instance Set are the only components that are customizable. Object and Actions for that object are seed data and cannot be modified. To customize a seeded policy, “A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team”, Find the seeded policy Identify the Role, Object, Instance Set and Action components of the policy Create a new custom instance set based on the seeded instance set. End Date the seeded policies Create a new data security policy with custom instance set c-1: Find the seeded policy Step 1: 1. Find the Role 2. Open 3. Find Policies Step 2: Click on the Data Security Tab Sort by “Resource Name” Find all the policies with the “Condition” as “where they are a territory resource in the opportunity territory team” In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on Opportunity object. Step 3: Now that we know the policy details, we need to create new instance set with the custom condition. All instance sets are linked to the object. Find the object using global search option. Open it and click on “condition” tab Sort by Display name Find the Instance set Edit the instance set and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below. Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeat the navigation in step Edit each of the 5 policies and end date them 3. Create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. 4. In the “Rules” tab, please pick the new instance set that was created in Step 3.

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now. Voila! Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. The business process owner is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • How do you use blog content?

    - by fatherjack
    Do you write a blog, have you ever thought about it? I think people fall into one of a few categories when it comes to blogs, especially blogs with technical content. Writing articles furiously - daily, twice daily and reading dozens of others. Writing the odd piece of content and read plenty of others' output. Started a blog once and its fizzled out but reading lots. Thought about starting a blog someday but never got around to it, hopping into the occasional blog when a link or a Tweet takes them there. Never thought about writing one but often catching content from them when Google (or other preferred search engine) finds content related to their search. Now I am not saying that either of these is right or wrong, nor am I saying that anyone should feel any compulsion to be in any particular category. What I would say is that you as a blog reader have the power to move blog writers from one category to another. How, you might ask? How do I have any power over a blog writer? It is very simple - feedback. If you give feedback then the blog writer knows that they are reaching an audience, if there is no response then they we are simply writing down our thoughts for what could amount to nothing more than a feeble amount of exercise and a few more key stokes towards the onset of RSI. Most blogs have a mechanism to alert the writer when there are comments, and personally speaking, if an email is received saying there has been a response to a blog article then there is a rush of enthusiasm, a moment of excitement that someone is actually reading and considering the text that was submitted and made available for the whole world to read. I am relatively new to this blog game and could be in some extended honeymoon period as I have also recently been incorporated into the Simple Talk 'stable'. I can understand that once you get to the "Dizzy Heights of Ozar" (www.brentozar.com) then getting comments and feedback might not be such a pleasure and may even be rather more of a chore but that, I guess, is the price of fame. For us mere mortals starting out blogging, getting feedback (or even at the moment for me, simply the hope of getting feedback) is what keeps it going. The hope that you will pick a topic that hasn't been done recently by Brad McGehee, Grant Fritchey,  Paul Randall, Thomas LaRock or any one of the dozen of rock star bloggers listed here or others from SQLServerPedia and so on, and then do it well enough to be found, reviewed, or <shudder> (re)tweeted to bring more visitors is what we are striving for, along with the fact that the content we might produce is something that will be of benefit to others. There is only so much point to typing content that no-one is reading and putting it on a blog. You may as well just write it in a diary. A technical blog is not like, say, a blog covering photography techniques where the way to frame and take a picture stands true whether it was written last week, last year or last century - technical content goes sour, quite quickly. There isn't much call for articles about yesterdays technology unless its something that still applies to current versions too, so some content written no more than 2 years ago isn't worth having now. The combination of a piece of content that you know is going to not last long and the fact that no-one reads it is a strong force against writing anything else. Getting feedback counters that despair and gives a value to writing something new. 
I would say that any feedback is good but there are obviously comments that are just so negative or otherwise badly phrased that they would hasten the demise of a blog but, in general most feedback will encourage a writer. It may not be a comment that supports or agrees with the main theme of a post but if it generates discussion or opens up a previously unexplored viewpoint it is contributing to the blog and is therefore encouraging to the writer. Even if you only say "thank you" before you leave a blog, having taken a section of script to use for yourself or having been given a few links to some content that has widened your knowledge it will be so welcome to the blog owner. Isn't it also the decent thing to do, acknowledging that you have benefited from another's efforts?

    Read the article

  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: Acquire client credential (a single token, multiple tokens, username/password). Call Service. The service simply enumerates the claims it finds on the request and returns them to the client. I won’t show that part of the code, but rather focus on the step 1 and 2. Basic Authentication This is the most basic (pun inteneded) scenario. My library contains a class that can create the Basic Authentication header value. Simply set username and password and you are good to go. var client = new HttpClient { BaseAddress = _baseAddress }; client.DefaultRequestHeaders.Authorization = new BasicAuthenticationHeaderValue("alice", "alice"); var response = client.GetAsync("identity").Result; response.EnsureSuccessStatusCode();   SAML Authentication To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a “lightweight service” ;) But very useful if that’s what it takes to get the job done. private static string GetIdentityToken() {     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         _idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         KeyType = KeyTypes.Bearer,         AppliesTo = new EndpointAddress(Constants.Realm)     };     var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;     return token.TokenXml.OuterXml; } private static Identity CallService(string saml) {     var client = new HttpClient { BaseAddress = _baseAddress };     client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("SAML", saml);     var response = client.GetAsync("identity").Result;     response.EnsureSuccessStatusCode();     return response.Content.ReadAsAsync<Identity>().Result; }   SAML to SWT conversion using the Azure Access Control Service Another possible options for integrating SAML based identity providers is to use an intermediary service that allows converting the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to roundtrip the SAML once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library. private static string GetServiceTokenOAuth2(string samlToken) {     var client = new OAuth2Client(_acsOAuth2Endpoint);     return client.RequestAccessTokenAssertion(         samlToken,         SecurityTokenTypes.Saml2TokenProfile11,         Constants.Realm).AccessToken; }   SWT Authentication When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly without the conversion step. Thinktecture.IdentityServer e.g. supports the OAuth2 resource owner credential profile to issue SWT tokens. 
private static string GetIdentityToken() {     var client = new OAuth2Client(_oauth2Address);     var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);     return response.AccessToken; } private static Identity CallService(string swt) {     var client = new HttpClient { BaseAddress = _baseAddress };     client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", swt);     var response = client.GetAsync("identity").Result;     response.EnsureSuccessStatusCode();     return response.Content.ReadAsAsync<Identity>().Result; }   So you can see that it’s pretty straightforward to implement various authentication scenarios using Web API and my authentication library. Stay tuned for more client samples!
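The identity-echoing service itself isn't shown in the post. Purely as an illustration (this is not the actual Thinktecture sample code, and it assumes .NET 4.5's System.Security.Claims), a minimal controller that enumerates the claims on the request could look something like this:

using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Web.Http;

public class IdentityController : ApiController
{
    // GET /identity - echo back the claims found on the authenticated request
    public IEnumerable<object> Get()
    {
        var principal = User as ClaimsPrincipal;
        if (principal == null)
        {
            return Enumerable.Empty<object>();
        }

        // Project each claim to a simple type/value pair; the real sample
        // returns its own 'Identity' type, which isn't shown in the post.
        return principal.Claims
                        .Select(c => new { c.Type, c.Value })
                        .ToList();
    }
}

Returning a plain projection keeps the sketch self-contained; in the actual samples the clients deserialize the service's own Identity type instead.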

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use. Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include: Oracle CRM On Demand Disconnected Mobile Sales (DMS) – A disconnected native iPad solution, DMS further streamlines the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing including message response tracking, and improvements in administration and configuration such as more field management options for read-only fields, role management and enhanced logging. Oracle CRM On Demand Connected Mobile Sales – This add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign On to enhance security for mobile users. Oracle CRM On Demand Desktop – This application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup. Mobile Browser Support: The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 Tablets, and Google Android. Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on the social channels can identify a customer’s sphere of influence and the real value they bring to their organization, or the impact they can have on the opportunity. Servicing the customer’s need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagement, thus increasing revenue. Oracle CRM On Demand is leveraging the Social Enterprise through its integration with Oracle’s Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads entered on an SM lead form automatically entered into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process. 
Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics reports folders, allowing for read-only access to reports, and additional field and object coverage. Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, as well as Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to. Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry’s lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions to save you time and money. New features include: Business Process Administration: A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied. Expanded Integration: Oracle continues to improve Web Services with each release by adding more object coverage, enabling customers and partners to easily integrate with CRM On Demand. Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs, unfortunately.  I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40 hour a week job, and attempting to hang out with boyfriend/friends/family.  So you can see that list does not include this blog; sadly, that’s how it goes sometimes.  The bottom piece is more important than any of the top pieces.  I haven’t seen St. Louis in a while and I get to go back.  I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit.  Then you have to add in the whole toilet being broken fiasco this week.  Maintenance really thought it would be cool to turn off the ability to flush.  I mean who does that?  Then when we call the owner he comes by, turns it on, and we figure it was an accident, because, well, the next day no one came by to tell us there was a leak.  It was all kinds of strangeness and involved me running to other people’s toilets.  As Dan Usher would say, I was a sad panda for a few days.  So I guess I wanted to post a few thoughts here just because I can.  I do not like multiple content editor webparts embedded with html files in numerous pages doing the same thing.  I will tell you why I don’t like these particular webparts and the way they are being used.  First off, if you have a bunch of pages with script includes, it’s about time you just dump them into the masterpage.  Why bother finding all 20 pages and changing those pages when you can just use a single masterpage that already exists? The other thing that is bothering me these days is screen scraping.  Just don’t do it, because in 2010 you will find the UI is substantially slower.  I understand you are new and you have no idea what to do.  You are also using 2007, am I right?  So then you need to go to codeplex.com and type in a search for SPServices.  Download it, use it, love it and then have its babies (well maybe don’t go so far, this is not the GRID in Tron). If you have a ton of constants in your code, why did you not go in and create a webpart with a bunch of properties and/or link to a configuration list hidden in the browser?  This type of property and list could help you out in the long run.  The power users and administrators can now change the control without you having to compile it over and over again.  It’s good stuff.  Also, you can change the control without compiling it, especially in 2007 where you have to do a farm solution.  In 2010 you can do a sandbox solution I guess, but shouldn’t you make it as easy and supportable as possible for other users? In conclusion I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.  Now we will move on to the next topic…MVP Summit…So yeah I can’t really talk about particulars, but I can talk about my experience as a person.  Don’t build something up to be cooler than it is only to be dropped from your 10,000 foot perch.  My experience was great, but the content overall left something to be desired.  It’s ok, I got to meet a lot of people I would not have met if I had not gone.  Some of it was surreal, such as product group members showing up and talking to us.  It was pretty neat.  Plus I never had the chance to get to that mythical MS Office in Redmond.  Prior to Summit it was like Rainbow Brite’s unicorn taunting me on television when I was a kid.  
So I guess with all that said I give it a B.  It was awesome in some way, but lacking in other ways.  The cool part is that I got to go.  Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rockband and do 800 other things.  I hope the two of you who read this blog are well.  I’ll catch you all at another juncture.  Have a good weekend and varying holidays in between. Technorati Tags: SharePoint,MVP Summit,JQuery,Javascript

    Read the article

  • WSS 3.0 to SharePoint 2010: Tips for delaying the Visual Upgrade

    - by Kelly Jones
    My most recent project has been to migrate a bunch of sites from WSS 3.0 (SharePoint 2007) to SharePoint Server 2010.  The users are currently working with WSS 3.0 and Office 2003, so the new ribbon based UI in 2010 will be completely new.  My client wants to avoid the new SharePoint 2010 look and feel until they’ve had time to train their users, so we’ve been testing the upgrades by keeping them with the 2007 user interface. Permission to perform the Visual Upgrade One of the first things we noticed was the default permissions for who was allowed to switch the UI from 2007 to 2010.  By default, site collection administrators and site owners can do this.  Since we wanted to more tightly control the timing of the new UI, I added a few lines to the PowerShell script that we are using to perform the migration.  This script creates the web application, sets the User Policy, and then does a Mount-SPDatabase to attach the old 2007 content database to the 2010 farm.  I added the following steps after the Mount-SPDatabase step: #Remove the visual upgrade option for site owners # it remains for Site Collection administrators foreach ($sc in $WebApp.Sites){ foreach ($web in $sc.AllWebs){ #Visual Upgrade permissions for the site/subsite (web) $web.UIversionConfigurationEnabled = $false; $web.Update(); } } These script steps loop through each Site Collection in a particular web application ($WebApp) and then it loops through each subsite ($web) in the Site Collection ($sc) and disables the Site Owner’s permission to perform the Visual Upgrade. This is equivalent to going to the Site Collection administrator settings page –> Visual Upgrade and selecting “Hide Visual Upgrade”. Since only IT people have Site Collection administrator privileges, this will allow IT to control the timing of the new 2010 UI rollout. Newly created subsites Our next issue was brought to our attention by SharePoint Joel’s blog post last week (http://www.sharepointjoel.com/Lists/Posts/Post.aspx?ID=524 ).  In it, he lists some updates about the 2010 upgrade, and his fourth point was one that I hadn’t seen yet: 4. If a 2007 upgraded site has not been visually upgraded, the sites created underneath it will look like 2010 sites – While this is something I’ve been aware of, I think many don’t realize how this impacts common look and feel for master pages, and how it impacts good navigation and UI. As well depending on your patch level you may see hanging behavior in the list picker. The site and list creation Silverlight control in Internet Explorer is looking for resources that don’t exist in the galleries in the 2007 site, and hence it continues to spin and spin and eventually time out. The work around is to upgrade to SP1, or use Chrome or Firefox which won’t attempt to render the Silverlight control. When the root site collection is a 2007 site and has it’s set of galleries and the children are 2010 sites there is some strange behavior linked to the way that the galleries work and pull from the parent. Our production SharePoint 2010 Farm has SP1 installed, as well as the December 2011 Cumulative Update, so I think the “hanging behavior” he mentions won’t affect us. However, since we want to control the roll out of the UI, we are concerned that new subsites will have the 2010 look and feel, no matter what the parent site has. Ok, time to dust off my developer skills. I first looked into using feature stapling, but I couldn’t get that to work (although I’m pretty sure I had everything wired up correctly).  
Then I stumbled upon SharePoint 2010’s web events – a great way to handle this. Using Visual Studio 2010, I created a new SharePoint project and added a Web Event Receiver: In the Event Receiver class, I used the WebProvisioned method to check if the parent site is a 2007 site (UIVersion = 3), and if so, then set the newly created site to 2007:   /// <summary> /// A site was provisioned. /// </summary> public override void WebProvisioned(SPWebEventProperties properties) { base.WebProvisioned(properties);   try { SPWeb curweb = properties.Web;   if (curweb.ParentWeb != null) {   //check if the parent website has the 2007 look and feel if (curweb.ParentWeb.UIVersion == 3) { //since parent site has 2007 look and feel // we'll apply that look and feel to the current web curweb.UIVersion = 3; curweb.Update(); } } } catch (Exception) { //TODO: Add logging for errors } }   This event is part of a Feature that is scoped to the Site Level (Site Collection).  I added a couple of lines to my migration PowerShell script to activate the Feature for any site collections that we migrate. Plan Going Forward The plan going forward is to perform the visual upgrade after the users for a particular site collection have gone through 2010 training. If we need to do several site collections at once, we’ll use a PowerShell script to loop through each site collection to update the sites to 2010.  If it’s just one or two, we’ll be using the “Update All Sites” button on the Visual Upgrade page for Site Collection Administrators. The custom code for newly created sites won’t need to be changed, since it relies on the UI version of the parent site.  If the parent is 2010, then the new site will look 2010.
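As described above, the plan is to flip the UI with PowerShell or the "Update All Sites" button. Purely as a hedged illustration of the same idea in code (this is not part of the actual migration scripts, and the web application URL is a placeholder), an equivalent loop with the SharePoint 2010 server object model might look like this:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class VisualUpgradeRunner
{
    static void Main()
    {
        // Placeholder URL - point this at the migrated web application
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet.example.com"));

        foreach (SPSite site in webApp.Sites)
        {
            try
            {
                foreach (SPWeb web in site.AllWebs)
                {
                    using (web)
                    {
                        web.UIVersion = 4;                         // switch to the 2010 look and feel
                        web.UIVersionConfigurationEnabled = false; // keep the Visual Upgrade option hidden
                        web.Update();
                    }
                }
            }
            finally
            {
                site.Dispose(); // SPSite objects enumerated from webApp.Sites must be disposed
            }
        }
    }
}

These are the same two SPWeb properties that the PowerShell script and the event receiver above already manipulate, so this is just the other side of the same coin.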

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:
· Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc.
· Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp)
· Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work.
· If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
· Update various statistics about the packet.
In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios. 
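Berkeley DB's own APIs are not shown in this post. Just to make the per-packet workflow above concrete, here is a deliberately simplified sketch in C#, with plain in-memory dictionaries standing in for the transactional, replicated store that a product like Berkeley DB would actually provide; all names are illustrative:

using System.Collections.Concurrent;
using System.Collections.Generic;

class Packet
{
    public string Destination;   // where the packet is headed
    public string OwnerId;       // used to look up the SLA
    public byte[] Payload;
}

class PacketProcessor
{
    // In the real system these lookups would hit a transactional, replicated store.
    readonly ConcurrentDictionary<string, string> routes = new ConcurrentDictionary<string, string>();   // destination -> next hop
    readonly ConcurrentDictionary<string, int> slaPriority = new ConcurrentDictionary<string, int>();    // owner -> priority
    readonly ConcurrentDictionary<string, long> stats = new ConcurrentDictionary<string, long>();        // counters
    readonly Queue<Packet> lowPriorityQueue = new Queue<Packet>();                                        // held-back packets

    public void Process(Packet packet)
    {
        // The steps below would normally run inside a single transaction,
        // so the route cannot change between the lookup and the send.
        string nextHop;
        if (!routes.TryGetValue(packet.Destination, out nextHop)) nextHop = "default-gateway";

        int priority;
        slaPriority.TryGetValue(packet.OwnerId, out priority);

        if (priority > 0)
        {
            Forward(packet, nextHop);   // high priority: send immediately
        }
        else
        {
            lock (lowPriorityQueue) lowPriorityQueue.Enqueue(packet);   // hold back, forward later
        }

        stats.AddOrUpdate("packets-processed", 1, (key, count) => count + 1);
    }

    void Forward(Packet packet, string nextHop)
    {
        // Push the packet onto the wire towards nextHop (omitted).
    }
}

None of this gives you the durability, replication or automatic recovery described below; that is exactly the part an embedded database like Berkeley DB is meant to supply.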
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 108K| 939 || 1 | SORT ORDER BY | | 108K| 939 || 2 | NESTED LOOPS OUTER | | 108K| 938 ||* 3 | HASH JOIN RIGHT OUTER | | 103K| 762 || 4 | VIEW | ALL_EXTERNAL_LOCATIONS | 2058 | 3 ||* 20 | HASH JOIN RIGHT OUTER | | 73472 | 759 || 21 | VIEW | ALL_EXTERNAL_TABLES | 2097 | 3 ||* 34 | HASH JOIN RIGHT OUTER | | 39920 | 755 || 35 | VIEW | ALL_MVIEWS | 51 | 7 || 58 | NESTED LOOPS OUTER | | 39104 | 748 || 59 | VIEW | ALL_TABLES | 6704 | 668 || 89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2025 | 5 || 106 | VIEW | ALL_PART_TABLES | 277 | 11 |------------------------------------------------------------------------------- And the same query on 9i: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 16P| 55G|| 1 | SORT ORDER BY | | 16P| 55G|| 2 | NESTED LOOPS OUTER | | 16P| 862M|| 3 | NESTED LOOPS OUTER | | 5251G| 992K|| 4 | NESTED LOOPS OUTER | | 4243M| 2578 || 5 | NESTED LOOPS OUTER | | 2669K| 1440 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 ||* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2043 | ||* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES | 1777K| ||* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K| ||* 96 | VIEW | ALL_PART_TABLES | 852K| |------------------------------------------------------------------------------- Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. 
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 7587K| 3704 || 1 | SORT ORDER BY | | 7587K| 3704 ||* 2 | HASH JOIN OUTER | | 7587K| 822 ||* 3 | HASH JOIN OUTER | | 5262K| 616 ||* 4 | HASH JOIN OUTER | | 2980K| 465 ||* 5 | HASH JOIN OUTER | | 710K| 432 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 || 50 | VIEW | ALL_PART_TABLES | 852K| 104 || 78 | VIEW | ALL_TAB_COMMENTS | 2043 | 14 || 93 | VIEW | ALL_EXTERNAL_LOCATIONS | 1744K| 31 || 106 | VIEW | ALL_EXTERNAL_TABLES | 1777K| 28 |------------------------------------------------------------------------------- That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need. 
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
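Not the actual Schema Compare code, but a rough sketch of the client-side join described above (names and row shapes are illustrative): hash the subsidiary rows once on (owner, name), then probe that table while streaming the base ALL_TABLES rows, so nothing is re-hashed per join.

using System.Collections.Generic;

class TableRow     { public string Owner; public string Name; /* ...other ALL_TABLES columns... */ }
class PartitionRow { public string Owner; public string Name; public string PartitioningType; }

static class ClientSideJoin
{
    // Left-join base rows to subsidiary rows on (owner, name). The subsidiary
    // table is hashed once, instead of being re-hashed for every join in the plan.
    public static IEnumerable<KeyValuePair<TableRow, PartitionRow>> LeftJoin(
        IEnumerable<TableRow> baseRows,
        IEnumerable<PartitionRow> partitionRows)
    {
        var lookup = new Dictionary<string, PartitionRow>();
        foreach (PartitionRow p in partitionRows)
        {
            lookup[p.Owner + "." + p.Name] = p;   // composite hash key
        }

        foreach (TableRow t in baseRows)
        {
            PartitionRow match;
            lookup.TryGetValue(t.Owner + "." + t.Name, out match);   // null when there is no partition info
            yield return new KeyValuePair<TableRow, PartitionRow>(t, match);
        }
    }
}

In the real implementation the same hashed key information is reused across all the subsidiary views (ALL_PART_TABLES, ALL_EXTERNAL_TABLES, ALL_MVIEWS and so on), and the row data is kept in compact structures rather than full objects, which is where the roughly 150 bytes per row figure comes from.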

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >