Search Results

Search found 3679 results on 148 pages for 'definition'.


  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" that means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc.). There are a number of WebLogic web apps, and one thing we need to decide is how many WebLogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build.

    So the kinds of decisions we still need to make are:

    - How to map components to physical servers (and WebLogic containers).
    - Identify all communication paths, and ensure each is either resilient or has a resilient "upstream" comms path whose failover covers all single points of failure "downstream".
    - Decide where to terminate SSL (on load balancers, or on Apache servers, for instance).

    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layouts, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?


  • HP Laserjet 1320 drivers for Windows 7

    - by RedGrittyBrick
    Background: I replaced some XP computers with Win 7 computers. We have an HP LaserJet 1320dn printer attached to the LAN. The XP computers could print to it from Word in duplex mode without any issues.

    Problem: The Win 7 computers downloaded a full set of drivers, including an "HP LaserJet 1320 PCL5" driver. However, using this driver, Word's page footers cause extra pages to be printed with just a letter or so at a time from the footer. Some other apps have similar issues. I also have an ancient LaserJet 1200 attached to the LAN via a JetDirect box, and that works just fine, so I don't think the problem is with Word (the computers came with Word 2010 Starter).

    What I tried: I wrangled the Control Panel new-printer dialogue into using an "HP LaserJet 2100 PS" driver for the HP LaserJet 1320dn. Now my Word documents print as they should. However, I don't have a duplex option on the print dialogue, and I'd really like to be able to use duplex printing.

    Question: Windows used to have a universal PostScript driver that read a PostScript Printer Definition (PPD) file to make the printer's features available (tray choice, duplex, etc.). I can't see any way to do this in Windows 7. Is there a way? Is there any other way I can get Windows 7 Home Premium 64-bit to print properly to an HP LaserJet 1320dn and have access to all its major features?

    Addendum: My page in Word looks like this:

        +--------------------+
        | aa    bb    cc     |
        |                    |
        | lorem ipsum dolor  |
        |        ...         |
        |                    |
        | pp    qq    rr     |
        +--------------------+

    The headers and footers were inserted using Insert - Header - Blank (3 column). When printed, I get:

    - 1 page with aa, bb, cc, pp in the correct positions (no other text)
    - 1 blank page
    - 1 page with qq in the correct position (no other text)
    - 1 blank page
    - 1 page with rr in the correct position (no other text)
    - 1 blank page
    - 1 page with lorem ipsum dolor in the correct position (no headers or footers)

    If I use a LaserJet 2100 driver I get 1 correct page.


  • No sound out of headphone port on laptop

    - by Thanatos
    I cannot get sound out of the headphone port on a laptop. Headphones are plugged in, and sound comes out of the internal speakers. Windows behaves normally (sound switches to headphones when headphones are inserted). It did work in Linux at one point, but something changed, and we're just not sure what. Rebooting doesn't fix it. This appears to occur whether or not PulseAudio is running.

    Things I've tried:

    - Rebooting. No effect.
    - Booting into Windows. It works properly, so probably not a hardware issue.
    - All of alsamixer. My only controls are these:
      - "Master": volume bar, mutable; unmuted. Controls volume.
      - "PCM": volume bar only. 100%.
      - "S/PDIF": mutable only, currently muted; has no effect.
      - "S/PDIF Default PCM": mutable only, currently unmuted; has no effect.
    - Killing PulseAudio. No effect. (It also won't stay dead! Something appears to be restarting it, and I can't tell what, but it is annoying as fuck.)
    - alsactl init 0. No effect.
    - sudo rm -f /var/lib/alsa/asound.state. No effect.

    General system info: Ubuntu 10.04 LTS, Toshiba Satellite T135D-S1324. lspci says I have:

        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller

    Some edits: Yes, the headphones are in all the way. This works in Windows: you plug headphones in, the internal speakers stop making noise, and noise comes out the headphones. Windows says I only have two sound cards: the HDMI port (which I don't care about) and the "sound card", which it claims is a "Conexant Pebble High Definition SmartAudio". In Windows, both the internal speakers and the headphone jack show up as one sound card, which in my experience is typical. (This is a laptop.)
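    One knob that might be worth a try (a guess based on similar jack-sensing problems, not a confirmed fix for this model): the snd-hda-intel driver accepts a model= hint that changes how the jacks are wired up. A hypothetical test:

        # /etc/modprobe.d/alsa-base.conf, then reload snd-hda-intel or reboot
        options snd-hda-intel model=auto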


  • Cheap Solution for Routing a Toll Free Number to a Standard POTS Number

    - by VxJasonxV
    I do some technical work for an Internet radio show/podcast and need to fix something that has been broken for a while. The hosts have a Skype-In number to take listener calls, and for convenience's sake, I bought and paid for a toll-free number for a period of time. I used to use Asterlink for routing calls, but they folded and sent my number to OneBox, which is ridiculously expensive by comparison.

    I'm looking for a cheap solution for this one simple task: forward toll-free calls to a Skype-In number. The definition of cheap is as cheap as or cheaper than Asterlink was. I paid something like $2 a month, plus the termination/call rate, which was a fraction of a cent for termination, and only whole cents after some serious time on the call. A $20 preload lasted me months at a time. I don't want to be upsold, either; I want a simple web-based management screen (CDR/stats are fun!), and obviously, it needs to be reliable. What vendors out there are you a fan of that solve this need?


  • Custom Dreamweaver DocTypes

    - by Hugh Guiney
    Dreamweaver CS5 with Dreamweaver HTML5 Pack 1.2.7, Windows 7 x64.

    When I go to create a new document and select the HTML5 DocType, Dreamweaver gives me the legacy encoding/character-set declaration:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    I want to replace it with the new, abbreviated style:

        <meta charset="utf-8">

    The relevant file seems to be %ProgramFiles(x86)%\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\NewDocuments\Default.html, which has a blank charset that is then apparently filled in with the appropriate character set dynamically:

        <meta http-equiv="Content-Type" content="text/html; charset=">

    I changed it, but then new documents show up like this:

        <meta charset="">
        <title>Untitled Document</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    It seems Dreamweaver added the legacy declaration back in after my modification, and as far as I can tell there's no way to specify that the charset definition should go in between the quotes, either. Additionally, any modifications to Default.html apply to every DocType, whereas I only want this change to apply to the HTML5 DocType.

    Is there anything in the configuration files that would allow me to make any of these customizations? If not, is there an extension that does it?


  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora packages (but did not upgrade the release). I shut down my computer, and ever since the next time I turned it on I have been hearing a clicking noise from the speakers all the time. Even when I plug my headphones in, I hear the noise through the headphones. I searched the internet and found the following shell commands:

        su -c 'echo "options snd_hda_intel power_save=0" >> /etc/modprobe.d/snd_hda_intel.conf'
        su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save'

    I tried them, but they didn't work. Here is the part of the lspci output related to my sound card:

        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)

    I should add that my sound card is working and I can play audio files; I can hear the sound and the noise simultaneously. But everything is OK in Windows XP, which is also installed on this laptop. Could it be related to the sound-card driver? If so, how can I revert it to the previous version?
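    Since the noise appeared immediately after an update, one hedged way to test the driver-regression theory is to boot the previous kernel from the GRUB menu, or to roll back the offending update transaction (this assumes yum's history feature, which Fedora 17 has; the transaction ID is whatever yum history list shows for that update):

        yum history list
        su -c 'yum history undo <transaction-id>'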


  • Having two IP Routes/Gateways of last Resort on an HP Switch

    - by SteadH
    We have an HP layer 3 switch that is doing IP routing between VLANs. The general setup is that the switch has an IP address on each VLAN and IP routing is enabled. On our server VLAN, we have a firewall that has a connection to the outside world. To set an IP route on the HP switch, we use the IOS-style command:

        ip route 0.0.0.0 0.0.0.0 192.168.2.1

    where 192.168.2.1 is the address of our firewall, and the zeros essentially mean "route all traffic that the switch doesn't otherwise know what to do with out through the firewall as a gateway".

    We're in the middle of an ISP and firewall change. I set up the new firewall and ran ip route 0.0.0.0 0.0.0.0 192.168.2.254 (the address of the new firewall). Things started working nicely. When I reviewed the configuration of the switch, though, I noticed that it had not replaced the previous ip route command, but had just added another route.

    Now, I know how to remove the old firewall route (no ip route 0.0.0.0 0.0.0.0 192.168.2.1), but what is the effect of having these two 0.0.0.0 routes? Is it switch implosion? Will a server just respond back over the route the request came in on? I've read elsewhere that having two default gateways is an impossibility by definition, but I'm curious about this situation that our switch allowed. Thanks!
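    For what it's worth, a minimal cleanup sketch (assuming the same IOS-style CLI the poster is using; verify the syntax against the switch's own documentation):

        no ip route 0.0.0.0 0.0.0.0 192.168.2.1
        show ip route

    After the old entry is removed, show ip route should list 192.168.2.254 as the only default gateway.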


  • Set primary group of file or directory on Samba share from Windows

    - by Hubert Kario
    Short version: I have this situation on a Samba share:

        $ ls -lha
        total 12K
        drwxr-xr-x   3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt  19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x   2 hka Domain Users 4.0K Jan 11 17:07 dir A
        -rw-r--r--   1 hka Domain Users    0 Jan 11 17:07 file A

    How can I change this to the following, using only a Windows SMB/CIFS client (using 3rd-party applications is OK)?

        $ ls -lha
        total 12K
        drwxr-xr-x   3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt  19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x   2 hka ntpoweruser  4.0K Jan 11 17:07 dir A
        -rw-r--r--   1 hka ntpoweruser     0 Jan 11 17:07 file A

    Rationale and background info: I'm using POSIX ACLs on Samba shares. Together with acl group control for Samba, this allows me to delegate the management of permissions to different users based on group membership. The thing is, when I create a new file on a Samba share, I'm unable to set its primary group (the one that grants permission to change its permissions). It is set to my primary group (Domain Users) or to the group set using the force group option in the smb.conf share definition. Removing all groups in Windows except the one I want to become the new primary group doesn't work. I can change it using chgrp group folder/ as a regular user through the shell, but that's suboptimal (not all users are *nix users).

    Trying to set the new owner to the group from the Windows file-permission window makes Samba return permission denied, with the following log entry:

        [2012/01/05 21:13:03.349734, 3] smbd/nttrans.c:1899(call_nt_transact_set_security_desc)
          call_nt_transact_set_security_desc: file = projects/project A/New folder, sent 0x1
        [2012/01/05 21:13:03.349774, 3] smbd/posix_acls.c:1208(unpack_nt_owners)
          unpack_nt_owners: unable to validate owner sid for S-1-5-21-4526631811-884521863-452487935-11025
        [2012/01/05 21:13:03.349804, 3] smbd/error.c:80(error_packet_set)
          error packet at smbd/nttrans.c(1909) cmd=160 (SMBnttrans) NT_STATUS_INVALID_OWNER

    The SID is correct and belongs to the group I specified in the GUI.
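    Not the Windows-side answer the poster is after, but as a server-side workaround, a sketch of the classic setgid approach (new files created inside the directory then inherit its group automatically; assumes shell access on the Samba server):

        # run once per delegated directory on the server
        chgrp ntpoweruser "dir A"
        chmod g+s "dir A"   # setgid bit: new files inherit the directory's group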


  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this happens in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list the loaded format data files somewhere?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, the Update-FormatData command had already been called when I ran the install script. In the install script, I force-import the module, which loads the module again, and with it the Update-FormatData call, which then fails.
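    One defensive sketch, since the failure surfaces as a RuntimeException on re-import (treating "already present" as ignorable is my assumption, not a documented pattern):

        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml")
        }
        catch {
            # Most likely already loaded in this session (e.g. Import-Module -Force).
            Write-Verbose "Format data not reloaded: $_"
        }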


  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target, it returns an empty set; when run by hand on the agent, it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was for Windows and was supposed to be fixed in Puppet version 3. What am I doing wrong?

    ENC definition:

        ---
        classes:
          facttest:

    Shell script:

        #!/bin/bash
        echo "test_fact1=$(hostname)"

    Permissions:

        master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
        agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh

    Agent message:

        Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set

    Version information:

        Puppet master: 3.5.1 (Debian)
        Facter master: 2.0.1
        Puppet agent: 3.6.1 (OpenSUSE)
        Facter agent: 2.0.1
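    One check worth sketching: executable external facts must carry the execute bit on the node where they run; a non-executable file is instead parsed as structured text, which would match the "parsed but returned an empty data set" message. A hypothetical one-off test on the agent:

        chmod 0755 /var/lib/puppet/facts.d/testfact.sh
        facter test_fact1    # should now print the hostname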


  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today, I got a lot of calls from a single IP:

    - The IP tried to access login pages with POST and GET methods.
    - The IP tried to use nginx as a proxy (GET http:/...).
    - The IP searched the images, js, and css folders.
    - The IP tried to inject -d url_allow_fopen=1 and something similar.

    Most of the calls ended with 404, so I updated my nginx config like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    I got approximately 50 requests per second from that IP, hence the settings above. Will they avoid too many connections per second now?

    I have updated my fail2ban jail.local to support nginx, but I am confused by the nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the settings above before restarting fail2ban and got thousands of IPs matched; when I removed php, nothing matched. Do I need \.php| in nginx-noscript.conf?

    Does using mod_security and fail2ban together bring any problems? While searching today I learned that mod_security is available for nginx too, so I am planning to use it as well.
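    For what it's worth, a sketch of the noscript filter with the PHP extension dropped (this simply follows from serving PHP legitimately; the remaining extensions are the poster's own list):

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =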


  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed BackTrack 5 R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output:

        *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth0
           version: 05
           serial: 04:7d:7b:b7:46:f8
           size: 100MB/s
           capacity: 100MB/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
           resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
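    Given that no wireless device appears in either listing, a couple of hedged first checks (the adapter may be USB-attached rather than PCI, or soft-blocked):

        lsusb
        rfkill list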


  • Puppet: variable overriding best practices

    - by rvs
    I'm wondering what the best practices are for overriding variables in Puppet. I'd like to have all my nodes (which are located in different places; some are QA, some live) use the same classes. Now I have something like this:

        class base_linux {
            <...> # something which requires $env and $relayhost variables
        }

        class linux_live {
            $relayhost = "1.1.1.1"
            $env = "prod"
            include base_linux
        }

        class linux_qa {
            $relayhost = "2.2.2.2"  # override relayhost
            include base_linux
        }

        class linux_trunk {
            $env = "trunk"  # override env
            include linux_qa
        }

        node "trunk01" {
            include linux_trunk
            include <something else>
        }

        node "trunk02" {
            $relayhost = "3.3.3.3"  # override relayhost
            include linux_trunk
            include <something else>
        }

        node "prod01" {
            include linux_prod
        }

    So, I'd like to have some defaults in base_linux which can be overridden by linux_qa/linux_live, which can in turn be overridden in higher-level classes and node definitions. Of course, it does not work (and is not expected to work). It does not work with class inheritance either. I could probably achieve it using global-scope variables, but that does not seem like a good idea to me. What is the best way to solve this problem?
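    As an aside, a parameterized-class sketch of the same layering (my illustration, not the poster's code; the parameter names are theirs, and the pattern assumes Puppet's parameterized classes, available since 2.6):

        # defaults live on the class itself instead of in wrapper classes
        class base_linux($env = 'prod', $relayhost = '1.1.1.1') {
            # resources that use $env and $relayhost go here
        }

        node 'trunk02' {
            class { 'base_linux':
                env       => 'trunk',
                relayhost => '3.3.3.3',
            }
        }

    Each level that needs different values declares the class with explicit parameters rather than relying on dynamically scoped variables.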


  • What is the risk of introducing non-standard-image machines to a corporate environment

    - by Troy Hunt
    I'm after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I'm looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64-bit Win 7).

    I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image?

    I'm trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios beyond the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I'm just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.


  • Sound doesn't work anymore after replacing RAM

    - by thejh
    Hello, today I replaced one old RAM module with two newer, bigger ones, but now the sound doesn't seem to work anymore. I have already run alsaconf and it didn't help. Output of lspci for the audio device:

        00:07.0 Audio device: nVidia Corporation MCP67 High Definition Audio (rev a1)
            Subsystem: Giga-byte Technology Device a002
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0 (500ns min, 1250ns max)
            Interrupt: pin A routed to IRQ 21
            Region 0: Memory at f5100000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: [44] Power Management version 2
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
            Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
            Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel

    The audio device is onboard and has six configurable outputs; two or so are also capable of being an input (if I remember correctly), but I don't know how to control that under Linux. Does somebody know how/whether replacing the RAM could be related to my problem, and/or how to fix it?


  • Timeperiod Setting

    - by Alvin G. Matunog
    I am running Nagios 3.2.3 on CentOS 5.7 32-bit and I have a bit of a problem scheduling time periods. Please help me.

    Objective: to stop monitoring during a server restart (the server is restarted automatically because of updates and system backups). The restart is scheduled for the first Sunday of every month. My monitoring runs 24x7, but during restarts I want to stop the monitoring and resume it after 30 minutes. So every 1st Sunday I want my schedule to be 00:00-11:30,12:00-24:00; that is, it stops at 11:30 and resumes at 12:00 noon.

    If I set this on every Sunday, there is no problem. But if I set this time on the 1st Sunday only, it stops at 11:30 but resumes on the next day (Monday), not at 12:00 noon on Sunday. I don't know what I am missing. On regular weekdays there is no problem, but on offset weekdays (1st Sunday) it doesn't work the way it should. Here is my definition:

        define timeperiod{
            name            1st_sunday
            timeperiod_name 1st_sunday
            alias           No monitoring every 1st Sunday
            sunday 1        00:00-11:30,12:00-24:00
        }

        define timeperiod{
            timeperiod_name irregular
            alias           regular checking
            use             1st_sunday
            sunday          08:30-22:00
            monday          08:30-22:00
            tuesday         08:30-22:00
            wednesday       08:30-22:00
            thursday        08:30-19:10,19:20-22:00
            friday          08:30-22:00
            saturday        08:30-22:00
        }

    Can anyone help me? Please? Thank you.


  • mod_wsgi -apache configuration file

    - by Kevin
    Guys, sorry, I'm a newbie to this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the virtual host like so:

        # 'Main' server configuration
        #
        # The directives in this section set up the values used by the 'main'
        # server, which responds to any requests that aren't handled by a
        # <VirtualHost> definition. These values also provide defaults for
        # any <VirtualHost> containers you may define later in the file.
        #
        # All of these directives may appear inside <VirtualHost> containers,
        # in which case these default settings will be overridden for the
        # virtual host being defined.

        ServerName wsgihost
        DocumentRoot "/Library/WebServer/Documents"
        <Directory "/Library/WebServer/Documents">
            Order allow,deny
            Allow from all
        </Directory>

        WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
        <Directory "/Users/KL/modwsgi/env">
            <Files myapp.wsgi>
                Order allow,deny
                Allow from all
            </Files>
        </Directory>

    I also added the following to my hosts file:

        127.0.1.1    wsgihost

    but I can't seem to connect. Am I doing something terribly wrong?
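    For comparison, a minimal name-based virtual host sketch (the <VirtualHost> wrapper is my assumption about what the tutorial intends; the paths and hostname are the poster's own):

        <VirtualHost *:80>
            ServerName wsgihost
            DocumentRoot "/Library/WebServer/Documents"
            WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
            <Directory "/Users/KL/modwsgi/env">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Also worth double-checking: the hosts entry uses 127.0.1.1 rather than 127.0.0.1, which may or may not be intended.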


  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses scheduled tasks, to run a single PHP script that processes all scheduled tasks for our system. The task is scheduled using schtasks during the install process. This has always worked under both W2003 and W2008.

    A week ago a customer reported that scheduled tasks were not running. He is running on Windows 2008. We checked over and over, and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task.

    That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task, and XML definitions don't work in Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter; all the paths are absolute in all the scripts.

    Can anyone suggest a reasonable solution for this?
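    For reference, a sketch of how the "start in" value appears in a Task Scheduler XML definition (Windows 2008 and later; the paths here are invented for illustration):

        <Actions Context="Author">
          <Exec>
            <Command>C:\Windows\System32\cscript.exe</Command>
            <Arguments>D:\app\tasks\runtasks.vbs</Arguments>
            <WorkingDirectory>D:\app\tasks</WorkingDirectory>
          </Exec>
        </Actions>

    Such a definition is imported with schtasks /create /xml task.xml /tn TaskName.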


  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications, especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require administrator access (e.g. Seagate BlackArmor) or simply fail without it; since these programs send raw commands to a device, this is to be expected.

    I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without any prompts. Since they are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems.

    What I'd like to be able to do:

    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.

    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (But everyone calls it "sudo" anyway.)
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again".


  • One Apache server, multiple clients - best practices for config files?

    - by OttaSean
    First-time user; please be gentle. :-) (And if you don't like my question, I'd be grateful for a comment as to why...)

    I am doing a contract at a government server shop that provides web services for multiple client groups in other areas of the government. My employer has asked me to look into how other shops in similar situations handle configuration files, and whether there are any best practices on the subject. I'm pretty sure there are lots of installations out there running multiple VirtualHosts out of one Apache installation, but surprisingly I couldn't find anything online about how people handle config file layout, so I was hoping some of you wise folks on Server Fault might have some thoughts or pointers for me.

    The current setup - which seems logical to me - is that each client site has its own directory off the root, so:

        /client/tps-reports/
        /client/silly-walks/
        /client/ministry-of-magic/

    and so on, and each of those directories has an /htdocs, /cgi-bin, and /conf (among others). The main /etc/apache/httpd.conf only contains Include statements (and lots of comments), the last of which is:

        Include /etc/apache/vhosts/*.conf

    The vhosts directory contains symlinks:

        tpsrept.conf -> /client/tps-reports/conf/tpsrept.conf
        sillywk.conf -> /client/silly-walks/conf/sillywk.conf
        mom.conf     -> /client/ministry-of-magic/mom.conf

    Each of those .conf files contains the actual NameVirtualHost definition and a gigantic <VirtualHost 192.168.12.34> stanza, which contains all the stuff about the specific site. The idea is that clients have access to what's in their own /client/xx directory, so they can change stuff in the section of the config that is relevant to them.

    As I mentioned above, that seems fairly logical to me, but I'm wondering if any of you wise folks are aware of potential gotchas with this sort of layout, or have any other thoughts on why it is or isn't a good idea. In particular, how do other places do it? Is there a "best practice" for this sort of thing? Many thanks in advance for your time and any thoughts you all might have.
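    To make the layout concrete, a sketch of what one client's linked file might contain (the ServerName is invented for illustration; the IP and paths come from the description above):

        NameVirtualHost 192.168.12.34
        <VirtualHost 192.168.12.34>
            ServerName tps-reports.example.gov
            DocumentRoot /client/tps-reports/htdocs
            ScriptAlias /cgi-bin/ /client/tps-reports/cgi-bin/
            # ...site-specific directives maintained by the client group...
        </VirtualHost>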


  • how to remove location block from $uri in nginx configuration?

    - by Jason
    I have a rewrite in my nginx conf file that works properly, except that it seems to include the location block as part of the $uri variable. I only want the path after the location block. My current config code is:

        location /cargo {
            try_files $uri $uri/ /cargo/index.php?_REWRITE_COMMAND=$uri&args;
        }

    Using an example URL of http://localhost/cargo/testpage, the redirect works; however, the value of the _REWRITE_COMMAND parameter received by my PHP file is "/cargo/testpage". I need to strip off the location block and just have "testpage" as the $uri. I am pretty sure there is a regex syntax to split the $uri and assign it to a new variable using $1, $2, etc., but I can't find any example that does just a variable assignment using a regex that is not part of a rewrite statement. I've been looking and trying for hours and I just can't seem to get past this last step. I also know I could just strip this out in the application code, but the reason I want to fix it in the nginx conf is compatibility, as the application also runs on Apache. I should also say that I have figured out a really hacky way to do it, but it involves an "if" statement to check for file existence, and the documentation specifically says not to do it that way.

    UPDATE: ANSWERED BY theuni. The regex goes in the location block definition. One note of caution: the php handler location needs to be ABOVE this location, otherwise you will get a server error because it goes into an infinite redirect loop.

        location ~ ^/cargo/(.*) {
            try_files $1 /cargo/$1/ /cargo/index.php?_REWRITE_COMMAND=$1&args;
        }


  • System requirements for running windows 8 (basic office use) in virtualbox (ubuntu as host os)

    - by Tor Thommesen
    I want to run Windows 8 as a guest OS with VirtualBox on some ThinkPad (I haven't bought one yet) running Ubuntu 12.04. Apart from virtualizing Windows 8 (mostly just for use with the Office suite), my needs are very modest; I don't need much more than emacs and a browser. What I'd like to know is what kind of specs will be necessary to run Windows 8 well as a VM, using the Office apps. It would be a shame to waste money on overpowered hardware. Are there any official guidelines from Oracle or Microsoft on this? Would this Lenovo X220, for example, be sufficiently strong? The specs below were taken from this review.

    - Intel Core i5-2520M dual-core processor (2.5GHz, 3MB cache, 3.2GHz Turbo frequency)
    - Windows 7 Professional (64-bit)
    - 12.5-inch Premium HD (1366 x 768) LED-backlit display (IPS)
    - Intel Integrated HD Graphics
    - 4GB DDR3 (1333MHz)
    - 320GB Hitachi Travelstar hard drive (Z7K320)
    - Intel Centrino Advanced-N 6205 (Taylor Peak) 2x2 AGN wireless card
    - Intel 82579LM Gigabit Ethernet
    - 720p High Definition webcam
    - Fingerprint reader
    - 6-cell battery (63Wh) and optional slice battery (65Wh)
    - Dimensions: 12 (L) x 8.2 (W) x 0.5-1.5 (H) inches with 6-cell battery
    - Weight: 3.5 pounds with 6-cell battery; 4.875 pounds with 6-cell battery and optional external battery slice
    - Price as configured: $1,299.00 (starting at $979.00)


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and then delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charge for running the 16 instances for an hour was $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost effective.

    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed within a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours' processing on a 486, produces a high-quality ray-traced image. The following is an example of a basic PolyRay scene file.
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
            noise surface {
                ambient 0.2
                diffuse 0.7
                specular white, 0.5
                microfacet Reitz 10
                position_fn position_cylindrical
                position_scale 1
                lookup_fn lookup_sawtooth
                octaves 1
                turbulence 1
                color_map(
                    [0.0, 0.2, light_wood, light_wood]
                    [0.2, 0.3, light_wood, median_wood]
                    [0.3, 0.4, median_wood, light_wood]
                    [0.4, 0.7, light_wood, light_wood]
                    [0.7, 0.8, light_wood, median_wood]
                    [0.8, 0.9, median_wood, light_wood]
                    [0.9, 1.0, light_wood, dark_wood])
            }
        }

        define glass texture {
            surface {
                ambient 0
                diffuse 0
                specular 0.2
                reflection white, 0.1
                transmission white, 1, 1.5
            }
        }

        define shiny surface {
            ambient 0.1
            diffuse 0.6
            specular white, 0.6
            microfacet Phong 7
        }

        define steely_blue texture { shiny { color black } }

        define chrome texture {
            surface {
                color white
                ambient 0.0
                diffuse 0.2
                specular 0.4
                microfacet Phong 10
                reflection 0.8
            }
        }

        viewpoint {
            from <4.000, -1.000, 1.000>
            at <0.000, 0.000, 0.000>
            up <0, 1, 0>
            angle 60
            resolution 640, 480
            aspect 1.6
            image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space and looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct positions, a bitmap image can be used: the position of each pin is set based on the color of the pixel at the corresponding position in the image (a minimal sketch of this mapping appears at the end of this section). When the Windows Azure logo is used to set the Z coordinates of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, giving .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the positions of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders used for the 11,800 or so pins in the pin board.
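    Stepping back to the bitmap-to-pin mapping mentioned above, here is a minimal sketch of the idea (my illustration, not the author's console application; the class name, method, and travel distance are assumptions):

        // Maps a pixel's brightness (0 = black, 1 = white) to a pin's Z offset.
        // Assumes one pixel per pin and a hypothetical maximum travel of 1.0 scene units.
        using System.Drawing;

        static class PinDepth
        {
            const double MaxTravel = 1.0; // assumed maximum pin travel

            public static double PinZ(Bitmap depthImage, int x, int y)
            {
                Color c = depthImage.GetPixel(x, y);
                double brightness = (c.R + c.G + c.B) / (3.0 * 255.0);
                return brightness * MaxTravel;
            }
        }

    Each generated scene file would then place the pin's sphere and cylinder at the Z offset returned for its pixel.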
    Returning to the test frame: the 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames of the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their work. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles are shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure, the cost of the 256-hour job is the same irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    - On-premise:
      - Windows Kinect: used in combination with the Kinect Explorer to create a stream of depth images.
      - Animation Creator: uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
      - Process Monitor: queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader: polls the image queue and downloads the rendered animation files once they are complete.
    - Windows Azure:
      - Azure Storage: queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the image format.
    The service definition for the worker role, with the local storage configuration, is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="CloudRay" >
          <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
            <Imports>
            </Imports>
            <ConfigurationSettings>
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
            <LocalResources>
              <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
            </LocalResources>
          </WorkerRole>
        </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.

    Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate that a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

        public override void Run()
        {
            // Set environment variables
            string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
            string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

            LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
            string localStorageRootPath = rayStorage.RootPath;

            JobQueue jobQueue = new JobQueue("renderjobs");
            JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
            CloudRayBlob imageBlob = new CloudRayBlob("images");
            RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

            Frames = 0;

            while (true)
            {
                // Get the render job from the queue
                CloudQueueMessage jobMsg = jobQueue.Get();

                if (jobMsg != null)
                {
                    // Get the file details
                    string sceneFile = jobMsg.AsString;
                    string tgaFile = sceneFile.Replace(".pi", ".tga");
                    string jpgFile = sceneFile.Replace(".pi", ".jpg");

                    string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                    string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                    string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                    // Copy the scene file to local storage
                    sceneBlob.DownloadFile(sceneFilePath);

                    // Run the ray tracer.
                    string polyrayArguments =
                        string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                    Process polyRayProcess = new Process();
                    polyRayProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                    polyRayProcess.StartInfo.Arguments = polyrayArguments;
                    polyRayProcess.Start();
                    polyRayProcess.WaitForExit();

                    // Convert the image
                    string dtaArguments =
                        string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                    Process dtaProcess = new Process();
                    dtaProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                    dtaProcess.StartInfo.Arguments = dtaArguments;
                    dtaProcess.Start();
                    dtaProcess.WaitForExit();

                    // Upload the image to blob storage
                    imageBlob.UploadFile(jpgFilePath);

                    // Add a download job.
                    downloadQueue.Add(jpgFile);

                    // Delete the render job message
                    jobQueue.Delete(jobMsg);

                    Frames++;
                }
                else
                {
                    Thread.Sleep(1000);
                }

                // Log the worker role activity.
                roleLifecycleDataSource.Alive
                    ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
            }
        }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

        public class RoleLifecycle : TableServiceEntity
        {
            public string ServerName { get; set; }
            public string Status { get; set; }
            public DateTime StartTime { get; set; }
            public DateTime EndTime { get; set; }
            public long SecondsRunning { get; set; }
            public DateTime LastActiveTime { get; set; }
            public int Frames { get; set; }
            public string Comment { get; set; }

            public RoleLifecycle()
            {
            }

            public RoleLifecycle(string roleName)
            {
                PartitionKey = roleName;
                RowKey = Utils.GetAscendingRowKey();
                Status = "Started";
                StartTime = DateTime.UtcNow;
                LastActiveTime = StartTime;
                EndTime = StartTime;
                SecondsRunning = 0;
                Frames = 0;
            }
        }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups, I often start with 16 instances and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="16" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

    With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="256" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later, the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test, the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

    The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low-resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and to utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement on Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. (A minimal sketch of the job-submission side follows this list.)
· Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed, so schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
· If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
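To make the queue-based pattern in the first tip concrete, here is a minimal sketch of the job-submission side, written against the Windows Azure storage client library of that era. The queue name, connection-string handling, and frame-naming scheme are illustrative assumptions, not the actual CloudRay code, which wraps the storage client in its own helper classes.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class RenderJobSubmitter
{
    // Enqueue one message per frame; idle worker roles compete to dequeue
    // them, so the work balances itself across however many instances exist.
    public static void SubmitFrames(string connectionString, int frameCount)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueue jobQueue = account.CreateCloudQueueClient()
            .GetQueueReference("renderjobs"); // hypothetical queue name
        jobQueue.CreateIfNotExist();

        for (int frame = 1; frame <= frameCount; frame++)
        {
            // The message payload just identifies the frame to render.
            jobQueue.AddMessage(new CloudQueueMessage(
                string.Format("frame-{0:D4}", frame)));
        }
    }
}

With this design, scaling the farm is purely a matter of adding worker role instances; no job needs to be reassigned when the instance count changes.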

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:

1. Create network – what happens when we create a network, and how we can create multiple isolated networks.
2. Launch a VM – once we have networks we can launch VMs and connect them to networks.
3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.

In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like, and only later will we discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. Once the end game is clear and we know how the connectivity works, we will, in a later post, take a step back and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and knowing how to look at them, will be very helpful when trying to debug a deployment in case something does not work.

Use case #1: Create Network

Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack it is only available to the tenant who created it, or it can be defined as "shared", in which case it can be used by all tenants. A network can have multiple subnets, but for the purpose of this demonstration, and for simplicity, we will assume that each network has exactly one subnet.
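As an aside, a network meant to be used by all tenants can be created from the same CLI by passing the shared flag (the network name here is just an illustration):

# neutron net-create shared-net --shared

The examples below, however, use an ordinary tenant network.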
Creating a network from the command line will look like this:

# neutron net-create net1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5f833617-6179-4797-b7c0-7d420d84040c |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 1000                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9796e5145ee546508939cd49ad59d51f     |
+---------------------------+--------------------------------------+

Creating a subnet for this network will look like this:

# neutron subnet-create net1 10.10.10.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} |
| cidr             | 10.10.10.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.10.10.1                                     |
| host_routes      |                                                |
| id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           |
| ip_version       | 4                                              |
| name             |                                                |
| network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           |
| tenant_id        | 9796e5145ee546508939cd49ad59d51f               |
+------------------+------------------------------------------------+

We now have a network and a subnet; on the network topology view this looks like this:

Now let's dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created:

# ip netns list
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c

The name of the namespace is qdhcp-<network id> (see above). Let's look into the namespace and see what's in it:

# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c
    inet6 fe80::f816:3eff:fe1d:5c81/64 scope link
       valid_lft forever preferred_lft forever

We see two interfaces in the namespace: one is the loopback, and the other is an interface called "tap26c9b807-7c". This interface has the IP address 10.10.10.3, and it will also serve DHCP requests, as we will see later.
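As a quick sanity check, the namespace's routing table can be inspected the same way, confirming that the subnet is reachable through the tap interface (the output below is indicative, not captured from this deployment):

# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip route
10.10.10.0/24 dev tap26c9b807-7c  proto kernel  scope link  src 10.10.10.3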
Let's trace the connectivity of the "tap26c9b807-7c" interface from the namespace. First stop is OVS; we see that the interface connects to the bridge "br-int" on OVS:

# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port "tap26c9b807-7c"
            tag: 1
            Interface "tap26c9b807-7c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"

In the output above we have a veth pair whose two ends are called "int-br-eth2" and "phy-br-eth2"; this veth pair is used to connect two bridges in OVS, "br-eth2" and "br-int". In the previous post we explained how to check veth connectivity using the ethtool command. It shows that the two are indeed a pair:

# ethtool -S int-br-eth2
NIC statistics:
     peer_ifindex: 10
.
.

# ip link
.
.
10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
.
.

Note that "phy-br-eth2" is connected to the bridge "br-eth2", and one of this bridge's interfaces is the physical link eth2. This means that the network we have just created resulted in a namespace that is connected to the physical interface eth2. eth2 is the "VM network", the physical interface to which all the virtual machines connect.

About network isolation: OpenStack supports the creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels; this is configured as part of the initial setup, and in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism, a VLAN tag is allocated by Neutron from a pre-defined pool of VLAN tags and assigned to the newly created network. By provisioning VLAN tags to the networks, Neutron allows the creation of multiple isolated networks on the same physical link. The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs for networks: the VLAN allocation and provisioning is handled by Neutron, which keeps track of the VLAN tags and is responsible for allocating and reclaiming them. In the example above, net1 has the VLAN tag 1000; this means that whenever a VM is created and connected to this network, the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for a namespace as well: if we want to connect a namespace to a particular network, we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface "tap26c9b807-7c" has VLAN tag 1 assigned to it; if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2, and vice versa.
We can see this using the dump-flows command on OVS. For packets going to the VM network we see the modification done on br-eth2:

#  ovs-ofctl dump-flows br-eth2
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL
 cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL

For packets coming from the interface to the namespace we see the following modification:

#  ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL

To summarize: when a user creates a network, Neutron creates a namespace, and this namespace is connected through OVS to the "VM network". OVS also takes care of tagging packets from the namespace to the VM network with the correct VLAN tag, and modifies the VLAN tag for packets coming from the VM network to the namespace. Now let's see what happens when a VM is launched and how it is connected to the "VM network".

Use case #2: Launch a VM

Launching a VM can be done from Horizon or from the command line; this is how we do it from Horizon. Attach the network:

And launch.

Once the virtual machine is up and running we can see the associated IP using the nova list command:

# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------+
| ID                                   | Name         | Status | Task State | Power State | Networks        |
+--------------------------------------+--------------+--------+------------+-------------+-----------------+
| 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------+

The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to it. Let's trace the connectivity from the VM to the VM network on eth2, starting with the VM definition file. The configuration files of the VM, including the virtual disk(s) in the case of ephemeral storage, are stored on the compute node at /var/lib/nova/instances/<instance-id>/.
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called "tap53903a95-82", which is connected to a Linux bridge called "qbr53903a95-82":

<interface type="bridge">
  <mac address="fa:16:3e:fe:c7:87"/>
  <source bridge="qbr53903a95-82"/>
  <target dev="tap53903a95-82"/>
</interface>

Looking at the bridge using the brctl show command we see this:

# brctl show
bridge name       bridge id            STP enabled    interfaces
qbr53903a95-82    8000.7e7f3282b836    no             qvb53903a95-82
                                                      tap53903a95-82

The bridge has two interfaces, one connected to the VM ("tap53903a95-82") and another one ("qvb53903a95-82") connected to the "br-int" bridge on OVS:

# ovs-vsctl show
83c42f80-77e9-46c8-8560-7697d76de51c
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port "qvo53903a95-82"
            tag: 3
            Interface "qvo53903a95-82"
    ovs_version: "1.11.0"

As we showed earlier, "br-int" is connected to "br-eth2" on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow, end to end, looks like this: VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end of the veth pair) → eth2 (physical interface).

The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables rules cannot be applied to OVS bridges, we use a Linux bridge to apply them. In the future we hope to see this Linux bridge go away, with security group rules applied directly on the OVS bridge.

VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here. As a packet travels from the VM to the network and back, its VLAN tag is modified.

Use case #3: Serving a DHCP request coming from the virtual machine

In the previous use cases we have shown that both the namespace called qdhcp-<network id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both tag their packets with VLAN tag 1000. We also saw that the namespace has an interface with the IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet, they can ping each other; in this picture we see a ping from the VM (which was assigned 10.10.10.2) to the namespace:

The fact that they are connected and can ping each other can become very handy when something doesn't work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back lets us trace the disconnect using tcpdump or other monitoring tools.
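For example, to watch such a ping from inside the namespace, tcpdump can be attached to the tap interface (a sketch; the interface name is taken from the earlier examples and the output shown is indicative, not captured from this deployment):

# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n -i tap26c9b807-7c icmp
IP 10.10.10.2 > 10.10.10.3: ICMP echo request, id 1234, seq 1, length 64
IP 10.10.10.3 > 10.10.10.2: ICMP echo reply, id 1234, seq 1, length 64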
To serve DHCP requests coming from VMs on the network, Neutron uses a Linux tool called "dnsmasq"; this is a lightweight DNS and DHCP service, and you can read more about it here. If we look at the dnsmasq process on the control node with the ps command we see this:

dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal

The service binds to the tap interface in the namespace ("--interface=tap26c9b807-7c"). If we look at the hosts file we see this:

# cat /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host
fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2

If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87, which is the VM's MAC. This MAC address is mapped to IP 10.10.10.2, so when a DHCP request arrives with this MAC, dnsmasq will return 10.10.10.2. If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following:

# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n
19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310
19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325

To summarize: the DHCP service is handled by dnsmasq, which Neutron configures to listen on the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP, so when a DHCP request comes along the VM receives its assigned IP.

Summary

In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped us understand how an end-to-end connection is made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP, then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination, and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services that Neutron supports and see how they work. We will see that while there are some more components involved, for the most part the concepts are the same.

@RonenKofman

    Read the article

  • Hibernate session problem for transactions.

    - by Jani
Hi all, I am new to Hibernate and trying to integrate it with an existing Spring-based application. I configured a session factory, a transaction manager, and a transaction proxy template. I am also using the Quartz scheduler in this application. When I run the application, I get the following exception:

ERROR au.com.michaelpage.ctsgui.utils.OrganisationMergeProfileThread - Error while updating opportunity: Could not open Hibernate Session for transaction; nested exception is java.lang.IllegalStateException: Already value [org.springframework.jdbc.datasource.ConnectionHolder@9f6885] for key [weblogic.jdbc.common.internal.RmiDataSource@32b034] bound to thread [DefaultQuartzScheduler_Worker-0]

My Hibernate session configuration:

<bean id="sessionFactoryAU" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="profileAU" />
  </property>
  <property name="mappingResources">
    <list>
      <value>/au/com/michaelpage/ctsgui/hibernate/dao/mappings/Opportunity.hbm.xml</value>
      <value>/au/com/michaelpage/ctsgui/hibernate/dao/mappings/Position.hbm.xml</value>
      <value>/au/com/michaelpage/ctsgui/hibernate/dao/mappings/EventRole.hbm.xml</value>
    </list>
  </property>
  <property name="hibernateProperties">
    <props>
      <!-- Database Settings -->
      <prop key="hibernate.dialect">org.hibernate.dialect.SybaseDialect</prop>
      <prop key="hibernate.query.factory_class">org.hibernate.hql.ast.ASTQueryTranslatorFactory</prop>
      <!-- Cache settings -->
      <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
    </props>
  </property>
</bean>

<!-- Transaction manager for a Hibernate SessionFactory -->
<bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
  <property name="sessionFactory">
    <ref bean="sessionFactoryAU" />
  </property>
</bean>

<!-- Transaction template for Managers -->
<bean id="txProxyTemplateHibernateProfileAU" abstract="true" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager">
    <ref bean="txManager" />
  </property>
  <property name="transactionAttributes">
    <props>
      <prop key="create*">PROPAGATION_REQUIRED</prop>
      <prop key="save*">PROPAGATION_REQUIRED</prop>
      <prop key="update*">PROPAGATION_REQUIRED</prop>
      <prop key="delete*">PROPAGATION_REQUIRED</prop>
      <prop key="remove*">PROPAGATION_REQUIRED</prop>
      <prop key="get*">PROPAGATION_SUPPORTS</prop>
    </props>
  </property>
</bean>

<bean id="organisationMergeProfileMgrAU" parent="txProxyTemplateHibernateProfileAU">
  <property name="target">
    <bean class="au.com.michaelpage.ctsgui.mgr.profile.OrganisationMergeProfileMgrImpl">
      <property name="commonProfileDao">
        <ref bean="commonProfileDaoAU" />
      </property>
      <property name="organisationMergeProfileDao">
        <ref bean="organisationMergeDaoAU" />
      </property>
      <property name="hibernateOrganisationDAO">
        <ref bean="hibernateOrganisationDAOAU" />
      </property>
      <property name="hibernateOpportunityDAO">
        <ref bean="hibernateOpportunityDAOAU" />
      </property>
      <property name="hibernatePositionDAO">
        <ref bean="hibernatePositionDAOAU" />
      </property>
      <property name="hibernateEventRoleDAO">
        <ref bean="hibernateEventRoleDAOAU" />
      </property>
    </bean>
  </property>
</bean>

My Quartz scheduler configuration:

<bean id="organisationMergeJobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
  <property name="targetObject" ref="organisationMergeJob" />
  <property name="targetMethod" value="execute" />
  <property name="concurrent" value="false" />
</bean>
<bean id="organisationMergeProfileRegularCheckerTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
  <property name="jobDetail" ref="organisationMergeJobDetail" />
  <property name="repeatInterval">
    <util:constant static-field="au.com.michaelpage.ctsgui.common.Constants.CHECK_FREQUENCY" />
  </property>
</bean>

Here is the bean definition for 'organisationMergeJob':

<bean id="organisationMergeJob" class="au.com.michaelpage.ctsgui.utils.OrganisationMergeProfileThread">
  <property name="organisationMergeMgr" ref="organisationMergeMgr"/>
</bean>

<bean id="organisationMergeMgr" class="au.com.michaelpage.ctsgui.mgr.OrganisationMergeMgrImpl">
  <property name="organisationMergeDao" ref="organisationMergeDao"/>
</bean>

Any help to solve this? Thank you in advance.

Hi skaffman, here is the stack trace of the error:

Could not open Hibernate Session for transaction; nested exception is java.lang.IllegalStateException: Already value [org.springframework.jdbc.datasource.ConnectionHolder@5f2fb8] for key [weblogic.jdbc.common.internal.RmiDataSource@326b7b] bound to thread [DefaultQuartzScheduler_Worker-3]
Caused by: java.lang.IllegalStateException: Already value [org.springframework.jdbc.datasource.ConnectionHolder@5f2fb8] for key [weblogic.jdbc.common.internal.RmiDataSource@326b7b] bound to thread [DefaultQuartzScheduler_Worker-3]
at org.springframework.transaction.support.TransactionSynchronizationManager.bindResource(TransactionSynchronizationManager.java:163)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:526)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:350)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:262)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy73.updateEventRole(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy73.updateEventRole(Unknown Source)
at au.com.michaelpage.ctsgui.utils.OrganisationMergeProfileThread.execute(OrganisationMergeProfileThread.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:283)
at org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:272)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
at org.quartz.core.JobRunShell.run(JobRunShell.java:203)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:520)

Thank you. Here is the bean definition for 'organisationMergeMgr':

<bean id="organisationMergeMgr" class="au.com.michaelpage.ctsgui.mgr.OrganisationMergeMgrImpl">
  <property name="organisationMergeDao" ref="organisationMergeDao"/>
</bean>

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >