Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

  • DKIM passes everywhere apart from Yahoo!

    - by Ian
    Hi, I'm using dkim-milter, Postfix on Ubuntu (I think I used these instructions for setting up). Anyway, using the reflectors such as Port25, BlackOps and Altn.com I get passes for DKIM: X-DKIM: OpenDKIM Filter v2.0.1 medusa.blackops.org o2SGTMSg005616 Authentication-Results: medusa.blackops.org; dkim=pass (1024-bit key) [email protected]; dkim-adsp=pass dkim=pass header.d=example.com (b=miSIxi7TMX; 1:0:good); Authentication-Results: verifier.port25.com header.d=example.com; dkim=pass (matches From: [email protected]); Yahoo gives this: Authentication-Results: mta1031.mail.ukl.yahoo.com from=; domainkeys=neutral (no sig); from=example.com; dkim=permerror (key failed) Where, obviously, example.com is my site address. Is anyone aware of anything different with Yahoo! that would stop these from signing? TIA
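
    For reference, when only one provider reports "permerror (key failed)", a first sanity check is whether the public key record is retrievable over DNS, since problems with the key record are one way a single verifier can fail while the others pass. A minimal sketch, where "selector" stands in for whatever selector dkim-milter is actually signing with:
      dig +short TXT selector._domainkey.example.com
    The output should be a TXT string containing v=DKIM1 and the p= public key.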

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after the threshold of 17500 requests (connection established) in 24h, from a host running FreeBSD 9. What I came up with, after reading the pf-faq, is this rule:
      pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)
    NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:
      pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    PF creates 5 different rules, 1 for each IP that Google resolves. However I have the sense - without being 100% sure, I didn't have the chance to test it - that the number 17500/86400 applies to each IP. If that's the case - please confirm - then it's not what I want. In the pf-faq there's another option called source-track-global:
      source-track - This option enables the tracking of number of states created per source IP address. This option has two formats:
      + source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
      + source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits.
      The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.
    I tried to apply source-track-global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome since I'm an amateur and don't fully understand PF yet. Thanks
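
    For what it's worth, in pf.conf the global variant is written with a space inside the state options rather than hyphenated. A minimal, untested sketch of the syntax (I am not certain this combination actually yields one shared 24-hour cap across all five expanded rules, which is the goal above):
      pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (source-track global, max-src-conn 200, max-src-conn-rate 17500/86400)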

    Read the article

  • How to properly configure personal domain to send emails and pass spam filters? Is email forwarding enough?

    - by ChocoDeveloper
    I'm using my own domain from Namecheap, and another company for the mail hosting for my personal email. I configured my domain to forward *@mydomain.com to the account I was given in the mail hosting company. I can send and receive emails, but I'm wondering if the emails I send are being flagged as spam sometimes. I remember when I used my own mail server years ago, there were mechanisms for my domain to say "this mail server is allowed to send emails as [email protected]", like adding a TXT record or something. So the questions are: Is email forwarding enough? Will mail servers understand that the mail server is allowed to send emails on my behalf? Is there a testing mail server where I can send an email and be told whether it thinks it's spam?
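
    For reference, the half-remembered mechanism is SPF: a TXT record on the domain listing which servers may send mail for it. A minimal sketch, where "spf.mailhost.example" is a placeholder for whatever include (or IP range) the mail hosting company documents:
      mydomain.com.  IN  TXT  "v=spf1 include:spf.mailhost.example ~all"
    DKIM, if the hosting company supports it, works similarly via a TXT record under selector._domainkey.mydomain.com. Plain forwarding of *@mydomain.com only affects inbound mail; it says nothing about who may send as the domain.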

    Read the article

  • Altq limits not being applied to UDP transfers

    - by overkordbaever
    I have an OpenBSD server acting as a router/firewall with the packet filter ruleset shown below, a Linux server, and a Linux client. When transferring files (using netcat) by TCP, the limits are applied (for example the 100mbit limit in the example), though when transferring data by UDP, the limits aren't applied; the file always takes the same amount of time no matter the queue bandwidth limit I set (I can even turn off the queues completely, and will still get the same result). Why aren't the queuing rules applied to UDP packets? The rules used:
      #queue rules
      altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
      queue def bandwidth 0Mb cbq(default)
      queue low bandwidth 100Mb cbq
      #Passrules test
      pass out quick from $int_if to $ext_if queue low
      pass in quick from $ext_if to $int_if queue low
      pass out quick from $ext_if to $int_if queue low
      pass in quick from $int_if to $ext_if queue low
    I suppose this may be related to a question I've previously asked, though since it's more of a separate question, I suppose a separate question should be used for this.

    Read the article

  • How to sum cells depending on the content of a neighbor cell

    - by dannymcc
    I have an Excel document with the following columns:
      Date     | Reference | Amount
      23/01/11 | 111111111 | £20.00
      25/09/11 | 222222222 | £30.00
      11/11/11 | 111111111 | £40.00
      01/04/11 | 333333333 | £10.00
      31/03/11 | 333333333 | £33.00
      20/03/11 | 111111111 | £667.00
      21/11/11 | 222222222 | £564.00
    I am trying to find a way of summarising the content in the following way:
      Reference: 111111111  Total: £727
    So far the only way I have been able to achieve this is to filter the list by each reference number (manually) and then add a simple SUM formula to the bottom of the list of amounts. Are there any tricks that anyone knows that may speed this up? What I am trying to achieve is a spreadsheet that highlights each reference number whose amounts collectively exceed £2,000.
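
    For reference, a minimal sketch of the usual formula for this, assuming the data sits in columns A (Date), B (Reference) and C (Amount) with a header in row 1:
      =SUMIF($B$2:$B$8, "111111111", $C$2:$C$8)
    or, with the reference number typed into a cell (say E2) instead of hard-coded:
      =SUMIF($B$2:$B$8, E2, $C$2:$C$8)
    The same SUMIF, compared against 2000 inside a conditional formatting rule, would take care of the highlighting.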

    Read the article

  • ffmpeg encoding on QuadCore

    - by Gotys
    I am trying to push my server's CPU cores to the max, but with no success. I'm encoding 2-pass style and set my "-threads" to 128. When running the 2nd pass, the CPU seems to be at 98% usage, but the first-pass run totally ignores the "-threads" option. I'm using libx264. Here is my preset:
      flags=+loop+mv4 cmp=256 partitions=+parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 me_method=hex subq=7 trellis=1 refs=5 bf=3 flags2=+bpyramid+wpred+mixed_refs+dct8x8 coder=1 me_range=16 g=250 keyint_min=25 sc_threshold=40 i_qfactor=0.71 qmin=10 qmax=51 qdiff=4
    Is there any reason why the 1st pass is not utilizing my CPUs? Thank you in advance! This community has always been very kind to me.
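
    For reference, a sketch of the sort of two-pass run being described, with placeholder file names and bitrate; -threads 0 asks libx264 to pick a thread count from the detected cores instead of a fixed 128:
      ffmpeg -y -i input.mov -vcodec libx264 -b:v 2000k -threads 0 -pass 1 -an -f rawvideo /dev/null
      ffmpeg -i input.mov -vcodec libx264 -b:v 2000k -threads 0 -pass 2 -acodec copy output.mp4
    Whether the first pass actually saturates all cores still depends on the libx264 build and settings, so this is a sketch rather than a guaranteed fix.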

    Read the article

  • Intermittent FTP login issues (Microsoft IIS FTP Service)

    - by JaggenSWE
    I've got a somewhat weird problem which I'm not sure how to troubleshoot. We have an FTP server running on a Windows Server 2003 machine using the IIS FTP Service; this is for our clients and is configured with IP restrictions. However, now ONE of the clients has started complaining that they can't log in to the server from time to time. This is just ONE of 10+ clients that have this issue, which makes me think it's a problem on their side. Just to be on the safe side I had a peek into the FTP logs and found something strange. Whenever they succeed in logging in, this is what I can find in the logs:
      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191747]USER, userxxx, -,
      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 230, 0, [191747]PASS, -, -,
    However, if the login fails I see the following events:
      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191739]USER, userxxx, -,
      nnn.nnn.nnn.70, -, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 530, 1326, [191739]PASS, -, -,
    When you look at the event where the client sends the PASS in the successful login, it seems to know that it is in fact "userxxx" that is coupled to that PASS, but when it fails that seems to be lost, since the user in the PASS event is set to "-". Anyone have any ideas around this? Any help would be appreciated. :) //JaggenSWE

    Read the article

  • Root certificate authority works windows/linux but not mac osx - (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which, if I install it onto Windows, Linux, or even into the certificate store in Firefox (Windows/Linux/Mac OS X), will work perfectly with my terminating proxy. I have installed it into the system keychain and I have set the certificate to always trust. Within the Chrome browser details it says "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate" I used this code to create the certificate:
      openssl genrsa -des3 -passout pass:***** -out private/server.key 4096
      openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf
    If the issue is NOT that it is malformed (because it works everywhere else), then what else could it be? Am I installing it incorrectly? Update: I tried changing the certificate attributes, but to no avail:
      openssl genrsa -des -passout pass:***** -out private/server.key 2048
      openssl req -batch -passin pass:***** -new -x509 -nodes -sha256 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf
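
    For reference, a quick way to inspect exactly what ended up in the generated certificate (and to compare it with a CA certificate that OS X does accept) is:
      openssl x509 -in server.crt -noout -text
    Fields worth comparing include the version (X.509 v1 vs v3), the signature algorithm, and whether any basicConstraints/keyUsage extensions were picked up from the openssl.cnf at all.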

    Read the article

  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment where I have one Linux server, one OpenBSD router and one Linux client and I want to be able to limit how much bandwidth the client should be able to use. I've been performing these tests with "netcat" and "time" (using time to measure the time of the transfer with netcat), and what happens when trying these tests (using the TCP protocol, the queues will for some reason not work with UDP) is that the queues aren't exact at all. For example: when setting a bandwidth limit of 10mbit, the client cannot use more than five mbits, when setting a limit of 100mbit, the client cannot use more than around 50mbit. The config looks like (using a 100mbit limit in the example): #queue rules altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low } queue def bandwidth 0Mb cbq(default) queue low bandwidth 100Mb cbq(default) #Passrules test pass out quick from $int_if to $ext_if queue low pass in quick from $ext_if to $int_if queue low pass out quick from $ext_if to $int_if queue low pass in quick from $int_if to $ext_if queue low

    Read the article

  • Do not play previews of songs in Xbox Music

    - by flooooo
    I am currently using the trial of Xbox Music Pass on Windows 8 and discovered the following problem: a band's album has one or more songs that are not available to be played via Xbox Music Pass but only for purchase on the Xbox Music store. When I choose the option "Play album" it plays the songs available for Xbox Music Pass streaming in full, but for the songs only available for purchase it plays just the 30-second preview. Is there a setting to deactivate this that I just haven't found, or is it currently simply not possible?

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick because when on it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect that also opens the boxes server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation. I also have Synology NAS which apart from LAN file serving over AFP and WebDAV only serves up an OpenVPN/1194 and a PPTP/1732 server. When outside of the LAN I connect to this from my laptop over OpenVPN and over PPTP from my iPhone. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with a close to theoretical max throughput. The ruleset is as follows Inbound: PPTP/1723 Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range >corresponding to possible cell provider range OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any Outbound: Default outbound policy: Allow Always OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) OpenVPN/1194 UDP Allow always to 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) Block always from NAS to any On the Mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another and that's annoying on a media server. Instead I run Little Snitch which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows pf.conf (Apple App firewall tie-ins removed) (# replaced with % to avoid formatting errors) ### macro name for external interface. eth_if = "en0" vpn_if = "tap0" ### wifi_if = "en1" ### %usb_if = "en3" ext_if = $eth_if LAN="{10.0.0.0/24}" ### General housekeeping rules ### ### Drop all blocked packets silently set block-policy drop ### all incoming traffic on external interface is normalized and fragmented ### packets are reassembled. scrub in on $ext_if all fragment reassemble scrub in on $vpn_if all fragment reassemble scrub out all ### exercise antispoofing on the external interface, but add the local ### loopback interface as an exception, to prevent services utilizing the ### local loop from being blocked accidentally. ### set skip on lo0 antispoof for $ext_if inet antispoof for $vpn_if inet ### spoofing protection for all interfaces block in quick from urpf-failed ############################# block all ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090} ### Allow all udp and icmp also, necessary for Constellation. Could be tightened. 
pass on $eth_if proto {udp, icmp} from $LAN to any ### Allow AFP to 10.0.0.40 (NAS) pass out on $eth_if proto tcp from any to 10.0.0.40 port 548 ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs ### and port ranges pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201 ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110 ### Outgoing pings ok pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110 pass out on $vpn_if proto {tcp, udp, icmp} from any to any So what are my goals and what does the above setup achieve? (until you tell me otherwise :) 1) Full LAN access to the above ports on the mini/media server (including through my own VPN server) 2) All internet traffic from the mini/media server is anonymized and tunneled over VPN 3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked both because of pf and the router outgoing ruleset. It can't even do a DNS lookup through the router. So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable. The Problem at last! I want to run a minecraft server and I installed that on a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel because there are lots more port scans and hacking attempts through that than over my regular IP and I don't trust java in general. So I added the following pf rule on the mini: ### Allow Minecraft public through user mc pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc pass out on $eth_if proto {tcp, udp} from any to any user mc And these additions on the border firewall: Inbound: Allow always TCP/UDP from any to 10.0.0.40 (NAS) Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups) This works fine but only when the OpenVPN/Tunnelblick tunnel is down. When it is up, no connection is possible to the minecraft server from outside of the LAN; inside the LAN it is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service. The Solution? How can I open up the minecraft server port outside of the tunnel so it's only available over en0 not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting...stumbles How secure would you estimate this setup to be and do you have other improvements to share? I've searched extensively in the last few days to no avail...If you've read this far I bet you know the answer :)
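
    For reference, one pf mechanism aimed at exactly this symptom (replies following the default route into the tunnel) is reply-to, which sends return traffic for a state back out the interface and gateway it arrived on. A minimal, untested sketch for the rule on the mini, assuming 10.0.0.1 is the LAN gateway on en0 (substitute your own):
      pass in on $eth_if reply-to ($eth_if 10.0.0.1) proto tcp from any to any port 24983 user mc keep state
    This would keep the Minecraft traffic pinned to en0 regardless of the redirect_gateway push, without needing to know the connecting IPs in advance.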

    Read the article

  • Puppet inheritance of parametrized classes

    - by paweloque
    I have a situation in Puppet where I want to inherit from a parametrized class:
      class base ($basepath) { ... }
      class extends_base ($ext_param) inherits base { ... }
    Now, trying to instantiate the extends_base class, I get the following error message:
      Must pass basepath to Class[Base]
    However, I don't see a way to pass the basepath parameter to the Base class. I tried to pass the param in the Class[Extends_base] definition, but Puppet doesn't like this either.
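
    For reference, a sketch of the workaround usually suggested instead of inherits for parametrized classes: give the child class the parent's parameter too and declare the parent with a resource-like declaration (the class and parameter names simply mirror the question):
      class extends_base ($ext_param, $basepath) {
        class { 'base':
          basepath => $basepath,
        }
        # ... rest of extends_base ...
      }
    Note that this composes rather than inherits, so it will not let you override resources defined in base the way inherits does.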

    Read the article

  • MSBuild condition across projects

    - by Billy Talented
    I have tried several solutions for this problem - enough to know I do not know enough about MSBuild to do this elegantly, but I feel like there should be a method. I have a set of libraries for working with .NET projects. A few of the projects utilize System.Web.Mvc - which recently released Version 2 - and we are looking forward to the upgrade. Currently, sites which reference this library reference it directly by the project (csproj) on the developer's computer - not a built version of the library - so that changes and source code can easily be viewed when dealing with code from this library. This works quite well and I would prefer not to have to switch to binary references (but will if this is the only solution). The problem I have is that because of some of the functionality that was added onto the MVC1-based library (view engines, model binders etc.) several of the sites reliant on these libraries need to stay on MVC1 until we have fully evaluated and tested them on MVC2. I would prefer not to have to fork or have two copies on each dev machine. So what I would like to be able to do is set a property group value in the referencing web application and have this read by the above-mentioned library, with the caveat that when working directly on the library via its containing solution I would like to be able to control this via Configuration Manager by selecting a build type, with that property overriding the build behavior of the solution (i.e. 'Debug - MVC1' vs 'Debug - MVC2'). I have this working via:
      <Choose>
        <When Condition=" '$(Configuration)|$(Platform)' == 'Release - MVC2|AnyCPU' Or '$(Configuration)|$(Platform)' == 'Debug - MVC2|AnyCPU'">
          <ItemGroup>
            <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
            </Reference>
            <Reference Include="Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\Dependancies\Web\MVC2\Microsoft.Web.Mvc.dll</HintPath>
            </Reference>
          </ItemGroup>
        </When>
        <Otherwise>
          <ItemGroup>
            <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath>
            </Reference>
          </ItemGroup>
        </Otherwise>
      </Choose>
    The item that I am struggling with is the cross-solution issue (solution TheWebsite references this project and needs to control which build property to use); I have not found a way to handle that which I think is a solid solution and that enables the build within Visual Studio to work as it has to date. Other bits: we are using VS2008, Resharper, TeamCity for CI, SVN for source control.

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people 'made noise' about the release... at least on the asp.net blog site; usually there's a big 'WOOHAA… <something> is released' kind of thing. Hmm… but here's the reason I'm writing this post. I'm not sure how many of you read the release notes before downloading the version.. I did, I did, I did. Now, there's a 'Known issues' section in the document, and I'm quoting the text as-is from this section: Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project. This definitely looks like a bug to me; see below for a visual: At the top right corner you'll see that the Solution Explorer is set to auto hide and there's no reference for the TestMvc2 project, and that is the reason we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we've shown the world how to resolve a major issue and to live in Peace with the rest of humanity!
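
    For reference, the second workaround boils down to adding something like the following to the test project's .csproj (or using Add Reference > Projects in the IDE); the path, name and GUID are placeholders for whatever your MVC 2 project actually uses:
      <ItemGroup>
        <ProjectReference Include="..\MyMvc2App\MyMvc2App.csproj">
          <Project>{00000000-0000-0000-0000-000000000000}</Project>
          <Name>MyMvc2App</Name>
        </ProjectReference>
      </ItemGroup>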

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in Process Flow and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start.
    Enable execution of the User Defined activity
    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):
      echo Hello World! > /tmp/test.txt
    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS oracle user. For simplicity, we just use the following command.
      chmod +x /tmp/test.sh
    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:
      property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED
    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner, which is executed by the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable the execution of the User Defined activity by setting:
      property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA
    Restart the Control Center service for the change of setting to take effect.
      cd <ORACLE_HOME>/owb/rtp/sql
      sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
      sqlplus OWBSYS/<password of OWBSYS> @start_service.sql
    And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated the file /tmp/test.txt, containing the line Hello World!.
    Pass parameters to the User Defined activity
    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That is to say, it lacks interaction. Maybe, sometimes, such a Process Flow can fulfill the business requirement. But most of the time, we need to get the User Defined activity executed according to some information prior to that step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script. First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character on the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc,' 'def,' and 'ghi' you can use the following equivalents:
      ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi|
    If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\'. For example '\\'. If '\' is the token character, then '/' becomes the escape character. Let's configure the PARAMETER_LIST parameter as below: And modify the shell script /tmp/test.sh as below:
      echo $1 is saying hello to $2! > /tmp/test.txt
    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice! In the example above, the parameters passed into the shell script are static. This case is not so useful because, instead of passing parameters, we could directly write the values of the parameters in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as below: The Mapping activity MAPPING_1 has two output parameters: FROM_USER, TO_USER. The User Defined activity has two input parameters: FROM_USER, TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER, VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity like below: Now the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution.
    Write the shell script within Oracle Warehouse Builder
    In the previous sections, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly:
      COMMAND: Set the path of the interpreter by which the shell script will be interpreted.
      PARAMETER_LIST: Set it blank.
      SCRIPT: Enter the shell script content.
    Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. As for how to actually pass parameters to the shell script, we can utilize variable substitutions. As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity. So will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some system pre-defined substitution variables. You can refer to the online document for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A!
    Leverage the return value of the User Defined activity
    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?
    1. The simplest way is to add three simple-conditioned outgoing transitions for the User Defined activity, just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS; otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.
    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, in our script, we only had this line:
      echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
    We can add more logic to it and return different values accordingly:
      echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
      if CONDITION_1 ; then ...... exit 0 fi
      if CONDITION_2 ; then ...... exit 2 fi
      if CONDITION_3 ; then ...... exit 3 fi
    After that we can leverage the result by checking RESULT_CODE in the condition expression of those outgoing transitions. Let's suppose that we have the Process Flow as in the following graph (SUB_PROCESS_n stands for further, different processes): We can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 like this: Other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1. In a Linux system, if the shell script comes across a system error like an IO error, the return value will be 1. We can explicitly handle such a return value.
    Summary
    Let's summarize what has been discussed in this article:
      How to create a Process Flow with a User Defined activity in it
      How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
      How to write the shell script within Oracle Warehouse Builder
      How to do variable substitutions
      How to let the User Defined activity return different values and in what ways we can leverage them
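
    As an appendix to the return-value section above, here is a concrete (if simplified) sketch of that branching script; the two string checks are pure placeholders for whatever real conditions the flow needs:
      #!/bin/bash
      # Write the greeting as before; ${FROM_USER} and ${TO_USER} are substituted by OWB before execution
      echo "${FROM_USER} is saying hello to ${TO_USER}!" > /tmp/test.txt
      # Return different codes so the outgoing transitions can branch on RESULT_CODE
      if [ "${FROM_USER}" = "USER B" ]; then
          exit 0    # normal case
      elif [ -z "${TO_USER}" ]; then
          exit 2    # missing recipient
      else
          exit 3    # anything else
      fi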

    Read the article

  • Mount SMB / AFP 13.10

    - by Jeffery
    I cannot seem to get Ubuntu to mount a mac share via SMB or AFP. I've tried the following... AFP: apt-get install afpfs-ng-utils mount_afp afp://user:password@localip/share /mnt/share Error given: "Could not connect, never got a reponse to getstatus, Connection timed out". Which is odd as I can access the share just fine via Mac. SMB: apt-get install cifs-utils nano /etc/fstab added the following line "//localip/share /mnt/share cifs username=user,password=pass,iocharset=utf8,sec=nltm 0 0" mount -a Error given: root@Asrock:~# mount -a -vvv mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "//10.0.1.3/NAS" mount: node: "/mnt/NAS" mount: types: "cifs" mount: opts: "username=user,password=pass,iocharset=utf8,sec=nltm" mount: external mount: argv[0] = "/sbin/mount.cifs" mount: external mount: argv[1] = "//10.0.1.3/NAS" mount: external mount: argv[2] = "/mnt/NAS" mount: external mount: argv[3] = "-v" mount: external mount: argv[4] = "-o" mount: external mount: argv[5] = "rw,username=user,password=pass,iocharset=utf8,sec=nltm" mount.cifs kernel mount options: ip=10.0.1.3,unc=\\10.0.1.3\NAS,iocharset=utf8,sec=nltm,user=user,pass=* mount error(22): Invalid argument Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I don't really care which it uses I just want it to work! Am I doing something wrong?
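
    For reference, a sketch of a cifs fstab line with the security option spelled sec=ntlm - the transposed "nltm" in the options above is not a value mount.cifs recognizes (valid spellings include ntlm, ntlmv2 and ntlmssp); the credentials file path is a placeholder:
      //10.0.1.3/NAS /mnt/NAS cifs credentials=/etc/cifs-credentials,iocharset=utf8,sec=ntlm 0 0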

    Read the article

  • Test Driven Development (TDD) in Visual Studio 2010- Microsoft Mondays

    - by Hosam Kamel
    On November 14th I will be presenting at Microsoft Mondays a session about Test Driven Development (TDD) in Visual Studio 2010. Microsoft Mondays is a program consisting of a series of webcasts showcasing various Microsoft products and technologies. Each Monday we discuss a particular topic pertaining to development, infrastructure, Office tools, ERP, client/server operating systems etc. The webcast will be broadcast via Lync and can be viewed from a web client. The idea behind the "Microsoft Mondays" program is to help you become more proficient in the products and technologies that you use and help you utilize their full potential.
    Test Driven Development in Visual Studio 2010 - Level 300 (Intermediate – Advanced)
    Test Driven Development (TDD), also frequently referred to as Test Driven Design, is a development methodology where developers create software by first writing a unit test, then writing the actual system code to make the unit test pass. The unit test can be viewed as a small specification around how the system should behave; writing it first helps the developer to focus on only writing enough code to make the test pass, thereby helping ensure a tight, lightweight system which is specifically focused on meeting the documented requirements. TDD follows a cadence of "Red, Green, Refactor." Red refers to the visual display of a failing test – the test you write first will not pass because you have not yet written any code for it. Green refers to the step of writing just enough code in your system to make your unit test pass – your test runner's UI will now show that test passing with a green icon. Refactor refers to the step of refactoring your code so it is tighter, cleaner, and more flexible. This cycle is repeated constantly throughout a TDD developer's workday.
    Date: November 14, 2011
    Time: 10:00 a.m. – 11:00 a.m. (GMT+3)
    http://www.eventbrite.com/event/2437620990/efbnen?ebtv=F
    See you there! Hosam Kamel
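
    For reference, a minimal red/green sketch of that cadence in MSTest; the Calculator class is hypothetical and is deliberately written after the test, which is what produces the red step:
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class CalculatorTests
      {
          [TestMethod]
          public void Add_ReturnsSumOfTwoNumbers()
          {
              // Red: fails (or does not even compile) until Calculator.Add exists
              var calculator = new Calculator();
              Assert.AreEqual(5, calculator.Add(2, 3));
          }
      }

      // Green: just enough production code to make the test pass; refactor afterwards
      public class Calculator
      {
          public int Add(int a, int b) { return a + b; }
      }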

    Read the article

  • Regular-Expressions.info Thoroughly Updated

    - by Jan Goyvaerts
    RegexBuddy 4 was released earlier this month. This is a major upgrade that significantly improves RegexBuddy's ability to emulate the features and deficiencies of the latest versions of all the popular regex flavors, as well as many past versions of these flavors. Along with that, the Regular-Expressions.info website has been thoroughly updated with new content. Both the tutorial and reference sections have been significantly expanded to cover all the features of the latest regular expression flavors. There are also new tutorial and reference subsections that explain the syntax used by replacement strings when searching and replacing with regular expressions. I'm also reviving this blog. In the coming weeks you can expect blog posts that highlight the new topics on the Regular-Expressions.info website. Later on I'll blog about more intricate regex-related issues that RegexBuddy 4 emulates but that the website doesn't talk about or only mentions in passing. RegexBuddy 4.0.0 is aware of 574 different aspects (syntactic and behavioral differences) of 94 regular expression flavors. These numbers are sure to grow with future 4.x.x releases. While RegexBuddy juggles it all with ease, that's far too much detail to cover in a tutorial or reference that any person would want to read. So the tutorial and reference cover the important features and behaviors, while the blog will serve up the corner cases as tidbits. Subscribe to the Regex Guru RSS Feed if you don't want to miss any articles.

    Read the article

  • Clipping polygons in XNA with stencil (not using spritebatch)

    - by Blau
    The problem... I'm drawing polygons, in this case boxes, and I want to clip child polygons with their parent's client area.
      // Class Region
      public void Render(GraphicsDevice Device, Camera Camera)
      {
          int StencilLevel = 0;
          Device.Clear( ClearOptions.Stencil, Vector4.Zero, 0, StencilLevel );
          Render( Device, Camera, StencilLevel );
      }
      private void Render(GraphicsDevice Device, Camera Camera, int StencilLevel)
      {
          Device.SamplerStates[0] = this.SamplerState;
          Device.Textures[0] = this.Texture;
          Device.RasterizerState = RasterizerState.CullNone;
          Device.BlendState = BlendState.AlphaBlend;
          Device.DepthStencilState = DepthStencilState.Default;
          Effect.Prepare(this, Camera );
          Device.DepthStencilState = GlobalContext.GraphicsStates.IncMask;
          Device.ReferenceStencil = StencilLevel;
          foreach ( EffectPass pass in Effect.Techniques[Technique].Passes )
          {
              pass.Apply( );
              Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount );
          }
          foreach ( Region child in ChildrenRegions )
          {
              child.Render( Device, Camera, StencilLevel + 1 );
          }
          Effect.Prepare( this, Camera );
          // This does not work
          Device.BlendState = GlobalContext.GraphicsStates.NoWriteColor;
          Device.DepthStencilState = GlobalContext.GraphicsStates.DecMask;
          Device.ReferenceStencil = StencilLevel; // This should be +1, but in that case the last drawn is blue and overlaps everything
          foreach ( EffectPass pass in Effect.Techniques[Technique].Passes )
          {
              pass.Apply( );
              Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount );
          }
      }
      public static class GraphicsStates
      {
          public static BlendState NoWriteColor = new BlendState( )
          {
              ColorSourceBlend = Blend.One,
              AlphaSourceBlend = Blend.One,
              ColorDestinationBlend = Blend.InverseSourceAlpha,
              AlphaDestinationBlend = Blend.InverseSourceAlpha,
              ColorWriteChannels1 = ColorWriteChannels.None
          };
          public static DepthStencilState IncMask = new DepthStencilState( )
          {
              StencilEnable = true,
              StencilFunction = CompareFunction.Equal,
              StencilPass = StencilOperation.IncrementSaturation,
          };
          public static DepthStencilState DecMask = new DepthStencilState( )
          {
              StencilEnable = true,
              StencilFunction = CompareFunction.Equal,
              StencilPass = StencilOperation.DecrementSaturation,
          };
      }
    How can I achieve this? EDIT: I've just realized that the NoWriteColors.ColorWriteChannels1 should be NoWriteColors.ColorWriteChannels. :) Now it's clipping right. Any other approach?

    Read the article

  • Screen Resolution stuck at 640x480 after installing Bumblebee

    - by Saurabh Agarwal
    I have a Dell XPS 15z laptop. As you can see here, there are some issues with the NVidia drivers. The site recommends installation of Bumblebee (instructions given in the link). I am posting them again for ease:
      $ sudo add-apt-repository ppa:bumblebee/stable
      $ sudo apt-get update && sudo apt-get upgrade
      $ sudo apt-get install bumblebee bumblebee-nvidia
      $ sudo usermod -a -G bumblebee $USER
    After restarting the computer, however, the screen resolution was stuck at 640x480 and I got the following error message as soon as I logged in:
      **Could not apply the stored configuration for monitors**
      none of the selected modes were compatible with the possible modes:
      Trying modes for CRTC 63
      CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
      CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
      Trying modes for CRTC 64
      CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
      CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
    Prior to the update the display was absolutely normal, so there is no doubt about the cause, although there was no support for the graphics drivers. In case it helps, some features of the graphics drivers seem to be functional after Bumblebee, i.e. all features are in order except for the resolution. And if the resolution can't be fixed, please suggest a way to retract the changes so that at least the prior state can be restored. Any help in the matter would be highly appreciated.
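
    For reference, a hedged stopgap while the driver situation is sorted out is to re-add the panel's native mode with xrandr; the output name (LVDS1 here) must match whatever "xrandr -q" reports, and the --newmode arguments are taken from the Modeline that cvt prints rather than typed by hand:
      cvt 1366 768 60
      xrandr --newmode "1366x768_60.00" <Modeline values printed by cvt>
      xrandr --addmode LVDS1 "1366x768_60.00"
      xrandr --output LVDS1 --mode "1366x768_60.00"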

    Read the article

  • Less Than Four Weeks Away: Oracle OpenWorld Latin America

    - by Oracle OpenWorld Blog Team
    It's only four weeks and counting to Oracle OpenWorld Latin America 2012 in São Paulo. There are dozens of sessions in seven technology tracks that you won't want to miss. And dozens of interesting, innovative, and exciting sponsors and exhibitors you'll want to be sure to talk to in the Exhibition Hall, not to mention the Oracle demos there that you'll want to experience first-hand. There are three ways to experience Oracle OpenWorld Latin America:  The Oracle OpenWorld conference pass gets you access to all keynotes, sessions, demos, labs, networking events, and more The Oracle OpenWorld and JavaOne conference pass gets you access to all of the above, for BOTH Oracle OpenWorld and JavaOne The Discover pass gets you access to the Exhibition Hall, where you'll be able to see and talk with sponsors and exhibitors, and check out all of the Oracle demos The sooner you sign up the more you save. Savings are greatest between now and 16 November. From 17 November you'll still save significantly over the onsite price if you register before 3 December.  And by the way, the Discover pass comes at no charge if you register by 3 December. So don't wait: Register Now!

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file:
      Ambient/Diffuse/Specular/Emissive Reflectivity Coefficients (Ka, Kd, Ks, Ke)
      Shininess
      Diffuse Map
      Specular Map
      Normal Map
    I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but do you not require the diffuse, specular and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers and then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
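
    For reference, a minimal sketch of a geometry-pass fragment shader along these lines in GLSL 330; every uniform and varying name here is an assumption, and the material coefficients are left out for brevity. The point is that the maps are sampled in this pass and only the resolved values are stored, so the lighting pass never needs the textures themselves:
      #version 330 core
      in vec3 vViewPos;     // view-space position from the vertex shader
      in vec2 vTexCoord;
      in mat3 vTBN;         // tangent/bitangent/normal basis, view space
      uniform sampler2D uDiffuseMap;
      uniform sampler2D uSpecularMap;
      uniform sampler2D uNormalMap;
      layout (location = 0) out vec4 gAlbedoSpec;   // rgb = diffuse albedo, a = specular intensity
      layout (location = 1) out vec3 gNormal;
      layout (location = 2) out vec3 gPosition;
      void main()
      {
          vec3 n = texture(uNormalMap, vTexCoord).rgb * 2.0 - 1.0;   // unpack tangent-space normal
          gNormal = normalize(vTBN * n);                             // store the perturbed view-space normal
          gAlbedoSpec.rgb = texture(uDiffuseMap, vTexCoord).rgb;
          gAlbedoSpec.a = texture(uSpecularMap, vTexCoord).r;
          gPosition = vViewPos;
      }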

    Read the article

  • Obscure SPUtility.SendMail Behavior When Manually Passing in Mail Headers

    - by Damon
    There are two ways to send mail in SharePoint: you can either use the mail components from the System.Net namespace, or you can send email using SharePoint's SPUtility.SendMail method. One of the benefits of the SPUtility.SendMail method is that it uses the mail configuration from SharePoint, so you can manage settings in Central Administration instead of having to go through and modify your web.config file. SPUtility.SendMail can get the job done, but it's definitely not as developer friendly as the components from the System.Net namespace. If you want to CC someone on an email, for example, you do NOT have a nice CC parameter - you have to manually add the CC mail header and pass it into the SPUtility.SendMail method. I had to do this the other day, and ran into a really obscure issue. If you do NOT pass the headers into the method, then SharePoint sends the email using the From Address configured in the Outgoing Mail settings in Central Admin. If you pass headers into the method, but do not include the from header, then SharePoint sends the mail using the email address of the current user. This can be an issue if your mail server is set up to reject an email from an invalid email address or an email address that is not on your domain. The way to fix this issue is to always pass in the from header. If you want to use the configured From address, then you can do the following:
      SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
      StringDictionary headers = new StringDictionary();
      headers.Add("from", webApp.OutboundMailSenderAddress);
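
    For completeness, a sketch of how those headers end up being used with the SendMail overload that accepts a StringDictionary; the addresses are placeholders and this assumes the code runs inside a SharePoint request context:
      SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
      StringDictionary headers = new StringDictionary();
      headers.Add("to", "[email protected]");
      headers.Add("cc", "[email protected]");
      headers.Add("from", webApp.OutboundMailSenderAddress);  // always set this, per the issue above
      headers.Add("subject", "Status update");
      SPUtility.SendMail(SPContext.Current.Web, headers, "Message body");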

    Read the article

  • Are first-class functions a substitute for the Strategy pattern?

    - by Prog
    The Strategy design pattern is often regarded as a substitute for first-class functions in languages that lack them. So, for example, say you wanted to pass functionality into an object. In Java you'd have to pass the object another object which encapsulates the desired behavior. In a language such as Ruby, you'd just pass the functionality itself in the form of an anonymous function. However, I was thinking about it and decided that maybe Strategy offers more than a plain anonymous function does. This is because an object can hold state that exists independently of the period when its method runs, whereas an anonymous function by itself can only hold state that ceases to exist the moment the function finishes execution. So my question is: when using a language that features first-class functions, would you ever use the Strategy pattern (i.e. encapsulate the functionality you want to pass around in an explicit object), or would you always use an anonymous function? When would you decide to use Strategy when you can use a first-class function?
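
    For reference, a small Ruby sketch of the distinction being drawn (all names are made up): the lambda is behaviour only, while the strategy object carries state that is still there after the call site has finished with it.
      words = %w[pear fig banana kiwi]

      # First-class function: behaviour only
      by_length = ->(a, b) { a.length <=> b.length }
      words.sort(&by_length)

      # Strategy object: same behaviour, plus state that outlives the sort
      class CountingComparator
        attr_reader :comparisons
        def initialize
          @comparisons = 0
        end
        def call(a, b)
          @comparisons += 1
          a.length <=> b.length
        end
      end

      strategy = CountingComparator.new
      words.sort { |a, b| strategy.call(a, b) }
      puts strategy.comparisons   # still readable after sorting has finished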

    Read the article
