Search Results

Search found 20360 results on 815 pages for 'capture output'.

Page 13/815

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also prevents parallelism from being used. By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 100,000 rows.

        Drop table Parent
        Drop table Child
        go
        create table Parent (id integer identity Primary Key, data1 char(255))
        Create Table Child (id integer Primary Key)
        go
        insert into Parent(data1)
        Select top 1000000 NULL from sys.columns a cross join sys.columns b
        insert into Child
        Select id from Parent
        go

    If we then execute

        update Parent
           set data1 = ''
          from Parent
          join Child on Parent.Id = Child.Id
         where Parent.Id % 100 = 1
           and Child.id % 100 = 1

    we should see an execution plan that uses parallelism. However, if the OUTPUT clause is now used

        update Parent
           set data1 = ''
        output inserted.id
          from Parent
          join Child on Parent.Id = Child.Id
         where Parent.Id % 100 = 1
           and Child.id % 100 = 1

    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome. Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally, if you use a table variable then there is still no parallelism.
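    For reference, a minimal sketch of the variant mentioned in the update above: capturing the OUTPUT rows into a temporary table with INTO (the #ids name is purely illustrative).

        -- capture the modified ids into a temp table; per the update above this
        -- form still allows a parallel plan, whereas a table variable would not
        create table #ids (id integer not null)

        update Parent
           set data1 = ''
        output inserted.id into #ids (id)
          from Parent
          join Child on Parent.Id = Child.Id
         where Parent.Id % 100 = 1
           and Child.id % 100 = 1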

    Read the article

  • High Speed Photographs Capture Pellet Gun Destruction

    - by Jason Fitzpatrick
    What do you get when you combine high speed flash photography, a carefully focused camera, and a pellet gun? Gloriously detailed pictures of pellets tearing apart fruit, cans, ceramic gnomes, and more. Alan Sailer has a passion; in his garage studio he photographs all manner of objects–bottles, raspberries, candy, soda cans–at the moment a pellet shot from a pellet gun tears them apart. The results are beautiful and reminiscent of the early high-speed photos by photography pioneer Harold Edgerton. Hit up the link below to check out the collection and read more about his process. Alan Sailer’s High Speed Photographs [via FlavorWire]

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11, C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;

            return output;
        }
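    One hedged observation, offered as a sketch rather than a verified fix: the domain shader above copies tex, tex2, normal, tangent and binormal from patch[0] only, so every vertex generated inside a patch gets identical values, which would produce exactly the large single-colour patches described. Interpolating those attributes with the barycentric coordinates, the same way vertexPosition already is, would look roughly like this inside ColorDomainShader:

        output.tex      = uvwCoord.x * patch[0].tex      + uvwCoord.y * patch[1].tex      + uvwCoord.z * patch[2].tex;
        output.tex2     = uvwCoord.x * patch[0].tex2     + uvwCoord.y * patch[1].tex2     + uvwCoord.z * patch[2].tex2;
        output.normal   = normalize(uvwCoord.x * patch[0].normal   + uvwCoord.y * patch[1].normal   + uvwCoord.z * patch[2].normal);
        output.tangent  = normalize(uvwCoord.x * patch[0].tangent  + uvwCoord.y * patch[1].tangent  + uvwCoord.z * patch[2].tangent);
        output.binormal = normalize(uvwCoord.x * patch[0].binormal + uvwCoord.y * patch[1].binormal + uvwCoord.z * patch[2].binormal);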

    Read the article

  • Webcast: Optimize Accounts Payable Through Automated Invoice Processing

    - by kellsey.ruppel(at)oracle.com
    Is your accounts payable process still very labor-intensive? Then discover how Oracle can help you eliminate paper, automate data entry and reduce costs by up to 90% - while saving valuable time through fewer errors and faster lookups. Join us on Tuesday, March 22 at 10 a.m. PT for this informative Webcast where Jamie Rancourt and Brian Dirking will show how you can easily integrate capture, forms recognition and content management into your PeopleSoft and Oracle E-Business Suite accounts payable systems. You will also see how The Home Depot, Costco and American Express have achieved tremendous savings and productivity gains by switching to automated solutions. Learn how you can automate invoice scanning, indexing and data extraction to:
      • Improve speed and reduce errors
      • Eliminate time-consuming searches
      • Utilize vendor discounts through faster processing
      • Improve visibility and ensure compliance
      • Save costs in accounts payable and other business processes
    Register today!

    Read the article

  • Requirement, architecture data capture tool

    - by Deno
    Are there any tools available for the following use case? I am planning to write a complex application, and right now I only know the basic functional requirements. I am refining the functionality with more and more details. I am also writing down how to implement this from a software architecture perspective. I am in a situation where there are multiple ways to implement a certain functionality and, based on the selected approach, other parts of the program will also change. At this moment I don't want to decide on the approach to use; I just want to list down all the options, present them to a wider audience and get it finalized. Are there any standard software tools to do this? I know I can use MS Excel or some mind mapper tools, but I am thinking of the availability of some standard tools (not just for programmers, but for managers and others). Thanks, Den

    Read the article

  • Sorting Algorithm : output

    - by Aaditya
    I faced this problem on a website and I can't quite understand the output; please help me understand it. Bogosort is a dumb algorithm which shuffles the sequence randomly until it is sorted. But here we have tweaked it a little, so that if after the last shuffle several first elements end up in the right places we will fix them and not shuffle those elements any further. We will do the same for the last elements if they are in the right places. For example, if the initial sequence is (3, 5, 1, 6, 4, 2) and after one shuffle we get (1, 2, 5, 4, 3, 6) we will keep 1, 2 and 6 and proceed with sorting (5, 4, 3) using the same algorithm. Calculate the expected amount of shuffles for the improved algorithm to sort the sequence of the first n natural numbers, given that no elements are in the right places initially.

        Input:
        2
        6
        10

        Output:
        2
        1826/189
        877318/35343

    For each test case, output the expected amount of shuffles needed for the improved algorithm to sort the sequence of the first n natural numbers in the form of irreducible fractions. I just can't understand the output.
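    As a worked check of the first case (assuming each shuffle is a uniform random permutation of the unfixed elements): for n = 2 the only starting sequence with no element in place is (2, 1), and each shuffle sorts it with probability 1/2, so the number of shuffles is geometric and E[T] = 1 / (1/2) = 2, matching the first output line. The larger answers, 1826/189 for n = 6 and 877318/35343 for n = 10, are the same kind of expectation taken over the improved algorithm's recursion onto the unfixed middle segment.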

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a MySQL database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default "44 * * * * root cd / && run-parts --report /etc/cron.hourly". If I use su to change to root and do "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional three lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.

    Read the article

  • Capture an area of a game, display it in a small window

    - by steakbbq
    I am looking to make a program that accomplishes some simple goals. I need to be able to specify an area of my screen to have reproduced in a window, similar to how the Windows Magnifier works. I also need the window to stay on top, to be transparent, and to be ghost-like (mouse clicks go through it) so the application below it can still be interacted with. Here is what I am trying to do - what would be the best way to go about it? http://i.imgur.com/0ahi7.jpg
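    A minimal sketch of the window part, assuming a C#/WinForms overlay (the question doesn't name a language, so treat the class and property choices as illustrative): WS_EX_TRANSPARENT makes mouse clicks fall through to whatever is underneath, WS_EX_LAYERED plus Opacity gives the see-through look, and TopMost keeps the window above the game.

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        class OverlayForm : Form
        {
            const int WS_EX_TRANSPARENT = 0x20;     // mouse input passes straight through
            const int WS_EX_LAYERED     = 0x80000;  // needed for per-window transparency

            public OverlayForm()
            {
                FormBorderStyle = FormBorderStyle.None;
                TopMost = true;          // stay above the game window
                Opacity = 0.6;           // partially transparent
                ShowInTaskbar = false;
                Size = new Size(300, 200);
            }

            protected override CreateParams CreateParams
            {
                get
                {
                    CreateParams cp = base.CreateParams;
                    cp.ExStyle |= WS_EX_LAYERED | WS_EX_TRANSPARENT;
                    return cp;
                }
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new OverlayForm());
            }
        }

    The magnified region itself could then be painted in OnPaint from a periodic screen grab of the chosen rectangle (for example with Graphics.CopyFromScreen), or by leaning on the Windows Magnification API.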

    Read the article

  • Should I put my output files in source control?

    - by sebastiaan
    I've been asked to put every single file in my project under source control, including the database file (not the schema, the complete file). This seems wrong to me, but I can't explain why. Every resource I find about source control tells me not to put generated output files in a source control system, and I understand: they're not "source" files. However, I've been presented with the following reasoning:
      • Who cares? We have plenty of bandwidth.
      • I don't mind having to resolve a conflict each time I get the latest revision; it's just one click.
      • It's so much more convenient than having to think about good ignore files.
      • Also, if I have to add an external DLL file in the bin folder now, I can't forget to put it in source control, as the bin folder is not being ignored now.
    The simple solution for the last bullet-point is to add the file in a libraries folder and reference it from the project. Please explain if and why putting generated output files under source control is wrong.
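    For what it's worth, a sketch of the "good ignore files" alternative, assuming Git and a typical .NET layout (the question doesn't name a VCS, so adjust the patterns and paths to your tooling):

        # generated build output stays out of history
        bin/
        obj/

        # third-party DLLs are committed from a dedicated, tracked folder
        # (e.g. libraries/) and referenced from the project, as suggested above,
        # so nothing in bin/ ever needs to be checked in by hand

    This keeps the convenience argument intact - adding a new external dependency is still a single commit - without pulling regenerated binaries into every diff.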

    Read the article

  • How do you make Python wait so that you can read the output?

    - by anonnoir
    I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python. My problem: when I finish editing a certain Python source file and try to launch it, the window disappears before I can read the output. Is there any way for me to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, because its output window stays open for you to read. (My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
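    Two common answers, sketched here with an illustrative script name (myscript.py): run the script from a Command Prompt you opened yourself, so the window outlives the program, or launch Python with the -i flag, which drops into the interactive prompt after the script finishes instead of closing the window.

        rem run from an already-open console; the window stays put afterwards
        python myscript.py

        rem or keep the interpreter alive after the script ends
        python -i myscript.py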

    Read the article

  • Pixel Shader Giving Black output

    - by Yashwinder
    I am coding in C# using Windows Forms and the SlimDX API to show the effect of a pixel shader. When I am setting the pixel shader, I am getting a black output screen but if I am not using the pixel shader then I am getting my image rendered on the screen. I have the following C# code using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using System.Runtime.InteropServices; using SlimDX.Direct3D9; using SlimDX; using SlimDX.Windows; using System.Drawing; using System.Threading; namespace WindowsFormsApplication1 { // Vertex structure. [StructLayout(LayoutKind.Sequential)] struct Vertex { public Vector3 Position; public float Tu; public float Tv; public static int SizeBytes { get { return Marshal.SizeOf(typeof(Vertex)); } } public static VertexFormat Format { get { return VertexFormat.Position | VertexFormat.Texture1; } } } static class Program { public static Device D3DDevice; // Direct3D device. public static VertexBuffer Vertices; // Vertex buffer object used to hold vertices. public static Texture Image; // Texture object to hold the image loaded from a file. public static int time; // Used for rotation caculations. public static float angle; // Angle of rottaion. public static Form1 Window =new Form1(); public static string filepath; static VertexShader vertexShader = null; static ConstantTable constantTable = null; static ImageInformation info; [STAThread] static void Main() { filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg"; info = new ImageInformation(); info = ImageInformation.FromFile(filepath); PresentParameters presentParams = new PresentParameters(); // Below are the required bare mininum, needed to initialize the D3D device. presentParams.BackBufferHeight = info.Height; // BackBufferHeight, set to the Window's height. presentParams.BackBufferWidth = info.Width+200; // BackBufferWidth, set to the Window's width. presentParams.Windowed =true; presentParams.DeviceWindowHandle = Window.panel2 .Handle; // DeviceWindowHandle, set to the Window's handle. // Create the device. D3DDevice = new Device(new Direct3D (), 0, DeviceType.Hardware, Window.Handle, CreateFlags.HardwareVertexProcessing, presentParams); // Create the vertex buffer and fill with the triangle vertices. (Non-indexed) // Remember 3 vetices for a triangle, 2 tris per quad = 6. Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly, VertexFormat.None, Pool.Managed); DataStream stream = Vertices.Lock(0, 0, LockFlags.None); stream.WriteRange(BuildVertexData()); Vertices.Unlock(); // Create the texture. 
Image = Texture.FromFile(D3DDevice,filepath ); // Turn off culling, so we see the front and back of the triangle D3DDevice.SetRenderState(RenderState.CullMode, Cull.None); // Turn off lighting D3DDevice.SetRenderState(RenderState.Lighting, false); ShaderBytecode sbcv = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs", "vs_main", "vs_1_1", ShaderFlags.None); constantTable = sbcv.ConstantTable; vertexShader = new VertexShader(D3DDevice, sbcv); ShaderBytecode sbc = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt", "ps_main", "ps_3_0", ShaderFlags.None); PixelShader ps = new PixelShader(D3DDevice, sbc); VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[] { new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.PositionTransformed, 0), new VertexElement(0, 12, DeclarationType.Float2 , DeclarationMethod.Default, DeclarationUsage.TextureCoordinate , 0), VertexElement.VertexDeclarationEnd }); Application.EnableVisualStyles(); MessagePump.Run(Window, () => { // Clear the backbuffer to a black color. D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0); // Begin the scene. D3DDevice.BeginScene(); // Setup the world, view and projection matrices. //D3DDevice.VertexShader = vertexShader; //D3DDevice.PixelShader = ps; // Render the vertex buffer. D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes); D3DDevice.VertexFormat = Vertex.Format; // Setup our texture. Using Textures introduces the texture stage states, // which govern how Textures get blended together (in the case of multiple // Textures) and lighting information. D3DDevice.SetTexture(0, Image); // Now drawing 2 triangles, for a quad. D3DDevice.DrawPrimitives(PrimitiveType.TriangleList , 0, 2); // End the scene. D3DDevice.EndScene(); // Present the backbuffer contents to the screen. 
D3DDevice.Present(); }); if (Image != null) Image.Dispose(); if (Vertices != null) Vertices.Dispose(); if (D3DDevice != null) D3DDevice.Dispose(); } private static Vertex[] BuildVertexData() { Vertex[] vertexData = new Vertex[6]; vertexData[0].Position = new Vector3(-1.0f, 1.0f, 0.0f); vertexData[0].Tu = 0.0f; vertexData[0].Tv = 0.0f; vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].Tu = 0.0f; vertexData[1].Tv = 1.0f; vertexData[2].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[2].Tu = 1.0f; vertexData[2].Tv = 0.0f; vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].Tu = 0.0f; vertexData[3].Tv = 1.0f; vertexData[4].Position = new Vector3(1.0f, -1.0f, 0.0f); vertexData[4].Tu = 1.0f; vertexData[4].Tv = 1.0f; vertexData[5].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[5].Tu = 1.0f; vertexData[5].Tv = 0.0f; return vertexData; } } } And my pixel shader and vertex shader code are as following // Pixel shader input structure struct PS_INPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Pixel shader output structure struct PS_OUTPUT { float4 Color : COLOR0; }; // Global variables sampler2D Tex0; // Name: Simple Pixel Shader // Type: Pixel shader // Desc: Fetch texture and blend with constant color // PS_OUTPUT ps_main( in PS_INPUT In ) { PS_OUTPUT Out; //create an output pixel Out.Color = tex2D(Tex0, In.Texture); //do a texture lookup Out.Color *= float4(0.9f, 0.8f, 0.0f, 1); //do a simple effect return Out; //return output pixel } // Vertex shader input structure struct VS_INPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Vertex shader output structure struct VS_OUTPUT { float4 Position : POSITION; float2 Texture : TEXCOORD0; }; // Global variables float4x4 WorldViewProj; // Name: Simple Vertex Shader // Type: Vertex shader // Desc: Vertex transformation and texture coord pass-through // VS_OUTPUT vs_main( in VS_INPUT In ) { VS_OUTPUT Out; //create an output vertex Out.Position = mul(In.Position, WorldViewProj); //apply vertex transformation Out.Texture = In.Texture; //copy original texcoords return Out; //return output vertex }

    Read the article

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high traffic and high connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

        % snort -V

           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
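    A sketch of the usual fix for that error, with the module directory given only as an example (the exact path depends on where net-libs/daq installed its modules): Snort 2.9 loads its capture back-ends as DAQ modules and needs the pcap DAQ even for offline -r runs, so point it at the module directory explicitly.

        # /usr/lib/daq is illustrative - use the directory your daq package installed
        snort --daq-dir /usr/lib/daq --daq pcap \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/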

    Read the article

  • [iptables] Why does 'iptables -A OUTPUT -j REJECT' at the end of the OUTPUT chain override the previous rules?

    - by Serge
    These are my iptables rules:

        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name DEFAULT --rsource
        iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 180 --hitcount 4 --name DEFAULT --rsource -j DROP
        iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        iptables -A OUTPUT -j REJECT
        iptables -A INPUT -j REJECT
        iptables -A FORWARD -j REJECT

    I'm using a remote SSH connection to set them up, but after I set

        iptables -A OUTPUT -j REJECT

    my connection gets lost. I have read all the documentation for iptables and I can't figure it out. The global REJECTs for INPUT seem to work well, because I can still access the web page, but I get a timeout for SSH. Any idea? Thanks
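    A hedged reading of why this happens, based only on the rules shown: the OUTPUT chain accepts traffic by destination port (22, 80, 53), but the reply packets your SSH daemon sends back to your client leave with source port 22, so once the final REJECT is appended they no longer match anything and are rejected. The usual fix is to let established traffic out before the REJECT, mirroring the first INPUT rule:

        # insert at the top of OUTPUT so replies to accepted connections can leave
        iptables -I OUTPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT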

    Read the article

  • Digital Audio Output light on on MacBook Pro

    - by Emerson Hsieh
    I don't know if this problem happened when I installed Ubuntu before. Recently I noticed that when I boot Ubuntu, the Digital Audio Output light automatically switches on. A lit Digital Audio Output light means "something is wrong with the headphone port", yet my headphones work fine in Ubuntu. I've heard that the headphone port contains some magical "switch" that will fix the light problem, so I poked the port with chopsticks, pens, paper clips, even my finger, and the Digital Audio Output light still stays on. I don't have this problem in OS X. How do I switch the light off?

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
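    To make that concrete, a small sketch of the pattern (table, column and variable names here are purely illustrative: a Customers target, an @src source carrying the extra src column, and an audit table receiving the OUTPUT rows):

        -- upsert with MERGE, logging the action, the before/after values and a
        -- source-only column into an audit table via OUTPUT ... INTO
        merge dbo.Customers as t
        using @src as s
           on t.CustomerID = s.CustomerID
        when matched then
            update set t.Name = s.Name
        when not matched then
            insert (CustomerID, Name) values (s.CustomerID, s.Name)
        output $action, inserted.CustomerID, deleted.Name, inserted.Name, s.src
          into dbo.CustomerAudit (MergeAction, CustomerID, OldName, NewName, Source);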

    Read the article

  • play sound from different applications on different output devices (speakers, headphones)

    - by Mike
    I want music (played e.g. via Audacious) to go to the speaker system, and all other sounds produced by other applications (including Ubuntu sound effects) to go to the headphones. My computer has sound connectors at its back, and also connectors on its front panel. When I connect both headphones and speakers, only the headphones work (I take it the front connectors take precedence?). Should I purchase another sound card (in addition to the motherboard-integrated sound I have)? When I go to the Audacious output settings, I see only the output plugin selection list, with PulseAudio selected and options like ALSA, OSS4 etc. But there's no facility to select a particular output device (and I guess it wouldn't magically appear even if I had the second sound card). Is it at all possible to bind a specific application to a particular output device?
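    Since Audacious is already playing through PulseAudio, one approach (sketched below; the sink name and stream index are placeholders you'd read off your own system) is to leave the per-application routing to PulseAudio and move individual streams between sinks rather than buying a second card:

        # list the available output devices (sinks) and the streams currently playing
        pactl list short sinks
        pactl list short sink-inputs

        # move one stream (its sink-input index from the list above) to another sink
        pactl move-sink-input 7 alsa_output.pci-0000_00_1b.0.analog-stereo

        # pavucontrol offers the same per-stream device selection in a GUI
        pavucontrol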

    Read the article

  • Video Capture API remove dialog when capturing from two cameras

    - by sqlmaster
    Hi, I am trying to capture images from two cameras using avicap32.dll messages. I am able to capture images with the first camera, but when the second camera starts to capture it shows me the "Select a video device" dialog. My application doesn't have any interaction with the user, so I need to select the second camera programmatically. Can anyone help me?

    Read the article

  • LIBPCAP and WIRESHARK Capture on PPP

    - by user655629
    Hi, I have written a small bridge program using the LIBPCAP API. I have installed WinPcap 3.1 Beta in order to support capturing from a PPP interface. What I do is: I capture from the PPP interface through my LIBPCAP program and send the traffic to another Ethernet interface in my computer. Then I connect this Ethernet interface to an Ethernet interface on another computer, where I monitor the traffic through Wireshark. So in short, my PPP-Ethernet bridge is on computer 1, and computer 2, directly connected to computer 1 over Ethernet, monitors the incoming traffic from the bridge through Wireshark. The problem I face is that when I capture PPP traffic through Wireshark on computer 1, I see reasonable delay between the packets. But when I use my LIBPCAP program to capture and relay traffic and check the traffic on computer 2 using Wireshark, it shows jumps of 0.5 seconds of delay after some packets. This is quite unexplainable to me. I don't understand how Wireshark's direct PPP capture on computer 1 shows no such delay while the LIBPCAP program does. I have checked my bridge for Ethernet-to-Ethernet relaying and there is no delay like the one I am experiencing in the PPP-Ethernet case. A higher delay between packets is acceptable, but such a BIG delay after a couple of packets is unacceptable. Please help if you can. Best Regards, FIKA
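    One hedged thing to rule out, offered as a sketch rather than a confirmed diagnosis: WinPcap hands packets to libpcap callers in batches, governed by the read timeout passed to pcap_open_live() and by the kernel buffer's min-to-copy threshold, and that batching can look like periodic half-second stalls on the receiving side even though the traffic on the wire is smooth. Opening the capture with a small timeout (and, on WinPcap, a zero min-to-copy) would look roughly like this:

        #include <pcap.h>
        #include <stdio.h>

        int main(void)
        {
            char errbuf[PCAP_ERRBUF_SIZE];

            /* the device name is a placeholder - use the \Device\NPF_... name that
               pcap_findalldevs() reports for the PPP adapter on your machine */
            pcap_t *ph = pcap_open_live("\\Device\\NPF_GenericDialupAdapter",
                                        65535, /* full snap length                  */
                                        1,     /* promiscuous                       */
                                        10,    /* read timeout in ms - keep it small
                                                  so buffered packets are delivered
                                                  promptly                          */
                                        errbuf);
            if (ph == NULL) {
                fprintf(stderr, "pcap_open_live: %s\n", errbuf);
                return 1;
            }

        #ifdef WIN32
            /* WinPcap extension: don't wait for the kernel buffer to accumulate
               data before returning packets to the application */
            pcap_setmintocopy(ph, 0);
        #endif

            /* ... existing bridge loop (pcap_next_ex + pcap_sendpacket) ... */

            pcap_close(ph);
            return 0;
        }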

    Read the article

  • Template Matching 2 ROI in a single video capture in real time

    - by YS
    Hi, I am working on a project to perform template matching on a video captured via my webcam. I am able to create two templates from the webcam capture, but I am unable to perform template matching for both: the program runs with either template matching on its own, but not with both. My program sequence is:
      • Capture from webcam
      • Get template 1
      • Get template 2
      • Perform template 1 matching with the webcam capture
      • Then perform template 2 matching with the webcam capture
      • If fail, stop
    Can any expert advise me on this?

    Read the article

  • Getting input and output from a jar file run from java class?

    - by Jack L.
    Hi, I have a jar file that runs this code:

        public class InputOutput {

            /**
             * @param args
             * @throws IOException
             */
            public static void main(String[] args) throws IOException {
                boolean cont = true;
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                while (cont) {
                    System.out.print("Input something: ");
                    String temp = in.readLine();
                    if (temp.equals("end")) {
                        cont = false;
                        System.out.println("Terminated.");
                    } else
                        System.out.println(temp);
                }
            }
        }

    I want to program another Java class that executes this jar file and can send input to it and read its output. Is it possible? The current code I have is this, but it is not working:

        public class JarTest {

            /**
             * Test input and output of jar files
             * @author Jack
             */
            public static void main(String[] args) {
                try {
                    Process io = Runtime.getRuntime().exec("java -jar InputOutput.jar");
                    BufferedReader in = new BufferedReader(new InputStreamReader(io.getInputStream()));
                    OutputStreamWriter out = new OutputStreamWriter(io.getOutputStream());
                    boolean cont = true;
                    BufferedReader consolein = new BufferedReader(new InputStreamReader(System.in));
                    while (cont) {
                        String temp = consolein.readLine();
                        out.write(temp);
                        System.out.println(in.readLine());
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Thanks for your help
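    It is possible; as a sketch of what usually needs to change (a starting point rather than a guaranteed fix): the child reads with readLine(), so each line sent to it needs an explicit newline, and the OutputStreamWriter buffers until it is flushed, so without a flush() the jar never sees any input and both sides block. Something along these lines, keeping the rest of the structure:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;

        public class JarTest {
            public static void main(String[] args) throws IOException {
                // redirect stderr into stdout so nothing blocks unseen on a full buffer
                ProcessBuilder pb = new ProcessBuilder("java", "-jar", "InputOutput.jar");
                pb.redirectErrorStream(true);
                Process io = pb.start();

                BufferedReader fromChild =
                        new BufferedReader(new InputStreamReader(io.getInputStream()));
                BufferedWriter toChild =
                        new BufferedWriter(new OutputStreamWriter(io.getOutputStream()));
                BufferedReader console = new BufferedReader(new InputStreamReader(System.in));

                String line;
                while ((line = console.readLine()) != null) {
                    toChild.write(line);
                    toChild.newLine();   // the child uses readLine(), so terminate the line
                    toChild.flush();     // and push it through the pipe immediately
                    System.out.println(fromChild.readLine());
                    if (line.equals("end")) {
                        break;           // the child prints "Terminated." and exits
                    }
                }
            }
        }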

    Read the article
