Search Results

Search found 5501 results on 221 pages for 'receive'.


  • Postfix does not get external emails

    - by Marks
    Hi there. I have a small vserver (Ubuntu) on which I want to run Mailman, and therefore have to configure Postfix. I have already configured Postfix to send mail via a Gmail relay (server to ext.). I also receive local emails (server to server), but I don't get emails from external senders (ext. to server). I have a domain routed to the vserver, but when I send an email from an external account (e.g. from Gmail) to [email protected], it doesn't arrive in the inbox. There is also no error message returned to my Gmail account. What could the problem be? Do I have to configure receiving email from external hosts somewhere? Thanks for any help.
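
    A few checks that often turn up the culprit for this kind of setup (a sketch, assuming a stock Postfix on the vserver; mydomain.org stands in for the real domain):

        # Does the domain's MX record actually point at the vserver?
        dig +short MX mydomain.org

        # Is Postfix listening on the external interface, not just loopback?
        postconf inet_interfaces        # should be "all", not "loopback-only"

        # Does Postfix treat the domain as a final destination?
        postconf mydestination          # should include mydomain.org

        # Can port 25 be reached from outside at all?
        telnet mydomain.org 25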


  • Slow remote desktop connection to VPS

    - by Jonathan
    When I use Windows 7's Remote Desktop Connection to our company's VPS (Windows Server 2008 32-bit) I receive a very slow connection; most of the time it actually grinds to a complete halt. This is in contrast to my teammates, who have no problem remoting to the VPS. I'm using a brand new Dell Studio 1558 laptop with an Intel Core i7 and 4GB RAM, with a clean installation of Windows 7 64-bit Ultimate. Any suggestions for how to diagnose, solve, or work around the problem would be appreciated. UPDATES: I checked, and the problem exists on all the computers connected to my home LAN. Once I take my laptop to the nearest coffee shop it works fine. What could be the problem with the LAN?


  • What protocols are ISPs using (or going to use) for IPv6 deployment?

    - by rbeede
    Currently ISPs hand out dynamic (single) IPv4 addresses via DHCP. What protocol are ISPs using, or going to use, for IPv6 when they can hand a customer an entire /64 (or /48 if they are nice) block? DHCPv6? RA? For ISPs that support true end-to-end IPv6, will they provide gateway devices (similar to cable modems or true DSL bridges, for example) that receive border information for that specific customer? I'm just trying to get an idea of how a common residential customer will have to configure things on an IPv6 Internet (whenever that comes). Will it be something customers are expected to statically configure on their home wireless router? Today with IPv4 I do it like this: the modem (bridge) passes the public IPv4 address, obtained via DHCPv4 from the ISP, to a second device (wireless router), which in turn provides its own DHCPv4 service on the internal LAN.
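
    Many deployments have settled on router advertisements (SLAAC) for the WAN link plus DHCPv6 prefix delegation (DHCPv6-PD) for the customer block, so nothing needs static configuration. A sketch of what a Linux-based home router might run (interface names are placeholders; assumes ISC dhclient and radvd are installed):

        # WAN side: request an address plus a delegated prefix from the ISP
        dhclient -6 -P eth0        # -P enables DHCPv6 prefix delegation

        # LAN side: advertise a /64 out of the delegated prefix so clients
        # autoconfigure themselves via router advertisements
        radvd -C /etc/radvd.conf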


  • How should a small team using multiple OSes deploy over GitHub?

    - by Toby
    We have a small development team that has recently moved to GitHub to host our projects. The team consists of three developers, two on Windows and one on Mac. I am currently researching the best way to deploy applications to our Linux servers (dev and production). Capistrano running locally would be ideal, but from what I read this won't work for Windows machines. It looks like the best way is to use a post-receive hook on GitHub; I can see how this would work for auto-deploying to dev, but I don't see how we could then deploy to live. I have found paid products like http://www.deployhq.com/, but this feels like something that a quick bit of code should be able to do for free; I just can't seem to get myself pointed in the right direction! I was wondering what would be considered best practice for small-team deployment involving multiple local OSes and GitHub.
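
    One free pattern (a sketch, not a turnkey product): keep a bare mirror of the GitHub repo on the Linux server, have a GitHub post-receive webhook (or a cron pull) update it, and let a local post-receive hook inspect the pushed ref, so dev deploys automatically while live only deploys from a dedicated branch. Paths and branch names below are placeholders:

        #!/bin/sh
        # hooks/post-receive in the bare repo on the server
        while read oldrev newrev ref; do
            case "$ref" in
                refs/heads/master)      # every push to master goes to dev
                    GIT_WORK_TREE=/var/www/dev git checkout -f master ;;
                refs/heads/production)  # live only updates from "production"
                    GIT_WORK_TREE=/var/www/live git checkout -f production ;;
            esac
        done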


  • Tomcat hangs in shutdown

    - by Morven
    I have a Tomcat server that won't shut down. It is listening on the correct port (8005, for this one) to receive the SHUTDOWN command. I can issue that command, either with the bin/shutdown.sh script or by telnetting to that port and typing SHUTDOWN. At this point, the shutdown port closes; I can no longer connect to it. The AJP13 port stays open, though; nothing is logged in catalina.out, and things don't shut down. Anyone seen this before? This is on Solaris 10 on Sparc, if it matters (it probably doesn't) and Tomcat version 6.0.20.
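
    A thread dump usually shows what is refusing to die; a common cause is a non-daemon thread (often started by a webapp) that the JVM waits on forever. A diagnostic sketch (the PID placeholder is whatever ps reports for the Tomcat JVM):

        # Send the JVM a QUIT signal; the dump lands in catalina.out
        kill -QUIT <tomcat-pid>

        # Or, with a JDK installed (works on Solaris too):
        jstack <tomcat-pid> > /tmp/tomcat-threads.txt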


  • Wifi Signal Strengths

    - by Phorce
    I hope that I have asked this question in the right topic. Basically I use BT (in the UK), and I have problems receiving a strong signal from my router in certain rooms of the house, which therefore makes the internet really slow; in some cases it won't work at all. I have experience with Virgin Media (UK), where I just changed the channel number and this worked fine, but I do not know if I would get the same outcome on BT. Could anyone suggest any other things that I could try alongside the channel change in order to (hopefully) get a stronger signal, without having to install Ethernet cable?
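
    Changing the channel works the same way on BT as on Virgin; the trick is picking one your neighbours are not on. A quick survey from a Linux laptop shows which channels are crowded (a sketch; wlan0 is a placeholder for the wireless interface):

        sudo iwlist wlan0 scan | grep -iE 'essid|channel'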


  • Accept and Decline buttons missing in Thunderbird when receiving an invitation

    - by Laurent Lugon
    I'm using Thunderbird 12.0.1 with the following add-ons for the calendar: Lightning 1.4 and Provider for Google Calendar 0.9. I'm working with Ubuntu 12.04. My problem is that when I receive an event invitation from Outlook or Gmail, the mail is correctly identified by Thunderbird (I get the message "this message contains an invitation to an event"). But in the pretty little box in the mail (the one which contains all the details), there is no "Accept" or "Decline" button. I found on the Lightning forum that this bug was fixed in Lightning v1.0b2, so my question is: why do I still have this bug? Best regards


  • Setting up a local mail server

    - by KriiV
    This is what I want, and I am having trouble finding a solution. I have a number of websites (around 5), each with an email account. I have a server at my office and I would like to centralize things. I have a workstation too. What I want is for the server to receive all emails for all those websites (from the web servers), and then to connect my workstation to my local server and grab the emails from there. As the server downloads the emails, I would like them to be stored. Also, if I connect another workstation, I want the two workstations to sync: if an email is read on one, it shows up as read on the other. Ideas? I am able to virtualize a Linux environment if that helps.
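
    The read/unread syncing requirement is exactly what IMAP provides, so one approach (a sketch, assuming a Linux box running Dovecot as the local IMAP store and fetchmail pulling the website mailboxes; hostnames and credentials are placeholders) is:

        # ~/.fetchmailrc on the office server: pull each website mailbox
        # and hand it to the local delivery agent / IMAP store
        poll mail.website1.com protocol pop3
            user "info@website1.com" password "secret" is "localuser" here

    Both workstations then connect to the office server over IMAP, and message flags (read, replied, etc.) stay in sync automatically.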


  • How to remove a permanent network drive mapping on OS X Lion?

    - by Flijfi
    Some time ago I mapped a network drive on my Snow Leopard Mac, which was later upgraded to Lion. The network drive is not active any more, and I receive popups all the time with the error: "There was a problem connecting to the server XXXX." I have no idea how I configured it at the time. I may have included a mount command in a config file, but I no longer know where I did it. I reviewed the Preferences/Accounts/Login Items and there is no permanent mapping there. OS X is up to date as of Nov 27, 2011, and the issue is not related to the upgrade to Lion itself but to a misconfiguration. Any help will be greatly appreciated. (If you have the opposite problem, here is the link to solve it: Permanently map a network drive on Mac OS X Leopard)
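
    A few places such a mount can hide, worth inspecting from Terminal (a sketch; "XXXX" stands for the server name from the error dialog):

        cat /etc/fstab /etc/auto_master 2>/dev/null       # static and automounter maps
        grep -rl "XXXX" ~/Library/LaunchAgents /Library/LaunchAgents 2>/dev/null
        defaults read com.apple.sidebarlists 2>/dev/null | grep -i "XXXX"   # Finder sidebar entries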


  • How can I forward certain emails based on header information with Postfix?

    - by Jason Novinger
    We receive service requests via a particular email. The request is then forwarded to other addresses, using an entry in virtual_alias_maps. Upon seeing the word "EMERGENCY" in the subject line of a request to this email, I would also like to forward this to another address (an alias of our administrator's SMS email addresses). I think I can accomplish this with header checks and the REDIRECT command. However, REDIRECT only sends it to the redirected address, not the forwarded addresses. In the case of "EMERGENCY" I would like it to go to the redirect address and the original forwarded addresses. I am fairly new to Postfix and I feel like I am missing something here. Any suggestions?
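
    One detail to plan around: REDIRECT replaces the recipient list outright, so pointing it at a single alias that fans back out to everyone keeps the original forwards working. A sketch (addresses are placeholders):

        # /etc/postfix/header_checks
        /^Subject:.*EMERGENCY/   REDIRECT emergency@example.com

        # main.cf
        header_checks = regexp:/etc/postfix/header_checks

        # virtual_alias_maps: emergency@example.com expands to the SMS alias
        # plus the addresses the request is normally forwarded to
        emergency@example.com   admin-sms@example.com, ops-team@example.com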


  • How to stop deferred emails

    - by Will K
    I have a Postfix mail gateway, and every other host is set to use this gateway as its relay. We have some automated outgoing emails sent from some hosts. I believe the gateway tries to send a deferred status notification back to the system that started this, but that system is a null client, which sends but does not receive any email. Is there any way to stop sending the deferred status? e.g.

        postfix/smtp[35725]: 2F6A155C256: to=, relay=none, delay=260862, delays=260862/0.01/0/0, dsn=4.4.1, status=deferred (connect to orange.mydom.com[192.168.1.5]:25: Connection refused)

    Thanks
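
    The deferred messages sit in the gateway's queue being retried; they stop once the queue entries are gone or the gateway stops routing to the dead host. A sketch of the cleanup side:

        # Show what is stuck and the reason
        postqueue -p

        # Remove everything currently in the deferred queue
        postsuper -d ALL deferred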


  • TCP handshake ok, then the client isn't receiving any packets from the server

    - by infgeoax
    Topology: Client ----- Intermediate Device ----- Server
    Client: Windows 7
    Intermediate device: unknown
    Server: CentOS 5.8

    The problem occurs when the client and server try to establish an SSL connection. It happens on one specific port, 2000; I haven't been able to replicate the problem with other port numbers. I captured packets on both client and server. After the TCP handshake, from the client's perspective, it was not receiving ACKs for its previously sent packets, so it kept re-sending them. On the server side, however, the server did receive those packets and sent ACK packets. The weird thing is that after the server sent those ACKs, it received a [RST, ACK] packet from the intermediate device for every packet it sent. What could be the cause?


  • Good free way to clone a hard drive or a partition and send the image over the network (through FTP, Windows file sharing, "anything")?

    - by Deleted
    What I ideally would like is a free software solution which can: Boot from a CD/DVD/USB-stick and Clone a complete hard drive or a partition and Send the resulting image file over the network through Windows file sharing (SMB, I could use SAMBA on my server to receive the image) or through FTP or through SFTP or through SCP It should work with Linux and Windows file-systems (where specific file system support is necessary) Is there anything good out there like this? I know Wikipedia lists a lot of cloning software. But I'm looking for a personal recommendation which you have used yourself, as I find it more credible (I'll see from the upvotes if the answer is liked by a lot of visitors).
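
    Not a polished recommendation, but worth knowing as a fallback: any Linux live CD can already do this with stock tools, streaming the image over SSH (device, host, and path below are placeholders):

        # Run from the live environment on the machine being imaged
        dd if=/dev/sda1 bs=64K conv=noerror,sync | gzip -c | \
            ssh user@backupserver 'cat > /srv/images/sda1.img.gz'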


  • Chrome "Sending request..." indefinitely

    - by SemMike
    After some time browsing in Chrome (30 minutes on average, varies widely), it suddenly can't receive any data from the Internet whatsoever. Trying to go to any address displays the "Sending request..." message indefinitely. This happens out of the blue, not after doing anything special. Other browsers continue to work perfectly. Quitting and relaunching Chrome always solves the problem, but nothing else works. This problem is really annoying; it has been that way for many versions, even the latest one as of now. Has anyone else experienced this problem? How can I fix it?


  • Is it possible to log a user in on a remote computer using SSH?

    - by El_Hoy
    I want to connect to a server via SSH and log a user in (remotely) to X11 (gdm). A little context: I need to install a Wine application on 30 computers, but Wine requires X11, and there is nobody logged in on them, so Wine does not work properly. I want to remotely log a user in on display :0.0 so that this user receives the window (it only starts and closes). There is no one logged on there. I need to start a graphical app there (the Wine installer) but I cannot, because it needs a display with X11 (to open a wineconsole). Summary: is it possible to log a user in remotely on X11?
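
    If the only requirement is that Wine sees an X display, logging a real user in may be unnecessary: a virtual framebuffer gives the installer a throwaway display. A sketch (assumes the xvfb-run helper is installed on the targets; user, host, and path are placeholders):

        # Over plain ssh, no X session or logged-in user needed
        ssh user@host 'xvfb-run wine /path/to/installer.exe'

    If the window really must appear on the physical :0.0, root can instead borrow the running X server's auth cookie (the gdm cookie path varies by version), but the Xvfb route avoids that entirely.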


  • Virtual IP, and Reverse Proxying Ports (Making up terms)

    - by macintosh264
    So here is the exact situation that I have: I have 2 game servers in my house, one on port 25565 and the second on 25567, and only one IP in the house. I need to get a "virtual IP" for the second server: some way of giving the (Linux) computer that runs these game servers a second IP. I need the virtual IP to receive connections on 25565 and forward the data to 25567, although if Linux recognizes the second IP in networking, I assume I can just bind to the second IP on port 25565.
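
    A second public IP can only come from the ISP, but on the LAN side the Linux box can take an extra address and redirect the port itself. A sketch with placeholder RFC 1918 addresses:

        # Give eth0 a second address
        ip addr add 192.168.1.51/24 dev eth0

        # Anything arriving on the new address at 25565 goes to the
        # second server listening locally on 25567
        iptables -t nat -A PREROUTING -d 192.168.1.51 -p tcp --dport 25565 \
            -j REDIRECT --to-ports 25567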


  • Monit mail alert failed

    - by user119720
    I have configured Monit to monitor some of the applications on our Linux box (httpd, mysqld, etc.). We can receive alerts when using the Gmail SMTP server to send email, but it fails when we use our Exchange SMTP server. Here is the Gmail configuration in monitrc:

        set mailserver smtp.gmail.com port 587   # primary mailserver
            username "[email protected]" password "mypasswd"
            using tlsv1 with timeout 30 seconds

    and it fails when I change it to this configuration:

        set mailserver outlook.automanage.net port 587   # primary mailserver
            username "[email protected]" password "mypasswd"
            using tlsv1 with timeout 30 seconds

    I can telnet to my Exchange server, so the server is alive and can be connected to. Did I miss anything here? Or do I need to configure something on our Exchange server?
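
    telnet only proves the port is open; Monit also needs the server to offer STARTTLS and an AUTH mechanism on that port, which Exchange receive connectors do not always enable by default. A quick way to see what the connector actually offers (a sketch):

        openssl s_client -starttls smtp -connect outlook.automanage.net:587
        # after the TLS handshake completes, type:  EHLO test
        # and look for STARTTLS and AUTH LOGIN/PLAIN in the reply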


  • All created VPNs result in Error 720

    - by campo
    I have a few different VPNs I need to connect to. I'm using Windows 8 and standard PPTP VPN. Every VPN I have created, though, results in the same error:

        Error 720: A connection to the remote computer could not be established
        You might need to change the network settings for this connection

    The VPNs work in Windows 7 and XP. I'm not sure if this has something to do with my new laptop or if it's something with Windows 8 itself. Any help would be greatly appreciated, and if there is any extra info I can provide please let me know. Edit: I've tried with the virus scanner turned off and Windows Firewall turned off as well, and I still receive the same error message.


  • PHP file write permission

    - by user125862
    I have two files, one PHP file and one log file. The permissions have been changed to the following:

        -rwxr-x--- 1 www-data www-data 294 2012-06-25 10:17 function.php
        -rwxr-x--- 1 www-data www-data 0 2012-06-25 09:53 log.txt

    The permissions are set to 750. When I call function.php, I receive the following error message:

        fopen(log.txt): failed to open stream: Permission denied

    This is the line:

        $fp = fopen("log.txt","a");

    I am very confused. I have PHP and the file to be written both under www-data now, so why would there be any permission problem? Please help.
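
    One thing worth checking (an assumption, since the question doesn't show where the script lives): a relative path like "log.txt" is resolved against the PHP process's current working directory, not the script's directory, so fopen may be trying to create a log.txt somewhere www-data cannot write. An absolute path such as fopen(__DIR__ . "/log.txt", "a") rules that out. A quick shell test of what www-data can actually do (paths are placeholders):

        # Try the append as the web user; if this works, the original failure
        # is a path problem rather than a permission problem
        sudo -u www-data sh -c 'echo probe >> /var/www/site/log.txt && echo OK'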


  • How to run a script from root as another user (with the user's PATH)

    - by Sandra
    I would like to have these commands run as the ss user from root:

        mkdir bin
        cp -r /opt/gitolite .
        gitolite/install -ln
        gitolite setup -pk ss.pub
        mkdir -p .gitolite/hooks/common
        ln -s /opt/pre-receive .gitolite/hooks/common/

    so everything is executed in /home/ss. The 4th line requires $HOME/bin, as you can see from the 3rd line. The only way I can get it to work is by adding su -c "command" ss to each line, which is not a nice hack. This is an extension to my previous question, where I wasn't precise enough. Question: How do I run all these commands as a script in a practical way?
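
    One way to run the whole block as ss without repeating su on every line (a sketch): hand the sequence to a single login shell, so the working directory is /home/ss, and make the PATH requirement from the 4th line explicit:

        su - ss -c '
            mkdir -p bin
            cp -r /opt/gitolite .
            gitolite/install -ln
            PATH="$HOME/bin:$PATH" gitolite setup -pk ss.pub
            mkdir -p .gitolite/hooks/common
            ln -s /opt/pre-receive .gitolite/hooks/common/
        '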


  • Access violation in DirectX OMSetRenderTargets

    - by IDWMaster
    I receive the following error (Unhandled exception at 0x527DAE81 (d3d11_1sdklayers.dll) in Lesson2.Triangles.exe: 0xC0000005: Access violation reading location 0x00000000) when running the Triangle sample application for DirectX 11 in D3D_FEATURE_LEVEL_9_1. The error occurs at the OMSetRenderTargets call, as shown below, and does not happen if I remove that call from the program (but then the screen is blue and does not render the triangle).

        //// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
        //// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
        //// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
        //// PARTICULAR PURPOSE.
        ////
        //// Copyright (c) Microsoft Corporation. All rights reserved

        #include <wrl.h>
        #include <d3d11_1.h>
        #include "DirectXSample.h"
        #include "BasicMath.h"
        #include "BasicReaderWriter.h"

        using namespace Microsoft::WRL;
        using namespace Windows::UI::Core;
        using namespace Windows::Foundation;
        using namespace Windows::ApplicationModel::Core;
        using namespace Windows::ApplicationModel::Infrastructure;

        // This class defines the application as a whole.
        ref class Direct3DTutorialViewProvider : public IViewProvider
        {
        private:
            CoreWindow^ m_window;
            ComPtr<IDXGISwapChain1> m_swapChain;
            ComPtr<ID3D11Device1> m_d3dDevice;
            ComPtr<ID3D11DeviceContext1> m_d3dDeviceContext;
            ComPtr<ID3D11RenderTargetView> m_renderTargetView;

        public:
            // This method is called on application launch.
            void Initialize(
                _In_ CoreWindow^ window,
                _In_ CoreApplicationView^ applicationView
                )
            {
                m_window = window;
            }

            // This method is called after Initialize.
            void Load(_In_ Platform::String^ entryPoint)
            {
            }

            // This method is called after Load.
            void Run()
            {
                // First, create the Direct3D device.

                // This flag is required in order to enable compatibility with Direct2D.
                UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

        #if defined(_DEBUG)
                // If the project is in a debug build, enable debugging via SDK Layers with this flag.
                creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
        #endif

                // This array defines the ordering of feature levels that D3D should attempt to create.
                D3D_FEATURE_LEVEL featureLevels[] =
                {
                    D3D_FEATURE_LEVEL_11_1,
                    D3D_FEATURE_LEVEL_11_0,
                    D3D_FEATURE_LEVEL_10_1,
                    D3D_FEATURE_LEVEL_10_0,
                    D3D_FEATURE_LEVEL_9_3,
                    D3D_FEATURE_LEVEL_9_1
                };

                ComPtr<ID3D11Device> d3dDevice;
                ComPtr<ID3D11DeviceContext> d3dDeviceContext;
                DX::ThrowIfFailed(
                    D3D11CreateDevice(
                        nullptr,                    // specify nullptr to use the default adapter
                        D3D_DRIVER_TYPE_HARDWARE,
                        nullptr,                    // leave as nullptr if hardware is used
                        creationFlags,              // optionally set debug and Direct2D compatibility flags
                        featureLevels,
                        ARRAYSIZE(featureLevels),
                        D3D11_SDK_VERSION,          // always set this to D3D11_SDK_VERSION
                        &d3dDevice,
                        nullptr,
                        &d3dDeviceContext
                        )
                    );

                // Retrieve the Direct3D 11.1 interfaces.
                DX::ThrowIfFailed(d3dDevice.As(&m_d3dDevice));
                DX::ThrowIfFailed(d3dDeviceContext.As(&m_d3dDeviceContext));

                // After the D3D device is created, create additional application resources.
                CreateWindowSizeDependentResources();

                // Create a Basic Reader-Writer class to load data from disk. This class is examined
                // in the Resource Loading sample.
                BasicReaderWriter^ reader = ref new BasicReaderWriter();

                // Load the raw vertex shader bytecode from disk and create a vertex shader with it.
                auto vertexShaderBytecode = reader->ReadData("SimpleVertexShader.cso");
                ComPtr<ID3D11VertexShader> vertexShader;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateVertexShader(
                        vertexShaderBytecode->Data,
                        vertexShaderBytecode->Length,
                        nullptr,
                        &vertexShader
                        )
                    );

                // Create an input layout that matches the layout defined in the vertex shader code.
                // For this lesson, this is simply a float2 vector defining the vertex position.
                const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
                {
                    { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                };
                ComPtr<ID3D11InputLayout> inputLayout;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateInputLayout(
                        basicVertexLayoutDesc,
                        ARRAYSIZE(basicVertexLayoutDesc),
                        vertexShaderBytecode->Data,
                        vertexShaderBytecode->Length,
                        &inputLayout
                        )
                    );

                // Load the raw pixel shader bytecode from disk and create a pixel shader with it.
                auto pixelShaderBytecode = reader->ReadData("SimplePixelShader.cso");
                ComPtr<ID3D11PixelShader> pixelShader;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreatePixelShader(
                        pixelShaderBytecode->Data,
                        pixelShaderBytecode->Length,
                        nullptr,
                        &pixelShader
                        )
                    );

                // Create vertex and index buffers that define a simple triangle.
                float3 triangleVertices[] =
                {
                    float3(-0.5f, -0.5f, 13.5f),
                    float3( 0.0f,  0.5f, 0),
                    float3( 0.5f, -0.5f, 0),
                };

                D3D11_BUFFER_DESC vertexBufferDesc = {0};
                vertexBufferDesc.ByteWidth = sizeof(float3) * ARRAYSIZE(triangleVertices);
                vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
                vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
                vertexBufferDesc.CPUAccessFlags = 0;
                vertexBufferDesc.MiscFlags = 0;
                vertexBufferDesc.StructureByteStride = 0;

                D3D11_SUBRESOURCE_DATA vertexBufferData;
                vertexBufferData.pSysMem = triangleVertices;
                vertexBufferData.SysMemPitch = 0;
                vertexBufferData.SysMemSlicePitch = 0;

                ComPtr<ID3D11Buffer> vertexBuffer;
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateBuffer(
                        &vertexBufferDesc,
                        &vertexBufferData,
                        &vertexBuffer
                        )
                    );

                // Once all D3D resources are created, configure the application window.

                // Allow the application to respond when the window size changes.
                m_window->SizeChanged += ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(
                    this,
                    &Direct3DTutorialViewProvider::OnWindowSizeChanged
                    );

                // Specify the cursor type as the standard arrow cursor.
                m_window->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);

                // Activate the application window, making it visible and enabling it to receive events.
                m_window->Activate();

                // Enter the render loop. Note that tailored applications should never exit.
                while (true)
                {
                    // Process events incoming to the window.
                    m_window->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

                    // Specify the render target we created as the output target.
                    ID3D11RenderTargetView* targets[1] = {m_renderTargetView.Get()};
                    m_d3dDeviceContext->OMSetRenderTargets(
                        1,
                        targets,
                        NULL // use no depth stencil
                        );

                    // Clear the render target to a solid color.
                    const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };

                    // Code fails here
                    m_d3dDeviceContext->ClearRenderTargetView(
                        m_renderTargetView.Get(),
                        clearColor
                        );

                    m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

                    // Set the vertex and index buffers, and specify the way they define geometry.
                    UINT stride = sizeof(float3);
                    UINT offset = 0;
                    m_d3dDeviceContext->IASetVertexBuffers(
                        0,
                        1,
                        vertexBuffer.GetAddressOf(),
                        &stride,
                        &offset
                        );
                    m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

                    // Set the vertex and pixel shader stage state.
                    m_d3dDeviceContext->VSSetShader(vertexShader.Get(), nullptr, 0);
                    m_d3dDeviceContext->PSSetShader(pixelShader.Get(), nullptr, 0);

                    // Draw the cube.
                    m_d3dDeviceContext->Draw(3, 0);

                    // Present the rendered image to the window. Because the maximum frame latency is set to 1,
                    // the render loop will generally be throttled to the screen refresh rate, typically around
                    // 60Hz, by sleeping the application on Present until the screen is refreshed.
                    DX::ThrowIfFailed(m_swapChain->Present(1, 0));
                }
            }

            // This method is called before the application exits.
            void Uninitialize()
            {
            }

        private:
            // This method is called whenever the application window size changes.
            void OnWindowSizeChanged(
                _In_ CoreWindow^ sender,
                _In_ WindowSizeChangedEventArgs^ args
                )
            {
                m_renderTargetView = nullptr;
                CreateWindowSizeDependentResources();
            }

            // This method creates all application resources that depend on
            // the application window size. It is called at app initialization,
            // and whenever the application window size changes.
            void CreateWindowSizeDependentResources()
            {
                if (m_swapChain != nullptr)
                {
                    // If the swap chain already exists, resize it.
                    DX::ThrowIfFailed(
                        m_swapChain->ResizeBuffers(
                            2,
                            0,
                            0,
                            DXGI_FORMAT_R8G8B8A8_UNORM,
                            0
                            )
                        );
                }
                else
                {
                    // If the swap chain does not exist, create it.
                    DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};
                    swapChainDesc.Stereo = false;
                    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
                    swapChainDesc.Scaling = DXGI_SCALING_NONE;
                    swapChainDesc.Flags = 0;

                    // Use automatic sizing.
                    swapChainDesc.Width = 0;
                    swapChainDesc.Height = 0;

                    // This is the most common swap chain format.
                    swapChainDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;

                    // Don't use multi-sampling.
                    swapChainDesc.SampleDesc.Count = 1;
                    swapChainDesc.SampleDesc.Quality = 0;

                    // Use two buffers to enable flip effect.
                    swapChainDesc.BufferCount = 2;

                    // We recommend using this swap effect for all applications.
                    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

                    // Once the swap chain description is configured, it must be
                    // created on the same adapter as the existing D3D Device.

                    // First, retrieve the underlying DXGI Device from the D3D Device.
                    ComPtr<IDXGIDevice2> dxgiDevice;
                    DX::ThrowIfFailed(m_d3dDevice.As(&dxgiDevice));

                    // Ensure that DXGI does not queue more than one frame at a time. This both reduces
                    // latency and ensures that the application will only render after each VSync, minimizing
                    // power consumption.
                    DX::ThrowIfFailed(dxgiDevice->SetMaximumFrameLatency(1));

                    // Next, get the parent factory from the DXGI Device.
                    ComPtr<IDXGIAdapter> dxgiAdapter;
                    DX::ThrowIfFailed(dxgiDevice->GetAdapter(&dxgiAdapter));
                    ComPtr<IDXGIFactory2> dxgiFactory;
                    DX::ThrowIfFailed(
                        dxgiAdapter->GetParent(
                            __uuidof(IDXGIFactory2),
                            &dxgiFactory
                            )
                        );

                    // Finally, create the swap chain.
                    DX::ThrowIfFailed(
                        dxgiFactory->CreateSwapChainForImmersiveWindow(
                            m_d3dDevice.Get(),
                            DX::GetIUnknown(m_window),
                            &swapChainDesc,
                            nullptr, // allow on all displays
                            &m_swapChain
                            )
                        );
                }

                // Once the swap chain is created, create a render target view. This will
                // allow Direct3D to render graphics to the window.
                ComPtr<ID3D11Texture2D> backBuffer;
                DX::ThrowIfFailed(
                    m_swapChain->GetBuffer(
                        0,
                        __uuidof(ID3D11Texture2D),
                        &backBuffer
                        )
                    );
                DX::ThrowIfFailed(
                    m_d3dDevice->CreateRenderTargetView(
                        backBuffer.Get(),
                        nullptr,
                        &m_renderTargetView
                        )
                    );

                // After the render target view is created, specify that the viewport,
                // which describes what portion of the window to draw to, should cover
                // the entire window.
                D3D11_TEXTURE2D_DESC backBufferDesc = {0};
                backBuffer->GetDesc(&backBufferDesc);

                D3D11_VIEWPORT viewport;
                viewport.TopLeftX = 0.0f;
                viewport.TopLeftY = 0.0f;
                viewport.Width = static_cast<float>(backBufferDesc.Width);
                viewport.Height = static_cast<float>(backBufferDesc.Height);
                viewport.MinDepth = D3D11_MIN_DEPTH;
                viewport.MaxDepth = D3D11_MAX_DEPTH;
                m_d3dDeviceContext->RSSetViewports(1, &viewport);
            }
        };

        // This class defines how to create the custom View Provider defined above.
        ref class Direct3DTutorialViewProviderFactory : IViewProviderFactory
        {
        public:
            IViewProvider^ CreateViewProvider()
            {
                return ref new Direct3DTutorialViewProvider();
            }
        };

        [Platform::MTAThread]
        int main(array<Platform::String^>^)
        {
            auto viewProviderFactory = ref new Direct3DTutorialViewProviderFactory();
            Windows::ApplicationModel::Core::CoreApplication::Run(viewProviderFactory);
            return 0;
        }


  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate, here's a simple example. If you have a Web API method like this:

        [HttpGet]
        public HttpResponseMessage Authenticate(string username, string password)
        { … }

    and then hit it with a URL like this:

        http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit

    it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead, like this:

        [HttpPost]
        public HttpResponseMessage Authenticate(string username, string password)
        { … }

    and hit it with a POST HTTP request like this:

        POST http://localhost:88/samples/authenticate HTTP/1.1
        Host: localhost:88
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Content-type: application/x-www-form-urlencoded
        Content-Length: 30

        Username=ricks&Password=sekrit

    you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null, and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back asking why I'd want to do this in the first place - after all, HTML form submissions are the domain of MVC and not Web API, which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values:

        $.post(url, { Username: "Rick", Password: "sekrit" }, function(result) { … });

    but alas that doesn't work.

    How ASP.NET Web API handles Content Bodies

    Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from the async nature of Web API, where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data:

        Model Binding - object property mapping to bind POST values
        FormDataCollection - collection of POST keys/values

    ModelBinding POST Values - Binding POST data to Object Properties

    The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: you have to provide a strongly typed object that can receive the data, and this object has to map the inbound data. To rewrite the example above to use Model Binding, I have to create a class that maps the properties that I need as parameters:

        public class LoginData
        {
            public string Username { get; set; }
            public string Password { get; set; }
        }

    and then accept the data like this in the API method:

        [HttpPost]
        public HttpResponseMessage Authenticate(LoginData login)
        {
            string username = login.Username;
            string password = login.Password;
            …
        }

    This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this:

        POST http://localhost:88/samples/authenticate HTTP/1.1
        Host: localhost:88
        Accept: application/json
        Content-type: application/json
        Content-Length: 40

        {"Username":"ricks","Password":"sekrit"}

    it works as well and transparently, courtesy of the nice content negotiation features of Web API. There's nothing wrong with using Model Binding, and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data; sometimes there are just a couple of simple values that need to be sent back. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't, then you can often end up with a plethora of 'message objects' that serve no further purpose than to make Model Binding work. Note that you can accept multiple parameters with Model Binding, so the following would still work:

        [HttpPost]
        public HttpResponseMessage Authenticate(LoginData login, string loginDomain)

    but only the object will be bound to POST data. As long as loginDomain comes from the query string or route data this will work.

    Collecting POST values with FormDataCollection

    Another, more dynamic approach to handling POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general), and you then read the values out individually by querying each:

        [HttpPost]
        public HttpResponseMessage Authenticate(FormDataCollection form)
        {
            var username = form.Get("Username");
            var password = form.Get("Password");
            …
        }

    The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such a setup, as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and especially with string parameters it is easy to deal with. It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.

    What about [FromBody]?

    Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature, you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how [FromBody] works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. Not really sure where the Web API team thought [FromBody] would really be a good fit, other than as a down and dirty way to send a full string buffer.

    Extending Web API to make multiple POST Vars work? Don't think so

    Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad, because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community, I didn't hear anything that even suggests that this is possible. From what I could find, I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work, there's not much that can be done. If somebody has an idea how this could be accomplished, I would love to hear about it.

    Do we really need multi-value POST mapping?

    I think that POST value mapping is a feature one would expect any API tool to have. If you look at common APIs out there like Flickr and Google Maps etc., they all work with POST data. POST data is very prominent, much more so than JSON inputs, and so supporting as many options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes, as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question. I hope for V.next of Web API Microsoft will consider this a feature that's worth adding.

    Related Articles

        Passing multiple POST parameters to Web API Controller Methods
        Mike Stall's post: How Web API does Parameter Binding
        Where does ASP.NET Web API Fit?

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api


  • Change Data Capture Webinar

    I am going to be doing a webinar with our friends at Attunity on Change Data Capture. Attunity have a good story around this technology and you can use it in your SSIS loads to great effect.

    Join Attunity and Konesans/SQLIS for a webinar on 17 September. Space is limited. Reserve your webinar seat now at: https://www1.gotomeeting.com/register/693735512

    Want increased efficiency and real-time speed when conducting ETL loads? Need lower implementation costs while minimizing system impact? Learn how change data capture (CDC) technologies can reduce ETL load times. Allan Mitchell, Principal Consultant at Konesans and SQL Server MVP specialising in ETL, will explain CDC concepts and benefits and how CDC can dramatically reduce ETL load times. Ian Archibald, Pre-Sales Director EMEA for Attunity, will present and demonstrate Attunity's award-winning Oracle-CDC for SSIS, a fully-integrated SSIS solution for designing, deploying and managing Oracle CDC processes.

    Title: Change Data Capture - Reducing ETL Load Times
    Date: Thursday, September 17, 2009
    Time: 10:00 AM - 11:00 AM BST

    ABOUT THE SPEAKERS:
    Allan Mitchell is the joint owner of Konesans Ltd, a UK based consultancy specializing in SQL Server, and most importantly SQL Server Integration Services. Having been working with SQL Server from 6.5 onwards, he has extensive experience in many aspects of SQL Server, but now focuses on the BI suite of tools. He is a SQL Server MVP, a frequent poster on the MS SSIS/DTS newsgroups, and runs the sqldts.com and sqlis.com resource sites.
    Ian Archibald, Attunity Pre-Sales Director EMEA, has worked in Attunity's UK office for 17 years. An expert in Attunity solutions, Ian has extensive knowledge of Attunity's products and data integration & CDC technologies.

    After registering you will receive a confirmation email containing information about joining the webinar.

    System Requirements
    PC-based attendees - Required: Windows® 2000, XP Home, XP Pro, 2003 Server, Vista
    Macintosh®-based attendees - Required: Mac OS® X 10.4 (Tiger®) or newer


  • SQL SERVER – The Difference between Dual Core vs. Core 2 Duo

    - by pinaldave
    I have decided that I would not write on this subject until I had received a total of 25 questions about it. Here are a few questions from the list:

    Questions:
    1. What is the difference between Dual Core and Core 2 Duo?
    2. Which one is recommended for SQL Server: Core 2 Duo or Dual Core?
    3. Can I upgrade my Dual Core to Core 2 Duo?
    4. If Dual Core has 2 CPUs, how many CPUs does Core 2 Duo have?
    5. Is it true that Core 2 Duo and Dual Core mean the same thing?

    Well, let us see the answer. Optimistically, I will be directing everybody to this blog post if I receive a question of the same kind sometime in the future. To verify the information that I provide, visit Intel's site. For additional information regarding the subject, visit Wikipedia.

    My Answer: Any computer that has two CPUs or two "cores" is known as Dual Core. Core Duo is an Intel brand name for Dual Core. Core 2 Duo is simply a higher version of Core Duo (e.g. for the Pentium brand, it's like Pentium I, Pentium II, etc.). The computer I am using now has a Core 2 Duo. Intel has also launched new brands, which they call i3, i5, and i7. Here, the numbers are not related to the number of cores; rather, they show the range of the CPU: i3 is the low end of the range and i7 is the high end.

    Feel free to add more details by adding valuable comments here. And if you still want to ask why I created this blog post, well, I mentioned that I was waiting for a threshold of 25 questions to be hit before writing about this subject, which I didn't really plan to write about.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology


