Search Results

Search found 4009 results on 161 pages for 'protocol buffers'.

Page 6/161 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • How Spanning Tree Protocol detects Loops

    - by AMIT
    For the last few days I've been reading about Spanning Tree Protocol (STP), an L2 protocol, and I understand how it prevents loops in a network, as well as the various steps in STP. But one thing I still want to know is how STP actually detects loops in the network so that it can prevent them. Somewhere I read that STP uses BPDUs as probes to detect loops; that is, when a switch sends a BPDU with a multicast destination address and receives that same BPDU back, there is a loop in the network. Is that really how STP detects loops?
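
    For illustration, here is a toy Python simulation of the duplicate-frame idea described above. It is a deliberate simplification: real STP prevents loops up front by electing a root bridge and blocking redundant ports rather than by echo-testing, and the topologies below are made up.

        def has_loop(links, origin):
            """Flood one frame from origin. In a loop-free LAN every switch
            receives it at most once, so a second delivery reveals a loop."""
            delivered = {origin}
            frontier = [(origin, None)]          # (holder, switch it arrived from)
            while frontier:
                next_frontier = []
                for node, came_from in frontier:
                    for neighbour in links[node]:
                        if neighbour == came_from:
                            continue             # never resend out the arrival port
                        if neighbour in delivered:
                            return True          # duplicate delivery: loop exists
                        delivered.add(neighbour)
                        next_frontier.append((neighbour, node))
                frontier = next_frontier
            return False

        ring  = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B'}}
        chain = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}}
        print(has_loop(ring, 'A'))   # True
        print(has_loop(chain, 'A'))  # False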

    Read the article

  • Is there a way to force HTTPS protocol in a Kohana 2.3 site?

    - by alex
    I've got a site built on top of Kohana 2.3 which I now have to make all links https. I set this in application/config/config.php:

        $config['site_protocol'] = 'https';

    This makes all links on the site use the https protocol. However, when I first enter the site via http, it does not automatically redirect to https. Is there a way to make Kohana do this, or do I just need to do some custom coding? I've found this .htaccess rule too; would it be fine to just drop this in?

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !=443
        RewriteRule ^ https://yourdomain.tld%{REQUEST_URI} [NS,R,L]

    Thanks.

    Read the article

  • Where can I find the transaction protocol used by Automated Teller Machines?

    - by Dave
    I'm doing a grad-school software engineering project and I'm looking for the protocol that governs communications between ATMs and bank networks. I've been googling for quite a while now, and though I'm finding all sorts of interesting information about ATMs, I'm surprised to find that there seems to be no industry standard for high-level communications. I'm not talking about 3DES or low-level transmission protocols, but something along the lines of an Interface Control Document; something that governs the sequence of events for various transactions: verify credentials, withdrawal, check balance, etc. Any ideas? Does anything like this even exist? I can't believe that after all this time the banks and ATM manufacturers are still just making this up as they go. A shorter question: if I wanted to go into the ATM software manufacturing business, where would I start looking for standards?

    Read the article

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web browser bookmarklet, and I get the following behaviour: in Chromium, the "Launch Application" dialog comes up and calls xdg-open org-protocol://..., which ends up firing a new Chromium frame. In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the error message "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program", without even being shown an external application selection dialog. I'm not using any desktop environment, so I need to make this work strictly with xdg. However, despite reading the shared MIME info spec etc., I still can't fathom a working configuration.
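
    For reference, the usual xdg route for a custom URI scheme is an x-scheme-handler MIME type bound to a desktop entry; a minimal sketch (the file name org-protocol.desktop and the paths are illustrative):

        # ~/.local/share/applications/org-protocol.desktop
        [Desktop Entry]
        Name=org-protocol handler
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

        # register the handler and refresh the desktop database
        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol
        update-desktop-database ~/.local/share/applications

    After this, xdg-open org-protocol://... should hand the URL to emacsclient.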

    Read the article

  • Oracle Error ORA-12560 TNS:Protocol Adapter error?

    - by David Basarab
    I am using Oracle Database 10g. Both servers are Windows 2003. I have an Oracle database set up on one server. Here is the tnsnames.ora from the server with the database:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL.VIRTUALHOLD.COM =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the server are:

        ORACLE_HOME = C:\oracle\product\10.2.0\db_1
        ORACLE_SID = orcl

    I am trying to connect to it from another box that has the Oracle client installed. Here is the tnsnames.ora installed on the client server:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\client_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )

        ORACLE_HOME = C:\oracle\product\10.2.0\client_1
        ORACLE_SID = orcl

    Locally on the database server I can connect through sqlplus with no issues. On the client machine I keep getting the error:

        ORA-12560: TNS:protocol adapter error

    What am I missing? Does the client tnsnames.ora need to be different?
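
    One detail worth checking, offered as a guess rather than a confirmed diagnosis: the client must name the TNS alias when connecting, e.g. (credentials are placeholders):

        sqlplus scott/tiger@ORCL

    Running sqlplus without the @ORCL suffix bypasses tnsnames.ora and tries to attach to a local instance, which on a client-only machine is a classic source of ORA-12560.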

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPFv2/v3, but using the cost that my algorithms have computed.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including the packet structures, and create new message types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new and move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand which parts of the kernel I would need to edit here. I have searched for days and cannot find out how to deploy a not-yet-existing routing protocol without using an application-level framework. If somebody could push me in the right direction, that'd be awesome.

    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
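
    For orientation, a rough sketch of how a stock Quagga ospfd is deployed on a node; a modified build would follow the same steps. No kernel changes are involved, because zebra installs the routes ospfd computes into the kernel table via netlink (paths and the config contents below are illustrative):

        # build and install the (modified) Quagga tree
        ./configure --enable-user=quagga --enable-group=quagga
        make && sudo make install

        # /etc/quagga/ospfd.conf on every node, e.g.:
        #   router ospf
        #    network 10.0.0.0/8 area 0

        # enable ordinary IP forwarding, then start zebra first, then ospfd
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo zebra -d
        sudo ospfd -d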

    Read the article

  • OpenSSL force client to use specific protocol

    - by Ex Umbris
    When Subversion attempts to connect to an https URL, the underlying protocol library (OpenSSL) starts the secure protocol negotiation at the most basic level, plain SSL. Unfortunately, I have to connect to a server that requires SSL3 or TLS1, and refuses to respond to SSL or SSL2. I've done some troubleshooting using s_client and confirmed that if I let s_client start with the default protocol, the server never responds to the CLIENT HELLO:

        $ openssl s_client -connect server.domain.com:443
        CONNECTED(00000003)
        write:errno=104
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 320 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    Watching this in Wireshark I see:

        Client                    Server
        -------syn---------->
        <------ack-----------
        ---CLIENT HELLO----->
        <------ack-----------
            [60 second pause]
        <------rst-----------

    If I tell s_client to use ssl2, the server immediately closes the connection. Only ssl3 and tls1 work. Is there any way to configure OpenSSL to skip SSL and SSL2, and start the negotiation with TLS or SSL3? I've found the OpenSSL config file, but that seems to control only certificate generation.
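
    For what it's worth, s_client itself can be pinned to a single protocol version in OpenSSL builds of that era; a sketch (the exact set of flags depends on how your OpenSSL was compiled):

        openssl s_client -connect server.domain.com:443 -ssl3
        openssl s_client -connect server.domain.com:443 -tls1

    That only covers manual testing, not what Subversion's library does internally.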

    Read the article

  • Objective-C Pointer to class that implements a protocol

    - by Winder
    I have three classes which implement the same protocol and have the same parent class, which doesn't implement the protocol. Normally I would make the protocol pure virtual functions in the parent class, but I couldn't find an Objective-C way to do that. How can I use polymorphism with these subclasses and call the methods declared in the protocol without compiler warnings? Some pseudocode, if that didn't make sense:

        @interface superclass : NSObject {}
        @interface child1 : superclass <MyProtocol> {}
        @interface child2 : superclass <MyProtocol> {}

    The consumer of these classes:

        @class child1;
        @class child2;
        @class superclass;

        @interface SomeViewController : UIViewController {
            child1 *oneView;
            child2 *otherView;
            superclass *currentView;
        }

        - (void)someMethod {
            [currentView protocolFunction];
        }

    The only nice way I've found to do pure virtual functions in Objective-C is a hack: putting [self doesNotRecognizeSelector:_cmd]; in the parent class's implementation, but it isn't ideal.

    Read the article

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate of each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code:

        // vertices is a Vertex[]
        // indices is a ushort[]
        // VertexDefs stores the vertex size (sizeof(float) * 5)

        // vertex data
        numVertices = vertices.Length;
        DataStream data = new DataStream(VertexDefs.size * numVertices, true, true);
        data.WriteRange<Vertex>(vertices);
        data.Position = 0;

        // vertex buffer parameters
        BufferDescription vbDesc = new BufferDescription()
        {
            BindFlags = BindFlags.VertexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = VertexDefs.size * numVertices,
            StructureByteStride = VertexDefs.size,
            Usage = ResourceUsage.Default
        };

        // create vertex buffer
        vertexBuffer = new Buffer(Graphics.device, data, vbDesc);
        vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0);
        data.Dispose();

        // index data
        numIndices = indices.Length;
        data = new DataStream(sizeof(ushort) * numIndices, true, true);
        data.WriteRange<ushort>(indices);
        data.Position = 0;

        // index buffer parameters
        BufferDescription ibDesc = new BufferDescription()
        {
            BindFlags = BindFlags.IndexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = sizeof(ushort) * numIndices,
            StructureByteStride = sizeof(ushort),
            Usage = ResourceUsage.Default
        };

        // create index buffer
        indexBuffer = new Buffer(Graphics.device, data, ibDesc);
        data.Dispose();

        Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices));

    And my drawing code:

        // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data
        // e is of type ShaderEffect

        // get the technique
        ShaderTechnique t;
        if(!e.techniques.TryGetValue(techniqueName, out t))
            return;

        // effect variables
        e.SetMatrix("worldView", worldView);
        e.SetMatrix("projection", projection);
        e.SetResource("diffuseMap", texture);
        e.SetSampler("textureSampler", sampler);

        // set per-mesh/technique settings
        Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
        Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0);
        Graphics.context.PixelShader.SetSampler(sampler, 0);

        // render for each pass
        foreach(ShaderPass p in t.passes)
        {
            Graphics.context.InputAssembler.InputLayout = p.layout;
            p.pass.Apply(Graphics.context);
            Graphics.context.DrawIndexed(numIndices, 0, 0);
        }

    How can I do this?

    Read the article

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords, etc.), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a vertex buffer object that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];   // x, y, z       // offset 0, size = 3*sizeof(float)
            float m_TexCoords[2];  // u, v          // offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];     // nx, ny, nz
            float colour[4];       // r, g, b, a
            float padding[20];     // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the vertex array object, though. My code for creating the vertex array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps:

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n++ )
        {
            glEnableVertexAttribArray( n );
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *)NULL + vShaderArgs[n].nFieldOffset
            );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;

            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
                return;
            }

            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray( 0 );
            glUseProgram( 0 );
        }

    Can anyone see errors or omissions in either the VAO creation code or rendering code? Thanks!

    Read the article

  • How to consume a web service (created in C#) using the HTTPS protocol

    - by Navaneeth A Krishnan
    I'm developing a small project, a C# web service. Now I want to run the web service over the HTTPS protocol. For that I have installed a web authentication certificate on my system, and my IIS 5.1 server is running under HTTPS (I configured this under Directory Security). Now I want to invoke the web service using HTTPS. Somebody told me that I need to modify the WSDL file for the web service, but I don't know how to do it. Currently my service URL is:

        http://localhost:2335/SWebService.asmx

    I would like to use https instead of http.

    Read the article

  • Modeling software for network serialization protocol design

    - by Aurélien Vallée
    Hello, I am currently designing a low-level network serialization protocol (in fact, a refinement of an existing protocol). As the work progresses, pen-and-paper documents are starting to show their limits: I have tons of papers, new and outdated merged together, etc. And I can't show anything to anyone, since I describe the protocol using my own notation (a mix of flow charts and C structures). I need software that would help me design a network protocol. I should be able to create structures, fields, their sizes, their layout, etc., and the software would generate some nice UML-ish diagrams.

    Read the article

  • What is UVIndex and how do I use it in OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model now has a "UVIndex" buffer and I'm not sure where I am supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);

        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz

    Read the article

  • Is it only possible to display 64k vertices on the monitor with 16-bit indices?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and stumbled upon the fact that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store the indices (triangle to vertex). This means that I can only address 2^16 = 65536 vertices. I also read on the internet that you should prefer 16-bit over 32-bit indices because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get along with so few vertices? I thought faces of people alone already used a lot of polygons (which are made up of vertices?). It's not relevant for me yet, but I am interested: Do game scenes use only 65536 vertices? Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)? Is there some method to get more into the GPU buffer? I already read in some other posts that there seems to be a limit of 64k per mesh too, so maybe you can compact stuff into meshes?

    Read the article

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so (de)serialization can happen automatically? The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements. I have a C# application that communicates with this device. Naturally, I don't want to work at the byte level throughout my application; I want to separate the protocol from my application code. Therefore I need to translate the byte stream into structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions. I imagine XML that looks like this (note that my experience designing XML is limited):

        <packet>
          <field name="Author" type="int32" />
          <field name="Nickname" type="bytes" size="4">
            <condition type="range">
              <field>Author</field>
              <min>3</min>
              <max>6</max>
            </condition>
          </field>
        </packet>

    .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
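
    To make the table-driven idea concrete, here is a hedged sketch of a decoder driven by exactly that XML, including the range condition. It is in Python rather than C# purely for brevity; the field names and wire format come from the example above, everything else (little-endian int32, the decode helper itself) is assumed:

        import struct
        import xml.etree.ElementTree as ET

        SPEC = """<packet>
          <field name="Author" type="int32" />
          <field name="Nickname" type="bytes" size="4">
            <condition type="range">
              <field>Author</field>
              <min>3</min>
              <max>6</max>
            </condition>
          </field>
        </packet>"""

        def decode(spec_xml, data):
            """Deserialize one packet from bytes according to the XML spec."""
            values, offset = {}, 0
            for field in ET.fromstring(spec_xml).findall('field'):
                cond = field.find('condition')
                if cond is not None and cond.get('type') == 'range':
                    ref = values[cond.findtext('field')]
                    low, high = int(cond.findtext('min')), int(cond.findtext('max'))
                    if not low <= ref <= high:
                        continue  # condition not met: field absent from the stream
                name, ftype = field.get('name'), field.get('type')
                if ftype == 'int32':
                    values[name] = struct.unpack_from('<i', data, offset)[0]
                    offset += 4
                elif ftype == 'bytes':
                    size = int(field.get('size'))
                    values[name] = data[offset:offset + size]
                    offset += size
            return values

        print(decode(SPEC, struct.pack('<i', 4) + b'alex'))  # Nickname present
        print(decode(SPEC, struct.pack('<i', 9)))            # Nickname skipped

    The same approach maps to C# with a dictionary of per-type field readers.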

    Read the article

  • Which protocol do clients use when communicating with servers in a SAN?

    - by Mario De Schaepmeester
    I'm trying to wrap my head around how a SAN works and how it is implemented. If I understand this correctly, clients wanting to access the storage devices in a SAN need to communicate with the servers via the LAN. When the SAN is implemented with Fibre Channel, these servers are Fibre Channel-compliant devices, and internally in the SAN they work with the Fibre Channel Protocol. Both data and communications are supported by Fibre Channel. But which application-layer protocol do the clients use on the LAN to communicate with the servers? Is the data simply transferred via Ethernet as well? This is the part I am stuck on. I went through a lot of sources, but most don't really mention protocols, and those that do only mention FCP.

    Read the article

  • Protocol Security With PPTP

    - by why
    I found these words in the PPTP client source:

        Summary by Peter Mueller

        PPTP is known to be a faulty protocol. The designers of the protocol,
        Microsoft, recommend not to use it due to the inherent risks. Lots of
        people use PPTP anyway due to ease of use, but that doesn't mean it is
        any less hazardous. The maintainers of PPTP Client and Poptop recommend
        using OpenVPN (SSL based) or IPSec instead.

        (Posted on [1]2005-08-10 to the [2]mailing list)

    But as far as I know, many people use PPTP as a VPN because there is no need to install a client on Windows. What do you think about PPTP?

    Read the article

  • Alternate Client for Cisco Unified Personal Communicator protocol

    - by Jason M
    At work we have an in-house chat system using CUPC. Does anyone else out there use this? There are a few things I do not like about this client: Where's the chat log? If I close the window, I have no way of getting my conversation back. Tabbed interface? That would be nice; I hate having multiple chat windows up and having to arrange them around my desktop as more people start talking to me. I don't like that I have to use this one-off application for this particular protocol when other chat clients handle 99% of the other protocols I use. Tell me: is the protocol an open standard for which other applications have support (Pidgin, Adium, Digsby, etc.)? If not, can I overcome these issues from within CUPC? Perhaps there are newer versions of the client that overcome these issues.

    Read the article

  • Technically, how does uploading apps to an Android phone work?

    - by unixman83
    Amazon.com has an Android marketplace. How do the apps go from Amazon.com to my phone? I am looking for a protocol-level analysis. Do they use a basic protocol like FTP and then check with a Google digital signature? I do not own an Android device. I want an explanation of how the protocol operates, because I want to offer an Android app as a free download to my users off my website, like Amazon does.

    Read the article

  • Has Little Endian won?

    - by espertus
    When teaching recently about the Big vs. Little Endian battle, a student asked whether it had been settled, and I realized I didn't know. Looking at the Wikipedia article, it seems that the most popular current OS/architecture pairs use Little Endian but that Internet Protocol specifies Big Endian for transferring numeric values in packet headers. Would that be a good summary of the current status? Do current network cards or CPUs provide hardware support for switching byte order?
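
    As a concrete aside on the two byte orders the question contrasts (a generic illustration, not tied to any source cited in the thread): the same 32-bit value serialized both ways with Python's struct module:

        import struct, socket

        n = 0x0A0B0C0D
        print(struct.pack('>I', n).hex())  # '0a0b0c0d'  big endian / network order
        print(struct.pack('<I', n).hex())  # '0d0c0b0a'  little endian

        # the classic htonl/ntohl helpers do the host<->network swap;
        # on a little endian host this prints 0xd0c0b0a
        print(hex(socket.htonl(n)))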

    Read the article

  • Audio and video data in RTP

    - by Banana
    Suppose a user wants to transmit both audio and video to another user, where the formats are AMR for audio and H.264 for video. Does the user have to transmit audio and video packets separately? Meaning that it is not possible to mix audio and video within the same RTP packet: is that correct? If this is true, I guess the RTP protocol will need to know the SSRCs of both the audio and the video streams to be able to check the sync of the two streams.
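
    For reference, the fields involved live in the fixed RTP header of RFC 3550; a minimal sketch packing one header per stream (the payload type numbers and SSRCs are made-up dynamic values):

        import struct

        def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
            byte1 = 0x80                                  # V=2, P=0, X=0, CC=0
            byte2 = (marker << 7) | (payload_type & 0x7F)
            return struct.pack('!BBHII', byte1, byte2, seq, timestamp, ssrc)

        # one RTP stream per medium: distinct payload type, SSRC, and clock rate
        audio = rtp_header(96, seq=1, timestamp=160,  ssrc=0x1111)  # e.g. AMR
        video = rtp_header(97, seq=1, timestamp=3000, ssrc=0x2222)  # e.g. H.264

    Lip-sync across the two streams is normally done via RTCP sender reports, which map each stream's RTP timestamps onto a common wallclock.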

    Read the article

  • Vim searching through all existing buffers

    - by anon
    When dealing with a single file, I'm used to:

        /blah
        do some work
        n
        do some work
        n
        do some work

    Suppose now I want to search for some pattern over all buffers loaded in Vim, do some work on them, and move on. What commands do I use for this workflow?
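
    One commonly suggested approach (a sketch, not from this thread; the pattern and replacement are placeholders) is to let :bufdo repeat a command over every listed buffer:

        :bufdo %s/blah/replacement/gce | update

    The e flag keeps buffers without a match from aborting the run, c asks for confirmation at each hit, and update writes a buffer only if it changed.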

    Read the article

  • Are depth buffers mandatory?

    - by numerical25
    I am just trying to better understand the DirectX pipeline. I'm curious whether depth buffers are mandatory in order to get things to work, or whether a depth buffer is just something you need if you want objects to appear correctly behind one another.

    Read the article
