Search Results

Search found 5612 results on 225 pages for 'communication protocol'.

Page 5/225 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • A malware attack paralyzes the communication systems of an ambulance network serving 90% of New Zealand

    A malware attack paralyzes the communication systems of an ambulance network serving 90% of New Zealand. A malware attack has managed to paralyze the automated response systems of an entire fleet of ambulances in New Zealand. The outage of the St John communication centres, which serve 90% of the ambulances in the country, forced them to dispatch paramedics manually over conventional radio systems for more than 24 hours. The problems began last Wednesday, when an unidentified piece of malware infected the centres' systems across the country, even though they were protected by antivirus software, explains Alan Goudge, manager of oper...

    Read the article

  • Cloud service and IM protocol advice for a group chat mobile app backend

    - by Jonathan
    Overview
    I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk on chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream).
    Chat requirements
    The chat protocol used should be flexible: it should allow me to determine who can view/post on 'chat rooms' based on certain other factors, as determined by the first poster/creator of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after providing only a simple nickname (which could be changed later).
    Chat protocol plans
    Having looked around, I think the XMPP protocol is the best candidate. In particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know of another potential solution?
    Cloud service
    I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, and Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?
    General project philosophy
    I have only recently started looking into this project's feasibility, and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented implementation of a proven, reliable group chat protocol, etc.
    My background
    I have some programming knowledge from a computer science degree. The main languages I've used have been Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming level is relatively basic anyway).
    Thank you
    Thanks for reading; any advice you have about any aspect would be greatly appreciated :-)
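    For what it's worth, XEP-0045 (Multi-User Chat) is what usually backs group rooms in XMPP, and the "who can view/post" rules would live in the room configuration on the server side. Below is a minimal sketch of joining a room from Java with the Smack library, assuming a Smack 4.x style API; the domain, credentials, room JID and nickname are placeholders, not anything from the original question:

        import org.jivesoftware.smack.tcp.XMPPTCPConnection;
        import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
        import org.jivesoftware.smackx.muc.MultiUserChat;
        import org.jivesoftware.smackx.muc.MultiUserChatManager;
        import org.jxmpp.jid.impl.JidCreate;
        import org.jxmpp.jid.parts.Resourcepart;

        public class GroupChatSketch {
            public static void main(String[] args) throws Exception {
                // Connect and log in with a simple nickname-style account (placeholder values).
                XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                        .setXmppDomain("chat.example.com")
                        .setUsernameAndPassword("nickname", "secret")
                        .build();
                XMPPTCPConnection connection = new XMPPTCPConnection(config);
                connection.connect().login();

                // Join a multi-user chat room as described in XEP-0045.
                MultiUserChatManager mucManager = MultiUserChatManager.getInstanceFor(connection);
                MultiUserChat room = mucManager.getMultiUserChat(
                        JidCreate.entityBareFrom("myroom@conference.chat.example.com"));
                room.join(Resourcepart.from("nickname"));

                room.sendMessage("Hello, room!");
                connection.disconnect();
            }
        }

    Whether this fits the requirements depends mostly on the server side (member-only rooms, moderation and ownership are MUC room configuration), which is where the creator-controlled view/post rules would be enforced.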

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is the 27th episode of the Notes from the Field series. The biggest challenge for anyone is to understand human nature. We humans have so many things on our minds at any moment. There are cases when what we say is not what we mean, and there are cases where what we mean we do not say. We say and do things as per our mood and the agenda on our mind. Sometimes there are incidents when our attitude creates confusion in the communication and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times.
    In this week's note from the field, I'm taking a slight departure from technical knowledge and concepts explained. We'll be back to it next week, I'm sure. Pinal wanted us to explain some of the issues we bump into, how we see some of our customers arrive at problem situations, and how we have helped get them back on the right track. Often it is a technical problem we are officially solving – but in a lot of cases as a consultant, we are really helping fix some communication difficulties. This is a technical blog post and not an "advice column" in a newspaper – but the longer I am a consultant, and the more years I add to my experience in technology, the more I learn that the vast majority of the problems we encounter have "soft skills" included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive, but I hope that sharing a few pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and looking at fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I'll share three statements that I've either heard, read or said, and talk about some of the communication or attitude challenges highlighted by each statement.
    1 – "But that's the SAN Administrator's responsibility…" I heard that early on in my consulting career when talking with a customer who had serious corruption and no good recent backups – potentially no good backups at all. The statement doesn't have to be this one exactly, but the attitude here is an attitude of "my job stops here, and I don't care about the intent or principle of why I'm here." It's also a situation of having the attitude that as long as there is someone else to blame, I'm fine… You see, in this case the DBA had a suspicion that the backups were not being handled right. They were the DBA and they knew that they had responsibility to ensure SQL backups were good to go – it's a basic requirement of a production DBA. In my "As A DBA Where Do I Start?!" presentation, I argue that is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility, it was decided to just let it be another team's responsibility. This failed the company and the company's customers, and no one won. As technologists, we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it, we should push to get it resolved, especially as the DBAs who should act as the advocates of the data contained in the databases we are responsible for.
    2 – "We've always done it this way, it's never caused a problem before!" Complacency. I have to say that many failures I've been paid good money to help recover from would not have happened had it not been for an attitude of complacency. If any thoughts like this have entered your mind about your situation, you may be suffering from it. If, while reading this, you get that sinking feeling in your stomach about the one thing you know should be fixed but haven't fixed yet, why don't you stop, go fix it, and then come back? "We should have better backups, but we're on a SAN so we should be fine really." "Technically speaking that could happen, but what are the chances?" "We'll just clean that up as a fast follow" …and so on. In the age of tightening IT budgets and increased expectations of uptime, availability and performance, there is no room for complacency. Our customers and business units expect – no, demand – the best. Complacency says "we will give you second best or hopefully good enough, and we accept the risk and know this may hurt us later." Sometimes an organization will opt for "good enough", and I agree with the concept that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum and are not reporting them up and discussing them as an organization, that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance.
    3 – "This device must accept interference from other devices but not create any" I've paraphrased this one – but it's something the Federal Communications Commission – a federal agency in the United States that regulates electronic communication – requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/), so I won't go into much detail other than to say this: if we all operated more on the premise that we should do our best not to be the cause of conflict, and to be less easily offended and less upset when we perceive offense, life would be easier in many areas! This doesn't always cause the issues we are called in to help out with, not directly. But where we see it is in unhealthy relationships between the various technology teams at a client. We'll see teams hoarding knowledge, not sharing well with others and almost working against other teams instead of working with them. If you trace these problems back far enough, they often stem from someone or some group of people violating this principle from the FCC.
    To Sum It Up Technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenges – and at the end of the day it is really just a repeatable process of pattern-based troubleshooting, logical thinking, and starting at the beginning and carefully stepping through to the end. It's easy at the end of the day. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, being able to help teams take responsibility, to improve team-to-team communication? That is the difficult part, and we get to use the soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach.
    If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Microsoft Press Deal of the Day - 9/Jul/2012 - Windows® Communication Foundation 4 Step by Step

    - by TATWORTH
    Today's Deal of the Day from Microsoft Press at http://shop.oreilly.com/product/0790145302403.do?code=MSDEAL is Windows® Communication Foundation 4 Step by Step.
    "Teach yourself the essentials of Windows Communication Foundation (WCF) 4 -- one step at a time. With this practical, learn-by-doing tutorial, you get the clear guidance and hands-on examples you need to begin creating Web services for robust Windows-based business applications. Discover how to:
    - Build and host SOAP and REST services
    - Maintain service contracts and data contracts
    - Control configuration and communications programmatically
    - Implement message encryption, authentication, and authorization
    - Manage identity with Windows CardSpace
    - Begin working with Windows Workflow Foundation to create scalable and durable business services
    - Implement service discovery and message routing
    - Optimize performance with service throttling, encoding, and streaming
    - Integrate WCF services with ASP.NET clients and enterprise services components"
    Note the comment: Use code MSDEAL

    Read the article

  • Access Control Service: Protocol and Token Transition

    - by Your DisplayName here!
    ACS v2 supports a number of protocols (WS-Federation, WS-Trust, OpenId, OAuth 2 / WRAP) and a number of token types (SWT, SAML 1.1/2.0) – see Vittorio's infographic here. Some protocols are designed for active clients (WS-Trust, OAuth / WRAP) and some are designed for passive clients (WS-Federation, OpenID). One of the most obvious advantages of ACS is that it allows you to transition between the various protocols and token types. One example would be using WS-Federation/SAML between your application and ACS to sign in with a Google account. Google is using OpenId and non-SAML tokens, but ACS transitions into WS-Federation and sends back a SAML token. This way your application only needs to understand a single protocol, whereas ACS acts as a protocol bridge (see my ACS2 sample here). Another example would be the transformation of a SAML token to a SWT. This is achieved by using the WRAP endpoint – you send a SAML token (from a registered identity provider) to ACS, and ACS turns it into a SWT token for the requested relying party, e.g. (using the WrapClient from Thinktecture.IdentityModel):
        [TestMethod]
        public void GetClaimsSamlToSwt()
        {
            // get saml token from idp
            var samlToken = Helper.GetSamlIdentityTokenForAcs();
            // send to ACS for SWT conversion
            var swtToken = Helper.GetSimpleWebToken(samlToken);
            var client = new HttpClient(Constants.BaseUri);
            client.SetAccessToken(swtToken, WebClientTokenSchemes.OAuth);
            // call REST service with SWT
            var response = client.Get("wcf/client");
            Assert.AreEqual<HttpStatusCode>(HttpStatusCode.OK, response.StatusCode);
        }
    There are more protocol transitions possible – but they are not so obvious. A popular example would be how to call a REST/SOAP service using e.g. a LiveId login. In the next post I will show you how to approach that scenario.

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have an application logic (a program) running on a computer and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated. What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with or break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G. What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.). What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange
    The two execution contexts (threads) on the computer may be multi-core, or just interrupt driven, or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off
    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command (example: suspected state = "red_or_green", command = "switch_to_blue"). Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.
The com object could be separated of course (into two objects, one for each direction) but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out or hear an entirely different view on such a situation.
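    The question targets plain C without an RTOS framework, but purely as an illustration of the single exchange object described above, here is a rough sketch in Java; all names are hypothetical, and in C the same shape would be a struct plus a guarded single-slot mailbox (e.g. interrupts disabled or a mutex held around the swap):

        import java.util.EnumSet;
        import java.util.concurrent.atomic.AtomicReference;

        // One transaction between A and PG: everything travels together, so the
        // command is never separated from the (suspected) state it was issued against.
        final class LedTransaction {
            enum LedState { RED, GREEN, BLUE, OFF }
            enum Command  { TEST_IF_OFF, SWITCH_TO_RED, SWITCH_TO_GREEN, SWITCH_TO_BLUE }
            enum Status   { OPERATION_PENDING, SUCCESS, WRONG_STATE, LINK_BROKEN }

            final EnumSet<LedState> suspectedState; // written by A: member of the power set of G's states
            final Command command;                  // written by A
            final Status status;                    // written by PG
            final LedState newState;                // written by PG, meaningful once status != OPERATION_PENDING

            LedTransaction(EnumSet<LedState> suspectedState, Command command,
                           Status status, LedState newState) {
                this.suspectedState = suspectedState;
                this.command = command;
                this.status = status;
                this.newState = newState;
            }
        }

        // Single-slot "mailbox" that A and PG swap atomically.
        final class Mailbox {
            private final AtomicReference<LedTransaction> slot = new AtomicReference<>();

            // A posts a request only if the previous transaction has been collected.
            boolean post(LedTransaction request) { return slot.compareAndSet(null, request); }

            // PG reads the pending request, talks to G over the serial link, then publishes the result.
            LedTransaction pending() { return slot.get(); }
            void complete(LedTransaction done) { slot.set(done); }

            // A collects the result once it is no longer OPERATION_PENDING, freeing the slot.
            LedTransaction collect() {
                LedTransaction t = slot.get();
                if (t != null && t.status != LedTransaction.Status.OPERATION_PENDING) {
                    slot.set(null);
                    return t;
                }
                return null;
            }
        }

    The single slot keeps the command close to its result, as argued above, and the atomic swap is the only point where the two threads touch shared data.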

    Read the article

  • Apache Thrift: struct containing itself

    - by mamu
    I am looking into Thrift for serialization of data. But the documentation says: cyclic structs – "Structs can only contain structs that have been declared before it. A struct also cannot contain itself." One of our requirements is: Struct A contains a list of child Items, and those Items are themselves Struct A. So reading that requirement, I can't have a struct within itself at any level? Can I have it in the cyclic model I have above? The struct is not a member of itself directly, but it has another member which in turn contains the struct. Their documentation is not very descriptive. Is this possible in Thrift? Does protobuf support it?
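    On the last question: Protocol Buffers does allow a message type to contain itself (directly or through another message), so the parent/children shape above can be expressed as is; in Thrift, given the restriction quoted from its documentation, a common workaround is to flatten the tree into a list and reference children by id. A minimal .proto sketch, with field names that are only illustrative:

        message Item {
          required string name = 1;
          repeated Item children = 2;   // a message may contain itself in Protocol Buffers
        }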

    Read the article

  • SQL SERVER – Standards Support, Protocol, Data Portability – 3 Important SQL Server Documentations for Downloads

    - by pinaldave
    I have been working with SQL Server continuously for more than 8 years now, and I like to read a lot. Sometimes I read easy things and sometimes I read stuff that is not so easy. Here are a few recently released articles which I referred to and read. They are not easy reads, but they are indeed very important if you are the one who likes to read things which are more advanced.
    SQL Server Standards Support Documentation – The SQL Server standards support documentation provides detailed support information for certain standards that are implemented in Microsoft SQL Server.
    Microsoft SQL Server Protocol Documentation – The Microsoft SQL Server protocol documentation provides technical specifications for Microsoft proprietary protocols that are implemented and used in Microsoft SQL Server 2008.
    Microsoft SQL Server Data Portability Documentation – The SQL Server data portability documentation explains various mechanisms by which user-created data in SQL Server can be extracted for use in other software products. These mechanisms include import/export functionality, documented APIs, industry standard formats, or documented data structures/file formats.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • B2B communication using IBM MQ

    - by Dheeraj Kumar M
    Oracle B2B 11g provides the out-of-the-box ability to connect to IBM MQ to exchange messages. This support is provided via the JMS offering of Oracle B2B, and is an addition to the stack of existing communication capabilities of B2B with trading partners. There are 2 ways of connecting to IBM MQ using B2B:
    1. Credential based connectivity
    2. .bindings based connectivity
    As a pre-requisite to connect to IBM MQ, the following libraries are required on the classpath:
    a. com.ibm.mqjms.jar
    b. dhbcore.jar
    c. com.ibm.mq.jar
    d. com.ibm.mq.jmqi.jar
    e. mqcontext.jar
    f. com.ibm.mq.pcf.jar
    g. com.ibm.mq.commonservices.jar
    h. com.ibm.mq.headers.jar
    i. fscontext.jar
    j. jms.jar
    Add the above jars into the domain library directory, usually located at $DOMAIN_DIR/lib. The jars located in this ($DOMAIN_DIR/lib) directory will be picked up and added dynamically to the end of the server classpath at server startup. For eg. /user_projects/domains//lib/. Alternatively, the above jars can also be added as part of setDomainEnv.sh.
    Credential based connectivity:
    Outbound: Configure the trading partner delivery channel to use the "Generic JMS" protocol.
    Inbound: Configure the internal delivery channel to use the "Generic JMS" protocol with the following details:
    Parameter Name       | Description
    Destination Name     | MQ Queue Name
    Connection Factory   | MQ Queue Manager Name
    Destination Provider | java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=<host>:<QM Listen port>/<MQ Channel Name>;
    User Name            | MQ User Name
    Password             | MQ password
    .bindings based connectivity:
    As a pre-requisite, get/generate the .bindings file on the MQ server. This can be done by the MQ administrator. Set the following values in the respective delivery channel for outbound / inbound:
    Parameter Name       | Description
    Destination Name     | MQ Queue Name
    Connection Factory   | MQ Queue Manager Name
    Destination Provider | java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=file:///<location of .bindings file>;
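    As a rough illustration of what the Destination Provider string above means outside of B2B, here is a hedged Java sketch of the same JNDI properties used with plain JMS; the host, port, channel, lookup names, credentials and payload are placeholders and depend entirely on how the MQ administrator has exposed the administered objects:

        import java.util.Hashtable;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class MqJndiSketch {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                // Same values as the Destination Provider field in the B2B channel above.
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.ibm.mq.jms.context.WMQInitialContextFactory");
                env.put(Context.PROVIDER_URL, "mqhost:1414/SYSTEM.DEF.SVRCONN");
                Context ctx = new InitialContext(env);

                // Lookup names are placeholders; use whatever objects the MQ admin has bound.
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("MYQMGR");
                Queue queue = (Queue) ctx.lookup("MY.B2B.QUEUE");

                // Standard JMS from here on: connect, create a session, send a message.
                Connection connection = cf.createConnection("mqUser", "mqPassword");
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(queue).send(session.createTextMessage("<edi payload/>"));
                connection.close();
            }
        }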

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    WARNING Retrying Bulk Insert for file: sqlldr due to Communication Error: 256
    I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error.
    Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. In order to know more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the specific engine log that had the issue. The sqlnet.log file may also have some information about it, and on the database server side there may be some log/alert regarding what happened; look at the alert.log. In general, it could be that the database server/network was overloaded at the time and the connection was rejected/failed/aborted, either due to a specific setting on concurrent connections/sessions or inadvertently due to a glitch in the network/OS/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above.
    You can also track this using either SQL*Trace or java.util.logging.
    - Globally enable logging by setting the oracle.jdbc.Trace system property: java -Doracle.jdbc.Trace=true
    - Client side tracing: your SQLNET.ORA file should contain the following lines to produce a client side trace file:
      trace_level_client = 10
      trace_unique_client = on
      trace_file_client = sqlnet.trc
      trace_directory_client = <path_to_trace_dir>
    - Server side tracing: to enable server side tracing, use the following parameters:
      trace_level_server = 10
      trace_file_server = server.trc
      trace_directory_server = <path_to_trace_dir>
    Tracing levels: the following values can be used for the TRACE_LEVEL* parameters:
      16 or SUPPORT — WorldWide Customer Support trace information
      10 or ADMIN — Administration trace information
      4 or USER — User trace information
      0 or OFF — no tracing, the default
    Additional information is readily available via the web.

    Read the article

  • "The protocol 'net.msmq' is not supported."

    - by Randolpho St. John
    OMG, a new lesson! Will wonders never cease? So I ran into an interesting issue setting up a WCF service to consume an MSMQ queue. I won't bother you with the details of how to actually build a WCF/MSMQ service; there are plenty of tutorials on the subject. I want to share with you an interesting error that I ran into and the surprisingly simple fix. The error occurs when attempting to generate a Service Reference, or even simply browsing to the WSDL of your WCF/MSMQ service, in the form of a YSOD with the following error: "The protocol 'net.msmq' is not supported." A lot of Googling on the subject turned up plenty of questions with the same error but no answers. So I went digging into some application-level config files on a server that already had a WCF/MSMQ service successfully set up by the network admin, and the answer was amazingly simple: if you are hosting an MSMQ/WCF service in IIS, you have to tell IIS to allow the net.msmq protocol. It's in the advanced settings for the application or site in which you are hosting the service. .... aaaand, that's it.
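    For reference, the same IIS setting can also be scripted; a hedged example using appcmd, where the site and application names are placeholders:

        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyMsmqService" /enabledProtocols:http,net.msmq

    In addition, the site needs a net.msmq binding and the Net.Msmq Listener Adapter Windows service has to be running for activation to work.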

    Read the article

  • Is there a standard mapping between JSON and Protocol Buffers?

    - by Daniel Earwicker
    From a comment on the announcement blog post: Regarding JSON: JSON is structured similarly to Protocol Buffers, but protocol buffer binary format is still smaller and faster to encode. JSON makes a great text encoding for protocol buffers, though -- it's trivial to write an encoder/decoder that converts arbitrary protocol messages to and from JSON, using protobuf reflection. This is a good way to communicate with AJAX apps, since making the user download a full protobuf decoder when they visit your page might be too much. It may be trivial to cook up a mapping, but is there a single "obvious" mapping between the two that any two separate dev teams would naturally settle on? If two products supported PB data and could interoperate because they shared the same .proto spec, I wonder if they would still be able to interoperate if they independently introduced a JSON reflection of the same spec. There might be some arbitrary decisions to be made, e.g. should enum values be represented by a string (to be human-readable a la typical JSON) or by their integer value? So is there an established mapping, and any open source implementations for generating JSON encoder/decoders from .proto specs?
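    For what it's worth, later Protocol Buffers releases addressed exactly this: proto3 defines a canonical JSON mapping (enums rendered by name, 64-bit integers as strings, lowerCamelCase field names), and the stock libraries implement it, so two teams sharing a .proto spec get the same JSON. A small sketch with protobuf-java-util, where Person is assumed to be a class generated by protoc from your .proto spec (hypothetical here):

        import com.google.protobuf.util.JsonFormat;

        public class ProtoJsonSketch {
            public static void main(String[] args) throws Exception {
                // Person is a hypothetical generated message with name/id fields.
                Person person = Person.newBuilder()
                        .setName("Ada")
                        .setId(42)
                        .build();

                // Message -> JSON using the canonical proto3 JSON mapping.
                String json = JsonFormat.printer().print(person);

                // JSON -> message.
                Person.Builder builder = Person.newBuilder();
                JsonFormat.parser().merge(json, builder);
                Person roundTripped = builder.build();

                System.out.println(json);
                System.out.println(roundTripped.getName());
            }
        }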

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds. So the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • DRDA protocol specs

    - by Alon Rew
    When connecting to a server using the DRDA protocol, is it true that the first Client-To-Server command MUST be EXCSAT chained with ACCSEC? I found 2 different answers when I googled it. If you look at The Open Group web site (https://collaboration.opengroup.org/dbiop/) it can be understood that the answer is NO. However, if you look at the IBM website (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.ims11.doc.apr%2Fims_ddm_excsat.htm) you can understand the answer is YES. So which is it?

    Read the article

  • SQLAuthority News Microsoft SQL Server Protocol Documentation Download

    The Microsoft SQL Server protocol documentation provides detailed technical specifications for Microsoft proprietary protocols (including extensions to industry-standard or other published protocols) that are implemented and used in Microsoft SQL Server to interoperate or communicate with Microsoft products. The documentation includes a set of companion overview and reference documents that supplement [...]...

    Read the article

  • WebSocket Protocol Updated

    WebSocket is "TCP for the Web," a next-generation full-duplex communication technology for web applications being standardized as part of Web Applications 1.0. The WebSocket protocol is...

    Read the article

  • Thoughts and rambling on the X protocol

    jd:/dev/blog: "Two years ago, while working on awesome, I joined the Freedesktop initiative to work on XCB. I had to learn the arcana of the X11 protocol and all the mysterious old world that goes with it."

    Read the article

  • How to create a protocol at runtime in Objective-C?

    - by Jared P
    Hi. First of all, I want to be clear that I'm not talking about defining a protocol, and that I understand the concept of
    @protocol someprotocol
    - (void)method;
    @end
    I know that the Obj-C runtime allows creation of classes at RUNTIME, as well as their ivars and methods. Also available for creation are SELs. I think I'm just missing something, but does anyone know what function to call to create a protocol at runtime? The main reason for this is for conformsToProtocol: to work, so just adding the appropriate methods doesn't really cut it.

    Read the article

  • I need an efficient protocol between webservices that are more or less supported by all major langua

    - by corgrath
    Hey all. I am looking for a fast and efficient protocol that can be used between different web services to send text data (not binary data). It doesn't matter whether the protocol itself is binary or text based. Some conditions:
    - It has to be more "efficient" than normal XML, which adds a lot of extra data, and the tools to read/write it are too heavy.
    - It has to be "supported" by most major languages, meaning it cannot only be available for one specific language. At the moment, both Java and PHP have to be able to talk to each other using this protocol.
    I have already looked at:
    - XML – which I am currently using.
    - Hessian 2 – which works perfectly in Java, but the PHP support is out of date.
    - JSON – the difference between JSON and XML is only minor.
    Any suggestions are welcome!

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them a little bit, the remote desktop access to an azure virtual machine. Now I would like to talk about another cool piece – Windows Azure Connect.
    What's Windows Azure Connect
    I would like to quote the definition of Windows Azure Connect from MSDN: "With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services." There's an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect the Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some are not. With Windows Azure Connect, the roles deployed on the cloud can consume resources located inside our Intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.
    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates the Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and inside the local network. The table below lists the differences as I understand them.
    Category     | Windows Azure Connect                                              | Windows Azure AppFabric
    Purpose      | An IP-sec connection between the local machines and azure roles.   | An application service running on the cloud.
    Connectivity | IP-sec, domain-join                                                 | Net.Tcp, Http, Https
    Components   | Windows Azure Connect Driver                                        | Service Bus, Access Control, Caching
    Usage        | Azure roles connect to a local database server; azure roles use local shared files, folders and printers, etc.; azure roles join the local AD. | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.
    And also some scenarios for which of them should be used.
    Scenario | Connect | AppFabric
    I have a service deployed in the Intranet and I want people to be able to use it from the Internet. | | Y
    I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. | Y |
    I have a service deployed in the Intranet that uses AD authorization, and a website deployed on Azure which needs to use this service. | Y |
    I have a service deployed in the Intranet and some people on the Internet can use it, but they need to be authorized and authenticated. | | Y
    I have a service in the Intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. | Y | Y
    How to Enable Windows Azure Connect
    OK, we have talked a lot about Windows Azure Connect and its differences from the Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage, we have to apply before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page. You can send the application to join the Beta Program to Microsoft on this page. After a few days, Microsoft will send an email to you (to the email address of your Live ID) when it's available. In my case we can see that Windows Azure Connect had been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.
    Add a Local Machine to Azure Connect
    As explained above, Windows Azure Connect can make an IP-sec connection between local machines and azure role instances, so first we add a local machine into our Azure Connect. To do this we click the Install Local Endpoint button on top, and the portal will give us a URL. Copy this URL to the machine we want to add and it will download the software for us. This software is installed on the local machines which we want to join to the Connect. After installation, a tray icon appears to indicate that this machine has joined our Connect. The local application refreshes its status to the Windows Azure platform every 5 minutes, but we can click the Refresh button to let it retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see my machine in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.
    Add a Windows Azure Role to Azure Connect
    Let's create a very simple azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the azure project properties of this role from the Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click this button and the portal will display the token. We copy this token and paste it into the box in the Visual Studio tab. Then we deploy this application to azure. After completing the deployment we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.
    Establish the Connect Group
    The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create the group for them. The machines and instances can be used in one or more groups. In the Virtual Connect section, click the Groups and Roles node in the left side navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is to specify a group name and description, and then select the local computers and azure role instances into this group. After the Azure Fabric has updated the group setting we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to connected. Windows Azure Connect updates the group information every 5 minutes. If you find the status is still Disconnected, please right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connected.
    Test the Azure Connect between the Local Machine and the Azure Role Instance
    Now our local machine and azure role instance are connected. This means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our azure role can connect to it by using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances. You can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to give a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING. When a local machine and role instance are connected through Windows Azure Connect, we can PING either of them if we have opened the ICMP protocol in the Firewall settings. To do this we need to run a command line before the test. Open the command window on the local machine and the role instance, and execute the following command:
    netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6
    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me to enable the ICMPv6 setting. For the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the azure role instance; please refer to my previous blog post to get to know how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my azure instance. We can use the IPv6 address to PING each other as well. Like the following image, I PING my role instance from my local machine through the IPv6 address. Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then if we log on to the role instance we can see the folder content from the file explorer window.
    Summary
    In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way we can make our azure application use our local stuff such as database servers, printers, etc. without exposing them to the Internet.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Public Speaking in software development.

    - by mummey
    Greetings my fellow cubicle dwellers. I've found my role gradually change from "feature-maintainer" to "feature-developer". While much of the former would consist of fixing and/or updating an existing feature (and quietly grumbling about its implementation with complete naiveté), in this new role I find I:
    - Have to communicate with immediate management to define the development requirements to turn around the new feature
    - Have to communicate with design to determine the user requirements of the new feature
    - Have to communicate with QA to determine test sets for the new feature, as well as its current state during development
    - Have to communicate with producers/project-managers to define the remaining turnaround time as well as updates in development requirements
    - and finally, have to occasionally communicate with upper management to defend the new feature and demonstrate its minimized risk to the upcoming release.
    The last item is key here, and this took me a couple of occasions to completely realize. In all, though, it becomes very apparent that communication skills ARE important, even or especially for developers who feel they 'own' the feature they're working on. All of this said, I recognize its importance and would like to improve my skills in this area further. I enjoy one-on-one communication but find I tend to stutter a bit when speaking to any group larger than a few people I know well. Where can I find good resources to improve my own communication skills?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >