The gRPC (Remote Procedure Call) protocol is an open source, universal framework that enables services to connect to one another and exchange data. It's also used to connect client devices and applications to backend services. gRPC is built atop HTTP/2 and works at the application layer (Layer 7) of the OSI model, helping organizations quickly scale bi-directional streaming services using simple service definitions. It's also designed to work anywhere, regardless of platform or programming language.

You might've noticed that we didn't define the "g" within gRPC. Google originally developed the framework in 2015, and many people thus equated this letter with the company. However, this association has since faded. And while the gRPC project playfully gives the "g" a different meaning with each release, it's now widely interpreted as "general-purpose" or "generic."

Google has since handed control and maintenance of gRPC to the Cloud Native Computing Foundation (CNCF). A group of contributors helps steer the project's development, which is publicly available on GitHub.

What makes gRPC useful?

We know that gRPC helps connect clients to the services they're trying to access. Additionally, gRPC offers the following benefits:

  • A platform-agnostic design that works with any programming language

  • Plug-and-play support for functions such as load balancing, tracing, authentication, and health checking

  • Support for last-mile computing in distributed environments (effectively reaching the end user)

  • Bolstered mobile device support

  • Relatively simple configuration and setup

  • A highly efficient design that works well with modern applications, offering low latency and low bandwidth consumption

  • Protocol Buffer support for strong message typing and earlier error detection

  • Automated client and server code generation based on service definitions

Multiple organizations, such as Netflix, Square, and Cisco, have adopted gRPC since it debuted. The framework is generally highly performant, supports data streaming, and pairs well with microservices architectures that demand minimal latency.

However, gRPC does have some drawbacks. First, it can be hard to pinpoint and debug errors in gRPC-based applications. Second, gRPC isn't fully supported in all browsers, meaning that not all users (or developers) can leverage its complete feature set. Third, limited tooling and relatively little exposure to gRPC mean many teams are still figuring out how best to deploy and leverage gRPC services.

How does gRPC work?

A typical gRPC API starts with a service definition, written in a .proto file, that outlines the API's methods and data structures. Protocol Buffers, the format responsible for serializing this structured data, produces messages that are both smaller and faster to process than JSON while serving a similar purpose. These .proto files are fed into the protoc compiler and its language-specific generator plugins, which translate them into usable source code.
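
For example, given a hypothetical Greeter service with a single SayHello method, protoc with the Go plugins (protoc-gen-go and protoc-gen-go-grpc) emits code with roughly the following shape. The names are illustrative and the details are simplified; the real output contains additional helpers:

  package greeterpb

  import (
      "context"

      "google.golang.org/grpc"
  )

  // Generated message types with strongly typed fields (the real output also
  // includes getters and internal bookkeeping fields).
  type HelloRequest struct{ Name string }
  type HelloReply struct{ Message string }

  // Generated client stub: calling SayHello performs the RPC over a connection.
  type GreeterClient interface {
      SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
  }

  // Generated server interface: the backend implements this and registers it
  // with a gRPC server (the real output also emits registration helpers and an
  // UnimplementedGreeterServer base type).
  type GreeterServer interface {
      SayHello(context.Context, *HelloRequest) (*HelloReply, error)
  }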

The steps involved in setting up and using a gRPC-based application are as follows: 

  1. If you're making a new gRPC service, write the protobuf specification for the service.

  2. Set up a gRPC service on a backend written in one of many supported languages (a minimal Go sketch follows this list).

  3. Configure a proxy that supports HTTP/2 (and optionally HTTP/3) so that clients can reach the gRPC service configured in the previous step.
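
As a rough illustration of steps 1 and 2, here is a minimal Go backend serving the hypothetical Greeter service sketched earlier; the package path and port are placeholder assumptions:

  package main

  import (
      "context"
      "log"
      "net"

      "google.golang.org/grpc"

      pb "example.com/greeter/greeterpb" // hypothetical generated package
  )

  // server implements the generated GreeterServer interface.
  type server struct {
      pb.UnimplementedGreeterServer
  }

  func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
      return &pb.HelloReply{Message: "Hello, " + in.Name}, nil
  }

  func main() {
      // Listen on the port that clients (or the proxy from step 3) will reach.
      lis, err := net.Listen("tcp", ":50051")
      if err != nil {
          log.Fatal(err)
      }
      s := grpc.NewServer()
      pb.RegisterGreeterServer(s, &server{})
      log.Fatal(s.Serve(lis))
  }

In a real project, the pb package would be produced by protoc from the .proto specification written in step 1.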

Since the framework is built atop HTTP/2, gRPC works well with many load balancers and reverse proxies, and it even supports direct, client-side load balancing without external software. It supports a number of programming languages, with cross-language interoperability to improve usability, and runs on Linux, macOS, Windows, iOS, and Android. It's commonly used with microservices architectures, and it connects clients to their corresponding servers through channels, each established to a defined host and port.
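
For instance, a Go client might open a channel to a defined host and port and issue a call through the generated stub, roughly as follows (the address and the Greeter package are illustrative assumptions):

  package main

  import (
      "context"
      "log"

      "google.golang.org/grpc"
      "google.golang.org/grpc/credentials/insecure"

      pb "example.com/greeter/greeterpb" // hypothetical generated package
  )

  func main() {
      // The channel targets a specific host:port; real deployments would use TLS
      // credentials instead of the insecure option.
      conn, err := grpc.Dial("localhost:50051",
          grpc.WithTransportCredentials(insecure.NewCredentials()))
      if err != nil {
          log.Fatal(err)
      }
      defer conn.Close()

      // Issue a call through the generated stub.
      reply, err := pb.NewGreeterClient(conn).SayHello(context.Background(),
          &pb.HelloRequest{Name: "world"})
      if err != nil {
          log.Fatal(err)
      }
      log.Println(reply.Message)
  }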

However, not all flavors of gRPC are identical. First, the gRPC framework supports four different kinds of service methods, depending on your needs:

  1. Unary RPCs – The client sends a single request and receives a single response back from the server, much like an ordinary function call.

  2. Server-streaming RPCs – The client sends one request and receives back a stream of messages, which it reads in order until there are no more messages and the stream closes.

  3. Client-streaming RPCs – The client writes a sequence of messages to the server, waits for the server to read them all, and then receives a single response.

  4. Bidirectional streaming RPCs – The client and server exchange messages over a read-write stream in both directions. The two streams are independent, so each side can read and write in whatever order the underlying application requires (see the sketch after this list).
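
To make the bidirectional case concrete, here is a minimal Go sketch that assumes a hypothetical Echo service whose BidiEcho method streams messages in both directions; all names are illustrative:

  package main

  import (
      "context"
      "io"
      "log"

      "google.golang.org/grpc"
      "google.golang.org/grpc/credentials/insecure"

      pb "example.com/echo/echopb" // hypothetical generated package
  )

  func main() {
      conn, err := grpc.Dial("localhost:50051",
          grpc.WithTransportCredentials(insecure.NewCredentials()))
      if err != nil {
          log.Fatal(err)
      }
      defer conn.Close()

      // Open a bidirectional stream; sends and receives are independent.
      stream, err := pb.NewEchoClient(conn).BidiEcho(context.Background())
      if err != nil {
          log.Fatal(err)
      }

      // Write on one goroutine...
      go func() {
          for _, msg := range []string{"one", "two", "three"} {
              if err := stream.Send(&pb.EchoRequest{Message: msg}); err != nil {
                  return
              }
          }
          stream.CloseSend() // signal that the client is done sending
      }()

      // ...and read on another, until the server closes its side.
      for {
          reply, err := stream.Recv()
          if err == io.EOF {
              break
          }
          if err != nil {
              log.Fatal(err)
          }
          log.Println("received:", reply.Message)
      }
  }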

Second, gRPC APIs can be used either synchronously (blocking until a response arrives) or asynchronously; the latter is important for taking full advantage of asynchronous networks without blocking threads. Third, message ordering is guaranteed within an individual RPC, no matter which type of RPC is used. Finally, either the client or the server can cancel an in-flight call at any time, although any work completed before the cancellation isn't rolled back.
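
As a quick illustration of cancellation, a Go client can abort an in-flight call through its context, roughly as follows (the Greeter service is the hypothetical one used above):

  package main

  import (
      "context"
      "log"
      "time"

      "google.golang.org/grpc"
      "google.golang.org/grpc/credentials/insecure"
      "google.golang.org/grpc/status"

      pb "example.com/greeter/greeterpb" // hypothetical generated package
  )

  func main() {
      conn, err := grpc.Dial("localhost:50051",
          grpc.WithTransportCredentials(insecure.NewCredentials()))
      if err != nil {
          log.Fatal(err)
      }
      defer conn.Close()

      // Cancelling the context aborts the in-flight RPC; a deadline set with
      // context.WithTimeout behaves the same way.
      ctx, cancel := context.WithCancel(context.Background())
      go func() {
          time.Sleep(100 * time.Millisecond) // give up quickly, for demonstration
          cancel()
      }()

      _, err = pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "world"})
      if st, ok := status.FromError(err); ok {
          log.Println("call ended with status:", st.Code()) // e.g. Canceled
      }
  }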

Does HAProxy support gRPC?

Yes! HAProxy products support Layer 7 (application layer) routing and load balancing for gRPC calls between services—including support for Protocol Buffers and the ungrpc converter for extracting information from messages. 
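
As a rough sketch, an HAProxy configuration that terminates TLS and load balances gRPC traffic over HTTP/2 might look something like the following; the addresses, certificate path, and backend ports are placeholders:

  frontend fe_grpc
      mode http
      bind :443 ssl crt /etc/haproxy/certs/example.pem alpn h2
      default_backend be_grpc

  backend be_grpc
      mode http
      balance roundrobin
      server grpc1 10.0.0.11:50051 check proto h2
      server grpc2 10.0.0.12:50051 check proto h2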

To learn more about gRPC support in HAProxy, check out our gRPC documentation or Your Comprehensive Guide to HAProxy Protocol Support.