Building a CRUD API with Python and GraphQL: A Comprehensive Guide


GraphQL provides an elegant alternative to REST for developing flexible APIs. Combined with Python on the backend, you can rapidly create full-featured CRUD APIs for your applications.

In this comprehensive tutorial, we’ll walk through building an example CRUD (Create, Read, Update, Delete) API with:

  • Python for the backend implementation
  • Graphene GraphQL for the API layer
  • SQLAlchemy for interacting with a PostgreSQL database
  • Docker to containerize the app

We’ll see how GraphQL simplifies creating complex, nested API queries compared to REST. The entire codebase will be available on GitHub for reference.

Follow along to learn how Python and GraphQL provide the perfect stack for crafting extensible backends.

Overview of GraphQL

GraphQL is a query language created by Facebook to streamline client-server interactions. Some key features:

  • Structured queries – Clients can request nested data in a single call rather than multiple REST endpoints.
  • Strongly typed schema – The GraphQL schema defines an API contract between client and server.
  • Client-specified queries – Client queries determine the shape and volume of data returned.
  • Runtime introspection – Developers can interactively explore the entire schema.
  • Built-in documentation – The schema generates interactive API documentation.

Together, this allows flexible yet controlled access to complex data requirements.
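As a quick illustration (using the author/post fields from the blog schema we build later in this tutorial), a single nested query fetches each author together with their posts in one round-trip, where REST would typically need one call per resource:

```graphql
query {
  allAuthors {
    name
    posts {
      title
      published
    }
  }
}
```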

Our Python server will leverage the Graphene library to implement a GraphQL schema connecting to SQLAlchemy models.

Python and GraphQL Project Overview

We’ll be building an API for a simple blogging application with authors writing posts. The data model contains:

  • Author: Represents a blog author
    • id: Primary key
    • name: Author’s name
    • email: Author’s email
  • Post: Represents a blog post
    • id: Primary key
    • title: Post title
    • content: Post content
    • published: Post publication status
    • author_id: Foreign key to Author

This will allow CRUD operations for authors and posts including:

  • Creating new authors and posts
  • Querying all authors/posts
  • Getting a single author/post by ID
  • Updating authors/posts
  • Deleting authors/posts

We’ll implement these as GraphQL queries and mutations. Now let’s get coding!

Project Setup

We’ll use Python 3 alongside Graphene, SQLAlchemy, and Docker. Clone the project repo to follow along.

Install dependencies:

pip install graphene sqlalchemy psycopg2-binary graphene-sqlalchemy

This covers:

  • graphene – GraphQL framework
  • sqlalchemy – Database ORM
  • psycopg2-binary – PostgreSQL driver
  • graphene-sqlalchemy – Graphene/SQLAlchemy integration

Next, install Docker to spin up our PostgreSQL database.

With dependencies installed, we’re ready to start building!

Modeling the Schema

GraphQL is driven by schemas which define the API’s structure. Let’s start by modeling our author and post entities:


from sqlalchemy import Column, String, Integer, Boolean, ForeignKey
from sqlalchemy.orm import relationship

from .database import Base

class Author(Base):

    __tablename__ = "authors"

    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)
    posts = relationship("Post", back_populates="author")

class Post(Base):

    __tablename__ = "posts" 

    id = Column(Integer, primary_key=True)
    title = Column(String)
    content = Column(String)
    published = Column(Boolean)
    author_id = Column(Integer, ForeignKey("authors.id"))

    author = relationship("Author", back_populates="posts")

This uses SQLAlchemy’s declarative style to define two models – Author and Post. Each table is represented as a Python class with columns as attributes.

The relationship() calls define the one-to-many link from authors to posts, giving each author a posts collection and each post an author reference.

This will generate corresponding database tables to persist the models.
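To see that table generation concretely, here is a minimal, self-contained sketch. It swaps Postgres for an in-memory SQLite database so it runs anywhere, and trims the models down to a few columns; the real app uses the Base and engine from its database module instead:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Author(Base):
    __tablename__ = "authors"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    posts = relationship("Post", back_populates="author")

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("authors.id"))
    author = relationship("Author", back_populates="posts")

# Emit CREATE TABLE statements for every model registered on Base
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

print(sorted(inspect(engine).get_table_names()))  # ['authors', 'posts']
```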

Setting Up Graphene GraphQL

Next we’ll configure Graphene which provides GraphQL bindings for our models:


import graphene
from graphene_sqlalchemy import SQLAlchemyObjectType

from .models import Author as AuthorModel, Post as PostModel

# Create Types for Author and Post
class Author(SQLAlchemyObjectType):
    class Meta:
        model = AuthorModel

class Post(SQLAlchemyObjectType):
    class Meta:
        model = PostModel

# Query and Mutation used to execute GraphQL operations
class Query(graphene.ObjectType):
    pass  # query fields are filled in below

class Mutation(graphene.ObjectType):
    pass  # mutation fields are filled in below

schema = graphene.Schema(query=Query, mutation=Mutation)

This creates a GraphQLObjectType for each model using SQLAlchemyObjectType. We add dummy Query and Mutation classes for now. Finally we create a Schema tying together the models with query/mutation capabilities.

Spinning Up PostgreSQL with Docker

For the database, we’ll use PostgreSQL running in a Docker container:

docker run --name crud_postgres -e POSTGRES_PASSWORD=pass123 -p 5432:5432 -d postgres

This launches a container named crud_postgres with a sample password and default PostgreSQL configuration. Our app will connect to it shortly.

Connecting to the Database

With PostgreSQL up, let’s connect our SQLAlchemy models:


from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import sessionmaker

# Postgres credentials
POSTGRES_URL = "postgresql://postgres:pass123@localhost:5432/postgres" 

engine = create_engine(POSTGRES_URL)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()

We define the Postgres connection URL with credentials and connect SQLAlchemy to it using create_engine(). A SessionLocal is defined to initiate database sessions. Finally, a Base class is created for the model declarations we defined earlier.

With this wiring, our code can now interact with the Postgres database!
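As a quick sanity check of this wiring, the sketch below exercises a SessionLocal built the same way, again substituting an in-memory SQLite database for Postgres so it is self-contained:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# Stand-in for the Postgres URL so the sketch runs anywhere
engine = create_engine("sqlite://")
SessionLocal = sessionmaker(autoflush=False, bind=engine)
Base = declarative_base()

class Author(Base):
    __tablename__ = "authors"
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)

# Writes are only persisted once the session commits
session = SessionLocal()
session.add(Author(name="Ada"))
session.commit()

count = session.query(Author).count()
session.close()
print(count)  # 1
```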

GraphQL Queries for Reading Data

GraphQL queries allow clients to request exactly the data they need. Let’s implement some read queries for author and post data.

First, we need resolvers that map schema fields to database lookups:


from .database import SessionLocal
from .models import Author, Post

def resolve_all_authors(parent, info):
    session = SessionLocal()
    return session.query(Author).all()

def resolve_author_by_id(parent, info, id):
    session = SessionLocal()
    return session.query(Author).get(id)

# ... Post resolvers

These resolver functions back the allAuthors and authorById root query fields; we attach them to the Query class (as resolve_all_authors and resolve_author_by_id) so clients can request them.

With this wiring, clients can query data like:

query {
  allAuthors {
    id
    name
    email
  }
}

query {
  authorById(id: 1) {
    name
    email
  }
}

The GraphQL layer translates these into efficient SQL via SQLAlchemy!

Mutations for Changing Data

Along with queries, we need mutations to make changes by creating/updating/deleting records.

Mutations follow a similar resolver pattern:

def create_author(parent, info, name, email):
    session = SessionLocal()
    author = AuthorModel(name=name, email=email)
    session.add(author)
    session.commit()
    return author

class Mutation(graphene.ObjectType):
    create_author = graphene.Field(
        Author,
        name=graphene.String(required=True),
        email=graphene.String(required=True),
    )

    # Other mutations

    def resolve_create_author(parent, info, name, email):
        return create_author(parent, info, name, email)

The resolve_* method calls the resolver function to persist changes and return results.

This enables GraphQL mutations like:

mutation {
  createAuthor(name: "John Doe", email: "") {
    id
    name
  }
}

Similarly, we can build resolvers and mutations for updating, deleting, and creating posts.
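For instance, an update mutation (assuming a hypothetical updateAuthor field built the same way as createAuthor above) would look like this from the client's side:

```graphql
mutation {
  updateAuthor(id: 1, name: "Jane Doe") {
    id
    name
  }
}
```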

Trying the GraphQL API

Our GraphQL schema is complete! Let’s take it for a spin:

  1. Run the app: python app.py
  2. Navigate to http://localhost:5000/graphql

This opens the GraphQL Playground – an interactive editor for testing queries and mutations. For example, creating an author:

mutation {
  createAuthor(name: "John", email: "") {
    id
    name
  }
}

And fetching all authors:

query {
  allAuthors {
    id
    name
    email
  }
}

The API allows flexible access and returns only what’s requested.

Deploying with Docker Compose

For production deployment, we can containerize both the Python application and Postgres database using docker-compose:

# docker-compose.yml
version: "3.8"

services:
  postgres:
    image: postgres:14.4
    environment:
      - POSTGRES_PASSWORD=pass123
    ports:
      - '5432:5432'

  server:
    build: ./server
    command: python app.py
    ports:
      - '5000:5000'
    depends_on:
      - postgres

Bring up the stack:

docker-compose up

We now have a fully containerized GraphQL API serving persisted data!


Conclusion

In this tutorial, we built an end-to-end CRUD API using:

  • GraphQL for the API layer with Graphene
  • SQLAlchemy for database ORM
  • PostgreSQL for persistence
  • Docker for containerizing it

GraphQL’s flexible, client-specified queries unlock efficient APIs without over- or under-fetching. Combined with Python’s vast ecosystem, it makes an ideal stack for building robust backends.

There is much more you can do by adding pagination, authentication, data loaders, and combining GraphQL with REST. I hope this tutorial provided a solid foundation for your Python and GraphQL journey!

Frequently Asked Questions

Here are some common questions about building APIs with Python and GraphQL:

Q: Does GraphQL replace REST?

A: Not necessarily. GraphQL is an alternative to REST with its own pros and cons. Many projects integrate both GraphQL and REST.

Q: What are some benefits of GraphQL over REST?

A: Nested data queries, only requesting needed fields, strong typing, runtime introspection, and automatic documentation.

Q: How do you handle authentication with GraphQL?

A: Typically by integrating JWT authentication using headers or context middleware. Many third-party auth libraries are available.

Q: Can I use Apollo Client with Python GraphQL backends?

A: Yes, Apollo works nicely with a Python GraphQL server to handle caching, state management etc. on the client.

Q: Does GraphQL have any limitations compared to REST?

A: Some challenges include caching, older browser support, and complexity for simple use cases. REST is often simpler.

Q: What Python frameworks work well for GraphQL projects?

A: Graphene is the most popular. Flask and Django provide nice web app integration. There are many other good options as well.

Q: Is GraphQL only for serving web clients?

A: No, GraphQL can also be used to build flexible internal APIs consumed by servers rather than browsers.
