Two years ago, when defining the test architecture for a new project, I evaluated the available API testing tools. Postman was the obvious industry default, and I had some experience with it from postgraduate studies and from manually verifying requests in previous roles. The new project, however, required a robust automation strategy rather than occasional manual checks.

I rejected Postman primarily due to its architectural shift toward mandatory cloud synchronization and user authentication. While the company could have covered the costs of Enterprise licenses, the workflow constraints imposed by cloud-synced collections were suboptimal for our collaborative needs. I briefly considered Insomnia, but at that time, it was also enforcing a similar login-walled cloud model.

My research led me to Bruno. Although it was still in beta at the time, it offered the core functionality we needed, demonstrated stability, and had an active GitHub community behind it. Its offline-first approach convinced me to adopt it.

What is Bruno

Bruno is an open-source API client available for macOS, Windows, and Linux. It supports both REST and GraphQL architectures. The application operates on an “offline-first” principle.

Unlike platform-based solutions that default to storing collections in a proprietary cloud or require account creation, Bruno writes collections directly to the local folder structure. This design choice aligns the testing workflow with standard software development practices.
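To make this concrete, a collection on disk is just an ordinary folder. The layout below is an illustrative sketch (the collection and request names are hypothetical); `bruno.json` is the manifest file Bruno creates at the collection root:

```
payments-collection/
├── bruno.json            # collection manifest
├── environments/
│   ├── Staging.bru
│   └── Production.bru
├── Create Product.bru    # one file per request
└── Get Product.bru
```

Because everything is a plain file, the collection can be cloned, branched, and reviewed like any other source code.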

Git-Native Collaboration

The fundamental difference between Bruno and tools like Postman lies in version control management. Traditional clients often export collections as monolithic JSON files, which leads to merge conflicts when multiple engineers modify the same collection.

Bruno utilizes a custom DSL (Domain Specific Language) to save each request as an individual text file with a .bru extension. Starting with version 3.0.0, the tool also supports the OpenCollection YAML format (an open specification created by Bruno for defining executable API collections), which is even more readable. However, my project currently retains the .bru format, pending the release of an automated migration tool for existing collections.

The .bru File Structure

A standard request file contains metadata, the request definition, pre-request, post-response, assertions, and tests in a readable format:

meta {
  name: Create Product
  type: http
  seq: 1
}

post {
  url: https://fakestoreapi.com/products
  body: json
  auth: none
}

body:json {
  {
      "title": "test help",
      "price": 13.5,
      "description": "lorem ipsum se",
      "image": "https://i.pravatar.cc",
      "category": "electronic"
  }
}

script:post-response {
  const response = res.getBody();
  
  bru.setGlobalEnvVar("productId", response.id);
  bru.setEnvVar("price", response.price);
}

tests {
  const response = res.getBody();
  
  test("should create a new product", function () {
    expect(res.getStatus()).to.equal(201);
  });
  
  test("should return product data", function () {
    expect(response.title).to.equal("test help");
    expect(response.description).to.equal("lorem ipsum se");
    expect(response.price).to.equal(13.5);
    expect(response.image).to.equal("https://i.pravatar.cc");
    expect(response.category).to.equal("electronic");
  });
}

This structure enables:

  • Granular diffs: Code reviews show specific changes to parameters or logic.
  • Readability: Files are much easier to read than exported Postman JSON collections.
  • Conflict resolution: Standard Git tools handle merge conflicts at the file level.
  • Version Control Integration: Tests reside in any Git-based repository (GitHub, GitLab, Bitbucket), enabling seamless execution within automation pipelines.

CLI and Automation Capabilities

The Bruno CLI (the `bru` command, distributed as the `@usebruno/cli` npm package) enables test execution within any CI/CD pipeline, with arguments for selecting the target environment (e.g., Staging, Production) and for running specific collections or folders.
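The environments themselves are also plain text files, stored in the collection's environments folder. A minimal sketch (the variable names and URL below are placeholders, not from our project):

```
vars {
  baseUrl: https://staging.example.com
  apiVersion: v1
}
```

Requests then reference these values with the usual `{{baseUrl}}` placeholder syntax, so switching environments is just a matter of the `--env` argument.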

For reporting, the CLI supports exporting results in JSON, HTML, and JUnit XML formats. The JUnit output adheres to standard schemas, making it compatible with visualization platforms such as Allure. This allows for the generation of comprehensive test reports and historical trend analysis without additional parsers.

The following example demonstrates running a collection and generating a JUnit report:

bru run ./tests/payments-collection --env Production --reporter-junit results.xml
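Wired into a pipeline, this becomes a single job. The following is a minimal GitLab CI sketch under assumed conventions (the image tag, stage name, and collection path are placeholders); the `-r` flag runs the collection's subfolders recursively:

```yaml
api-tests:
  stage: test
  image: node:20
  script:
    - npm install -g @usebruno/cli
    - bru run ./tests/payments-collection -r --env Staging --reporter-junit results.xml
  artifacts:
    when: always            # publish results even when tests fail
    reports:
      junit: results.xml    # GitLab renders this in the MR widget
```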

Test Automation with JavaScript and npm Packages

Bruno supports scripting with JavaScript and npm packages, enabling complex test setups involving data generation or multi-step verification flows. As demonstrated in the example above, the assertion layer relies on the Chai library, aligning with industry standards. This extensive compatibility with the JavaScript ecosystem significantly facilitates the automation of test scenarios.
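As a sketch of what this enables: a small helper module for generating unique test payloads can live next to the collection and be loaded from a pre-request script with `require()`. The helper below is hypothetical (not part of Bruno's API); inside Bruno, its result would typically be stored with `bru.setVar` for the request body to consume:

```javascript
// buildProduct: hypothetical helper for generating test payloads.
// Avoids hardcoded data so repeated runs don't collide on identical titles.
function buildProduct(overrides = {}) {
  const suffix = Date.now().toString(36); // crude uniqueness without npm deps
  return {
    title: `test product ${suffix}`,
    price: 13.5,
    description: "generated by pre-request script",
    category: "electronic",
    ...overrides, // caller-supplied fields take precedence over defaults
  };
}

// Two calls share defaults, but the second overrides the price.
const a = buildProduct();
const b = buildProduct({ price: 99 });
console.log(a.category, b.price); // prints: electronic 99
```

In a pre-request script this pattern keeps data generation out of the request files themselves, which keeps the diffs reviewers see small and focused.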

Conclusion

After two years of production use, the tool has fully met our key requirements. The most critical aspect—streamlining the review of tests and requests during the Merge Request process—has proven invaluable as our project approaches 200 MRs. The clear text-based diffs, combined with reliable execution in CI/CD pipelines and the ease of maintaining test logic, confirm that Bruno is a robust automation tool.

I plan to share more practical tips and patterns for testing with Bruno in future posts. It provides exactly what was missing in the market: a transparent, Git-native alternative to Postman that respects the developer’s need for local file management.