# Working with Large APIs
Some APIs have fairly large schemas, and this introduces some performance challenges for cynic. Runtime performance should be unaffected, but it can lead to extended compile times and make rust-analyzer less responsive than it would otherwise be.

There are several tricks that can help with this though:
## Registering Schemas with `rkyv`
If you're not already, you should be pre-registering your schema. You should also enable the `rkyv` feature flag of `cynic-codegen`. This allows the pre-registration to store schemas in an optimised format, which avoids a lot of comparatively slow parsing.
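For example, the schema crate's `Cargo.toml` might enable the flag like this (the version number here is illustrative; pin whichever `cynic-codegen` release you actually use):

```toml
# Build-time dependency used by build.rs to pre-register the schema.
[build-dependencies]
cynic-codegen = { version = "3", features = ["rkyv"] }
```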
## Splitting Crates
Definitely consider moving your schema module into a separate crate. These modules contain a lot of generated code and are quite expensive to compile. Moving them to a separate crate should reduce the chance that unrelated changes cause it to recompile. Note that you'll need to register your schema in both the schema module crate and any crate where you use the cynic derives.
You can also consider moving your query structs into their own crate, for reasons similar to the above. It may be worth testing whether this actually helps, though: with `rkyv` turned on these shouldn't be too slow to compile, but it really depends on how many of them you have.
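A workspace split along these lines might declare its members like so (a sketch; the crate names match the example workspace below):

```toml
# Workspace root Cargo.toml: each member compiles independently,
# so schema changes don't force the whole application to rebuild.
[workspace]
members = ["api", "query", "schema"]
```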
## Example Workspace Setup
All subheadings are clickable and lead to executable Rust crates that follow the corresponding snippets.
```text
$ tree . -I target -I Cargo.lock
.                <-- workspace root
├── Cargo.toml
├── README.md
├── api          <-- uses query structs
│   ├── Cargo.toml
│   └── src
│       └── main.rs
├── query        <-- define query structs and check against schema
│   ├── Cargo.toml
│   ├── build.rs
│   └── src
│       └── lib.rs
└── schema       <-- generate schema structs
    ├── Cargo.toml
    ├── build.rs
    └── src
        └── lib.rs
```
### Export schema generated by `#[cynic::schema(...)]`
`cynic-codegen` is used in the `build.rs` to register the schema. Make sure to enable the `rkyv` feature flag for faster compilation!
```rust
/// Register github schema for creating definitions
fn main() {
    cynic_codegen::register_schema("github")
        .from_sdl_file("../../schemas/github.graphql")
        .expect("Failed to find GraphQL Schema");
}
```
The next step is to generate and export the large GitHub GraphQL schema from `lib.rs` as a reusable module. The module can be named however you like at this point, but either `schema` or an identifier related to the registered schema is recommended.
```rust
/// Cache and export large github schema, as it is expensive to recreate.
#[cynic::schema("github")]
pub mod github {}
```
### Deriving queries with `#[derive(cynic::QueryFragment)]`
Queries require metadata generated by `cynic_codegen::register_schema`, so the same incantation used in `schema/build.rs` must be repeated in `query/build.rs`.
```rust
/// Register github schema for creating structs for queries
fn main() {
    cynic_codegen::register_schema("github")
        .from_sdl_file("../../schemas/github.graphql")
        .expect("Failed to find GraphQL Schema");
}
```
Next, create a `lib.rs` file within the `query` crate that begins with the following lines.
```rust
use cynic; // Import for derive macros
use schema::github as schema; // Rename is vital! Must import as schema!
```
As indicated by the second comment, when importing the codegenned schema from its crate, the module must be named `schema`, otherwise the upcoming `#[derive(cynic::QueryFragment)]` macro applications will fail. The remainder of this file contains the exportable structs as output by `cynic-querygen`.
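For illustration, one of those generated structs might look like the following sketch. This is a hypothetical excerpt, not real `cynic-querygen` output: the actual structs depend entirely on the query you feed to querygen, and this fragment only compiles inside the `query` crate alongside the imports above.

```rust
/// Hypothetical fragment selecting a repository's name.
/// Field names must match the GitHub schema exactly, since the derive
/// checks them against the imported `schema` module at compile time.
#[derive(cynic::QueryFragment, Debug)]
pub struct Repository {
    pub name: String,
}
```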
### Sending GraphQL queries
Now that both the schema and queries are cached as separate crates, an application can make use of them by depending on the `query` crate to run queries.
```rust
//! Safely run queries using generated query objects,
//! which have been checked against the underlying schema.
//! We do not need to codegen again.
use query::*;

fn main() {
    let result = run_query();
    println!("{:#?}", result);
}

fn run_query() -> cynic::GraphQlResponse<PullRequestTitles> {
    // Build and send the query here (body elided).
    todo!()
}
```