updating edits
idalithb committed Nov 5, 2024
1 parent 0b1e690 commit 21cb3ff
Showing 4 changed files with 128 additions and 18 deletions.
124 changes: 117 additions & 7 deletions website/pages/en/developing/creating-a-subgraph/advanced.mdx
@@ -2,7 +2,9 @@
title: Advanced Subgraph Features
---

## Overview

Learn about and implement advanced subgraph features to enhance your subgraph's build.

Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:

@@ -23,7 +25,115 @@ features:
dataSources: ...
```
> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.

## Timeseries and Aggregations
Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc.
This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
### Example Schema
```graphql
type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  price: BigDecimal!
}

type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```

### Defining Timeseries and Aggregations

Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the `Int8` type, a timestamp of the `Timestamp` type, and include data that will be used for calculation by aggregation entities. These timeseries entities can be saved in regular trigger handlers, and they act as the “raw data” for the aggregation entities.

Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.

#### Available Aggregation Intervals

- `hour`: sets the timeseries period every hour, on the hour.
- `day`: sets the timeseries period every day, starting and ending at 00:00.

#### Available Aggregation Functions

- `sum`: Total of all values.
- `count`: Number of values.
- `min`: Minimum value.
- `max`: Maximum value.
- `first`: First value in the period.
- `last`: Last value in the period.

#### Example Aggregations Query

```graphql
{
  stats(interval: "hour", where: { timestamp_gt: 1704085200 }) {
    id
    timestamp
    sum
  }
}
```

> Note: To use Timeseries and Aggregations, a subgraph must have a spec version ≥ 1.1.0. Be aware that this feature might undergo significant changes that could affect backward compatibility.

[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations.

## Non-fatal errors

Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.

> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.

Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:

```yaml
specVersion: 0.0.4
description: Gravatar for Ethereum
features:
- nonFatalErrors
...
```

The query must also opt in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:

```graphql
foos(first: 100, subgraphError: allow) {
  id
}

_meta {
  hasIndexingErrors
}
```

If the subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:

```graphql
"data": {
"foos": [
{
"id": "0xdead"
}
],
"_meta": {
"hasIndexingErrors": true
}
},
"errors": [
{
"message": "indexing_error"
}
]
```
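
On the client side, it can help to check `_meta.hasIndexingErrors` along with the returned data. Below is a minimal TypeScript sketch of one way to do this; the endpoint URL and query are placeholders, not a real deployment.

```typescript
// Query a subgraph endpoint and surface potential indexing errors.
// The endpoint and query below are placeholders for illustration only.
const ENDPOINT = "https://api.example.com/subgraphs/name/example/foo";

const QUERY = `{
  foos(first: 100, subgraphError: allow) { id }
  _meta { hasIndexingErrors }
}`;

async function fetchFoos(): Promise<void> {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: QUERY }),
  });
  const result = await response.json();

  // With `subgraphError: allow`, data and an "indexing_error" GraphQL error
  // can be returned together, so inspect both.
  if (result.data?._meta?.hasIndexingErrors) {
    console.warn("Subgraph has skipped over indexing errors; data may be inconsistent.");
  }
  if (result.errors) {
    console.warn("GraphQL errors:", result.errors.map((e: { message: string }) => e.message));
  }

  console.log("foos:", result.data?.foos ?? []);
}

fetchFoos().catch(console.error);
```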

## IPFS/Arweave File Data Sources

@@ -243,7 +353,7 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings

[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)

## Indexed Argument Filters / Topic Filters

> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`

@@ -253,7 +363,7 @@ Topic filters, also known as indexed argument filters, are a powerful feature in

- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.

### How Topic Filters Work

When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.

@@ -299,7 +409,7 @@ In this setup:
- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third.
- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic.

#### Filter Logic

- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic.
- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler.
@@ -336,7 +446,7 @@ In this configuration:
- `topic2` is configured to filter `Transfer` events where either `0xAddressB` or `0xAddressC` is the receiver.
- The subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.

## Declared eth_call

> Note: This is an experimental feature that is not yet available in a stable Graph Node release. You can only use it in Subgraph Studio or your self-hosted node.

Expand All @@ -348,7 +458,7 @@ This feature does the following:
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.

### Key Concepts

- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially.
- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously.
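
For a sense of what this looks like from the mapping side, here is a sketch assuming generated bindings for a hypothetical `ERC20` data source and a `Holder` entity; the names and fields are illustrative, not from this page. The handler code is ordinary `eth_call` usage; declaring the same calls for this handler in the manifest is what allows graph-node to execute them ahead of time and in parallel.

```typescript
// Sketch only: `ERC20`, `Transfer`, and `Holder` are assumed to be generated
// from a hypothetical ABI and schema; they are not defined on this page.
import { ERC20, Transfer } from "../generated/ERC20/ERC20";
import { Holder } from "../generated/schema";

export function handleTransfer(event: Transfer): void {
  let token = ERC20.bind(event.address);

  // Written as normal eth_calls. When the same calls are declared for this
  // handler in the manifest, graph-node can pre-execute them in parallel and
  // the handler reads the cached results instead of waiting on each call.
  let balance = token.try_balanceOf(event.params.to);
  let supply = token.try_totalSupply();

  let holder = new Holder(event.params.to.toHexString());
  if (!balance.reverted) {
    holder.balance = balance.value;
  }
  if (!supply.reverted) {
    holder.totalSupply = supply.value;
  }
  holder.save();
}
```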
@@ -2,7 +2,7 @@
title: Install the Graph CLI
---

> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/network/curating/).
## Getting Started

@@ -96,15 +96,15 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is

> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
### Getting The ABIs

The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:

- If you are building your own project, you will likely have access to your most current ABIs.
- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.

## SpecVersion Releases

| Version | Release notes |
| :-: | --- |
10 changes: 5 additions & 5 deletions website/pages/en/developing/creating-a-subgraph/ql-schema.mdx
@@ -83,7 +83,7 @@ The following scalars are supported in the GraphQL API:
| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |

### Enums

You can also create enums within a schema. Enums have the following syntax:

@@ -99,7 +99,7 @@ Once the enum is defined in the schema, you can use the string representation of

More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/).
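
As an illustration, a mapping sets an enum field using the string value of the enum. The sketch below assumes a `Token` entity whose `tokenStatus` field uses a `TokenStatus` enum; these names are assumptions, not defined above.

```typescript
// Sketch only: the `Token` entity and its `tokenStatus` enum field are
// assumed to be defined in schema.graphql.
import { Token } from "../generated/schema";

export function markSecondOwner(tokenId: string): void {
  let token = Token.load(tokenId);
  if (token == null) {
    token = new Token(tokenId);
  }

  // Enum fields are written using the string representation of the enum value.
  token.tokenStatus = "SecondOwner";
  token.save();
}
```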

### Entity Relationships

An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship.

Expand Down Expand Up @@ -137,7 +137,7 @@ type TokenBalance @entity {
}
```

### Reverse Lookups

Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.

@@ -221,7 +221,7 @@ query usersWithOrganizations {

This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore a subgraph that is often dramatically faster to index and to query.

### Adding comments to the schema

As per the GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below:

@@ -277,7 +277,7 @@ query {

> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest.
## Languages supported

Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary from language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token".

@@ -14,8 +14,6 @@ The **subgraph definition** consists of the following files:

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)

### Subgraph Capabilities

A single subgraph can:
@@ -81,6 +79,8 @@ dataSources:
## Subgraph Entries
> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/developing/creating-a-subgraph/ql-schema/).

The important entries to update for the manifest are:
- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases.
Expand Down Expand Up @@ -328,7 +328,7 @@ eventHandlers:

Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead.
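
As a short sketch of that null check in a handler, the snippet below uses an illustrative event type and import path; what is done with the receipt is also just an example.

```typescript
import { log } from "@graphprotocol/graph-ts";
// `Transfer` stands in for whatever generated event type this handler receives.
import { Transfer } from "../generated/ERC20/ERC20";

export function handleTransferWithReceipt(event: Transfer): void {
  let receipt = event.receipt;

  // `event.receipt` is only populated when `receipt: true` is set on the
  // event handler in the manifest; otherwise it is null.
  if (receipt == null) {
    log.info("No receipt available for tx {}", [event.transaction.hash.toHexString()]);
    return;
  }

  log.info("Tx {} used {} gas and emitted {} logs", [
    event.transaction.hash.toHexString(),
    receipt.gasUsed.toString(),
    receipt.logs.length.toString(),
  ]);
}
```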

## Order of Triggering Handlers

The triggers for a data source within a block are ordered using the following process:

