Griffin DSL is designed for DQ measurement: a SQL-like language for describing DQ domain requests. It is case insensitive and easy to learn.
Keywords:
- null, nan, true, false
- not, and, or
- in, between, like, is
- select, from, as, where, group, by, having, order, desc, asc, limit

Operators:
- !, &&, ||, =, !=, <, >, <=, >=, <>
- +, -, *, /, %
- (, )
- ., [, ]
Literals:
- string literal: "test", 'string 1', "hello \" world \" "
- number literal: 123, 33.5
- time literal: 3d, 5h, 4ms
- boolean literal: true, false
Selections:
- selection head (a data source or table name): source, target, `my table name`
- all fields selection: *, source.*, target.*
- field selection: source.age, target.name, user_id
- index selection: source.attributes[3]
- function selection: count(*), *.count(), source.user_id.count(), max(source.age)
- alias: source.user_id as id, target.user_name as name
Math expressions:
- math factor (a literal, a selection, a function, or a parenthesized expression): 123, max(1, 2, 3, 4), source.age, (source.age + 13)
- unary math expression: -(100 - source.score)
- binary math expression: source.age + 13, score * 2 + ratio
Logical expressions:
- in: source.country in ("USA", "CHN", "RSA")
- between: source.age between 3 and 30, source.age between (3, 30)
- like: source.name like "%abc%"
- is null: source.desc is not null
- is nan: source.age is not nan
- logical factor: (source.user_id = target.user_id AND source.age > target.age)
- unary logical expression: NOT source.has_data, !(source.age = target.age)
- binary logical expression (logical factors combined with and, or, and comparison operators): source.age = target.age OR source.ticket = target.tck

Functions:
- a function name with arguments: max(source.age, target.age), count(*)
Clauses:
- select clause: select user_id.count(), age.max() as max; source.user_id.count() as cnt, source.age.min()
- from clause: from source; from `target`
- where clause: where source.id = target.id and source.age = target.age
- group-by clause (with optional having): group by cntry; group by gender having count(*) > 50
- order-by clause: order by name; order by first_name desc, age asc
- limit clause: limit 5
An accuracy rule expression in Griffin DSL is a logical expression describing the mapping relation between data sources.
e.g. source.id = target.id and source.name = target.name and source.age between (target.age, target.age + 5)
A profiling rule expression in Griffin DSL is a SQL-like expression: a select clause first, followed by optional from, where, group-by, order-by, and limit clauses, in that order.
e.g. source.gender, source.id.count() where source.age > 20 group by source.gender
e.g. select country, max(age), min(age), count(*) as cnt from source group by country order by cnt desc limit 5
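In a Griffin measure configuration, a DSL rule is carried in a JSON rule block. A minimal sketch, assuming the same JSON shape as the spark-sql example later in this document (the "dq.type" key and the rule name "accu" are assumptions and may differ across Griffin versions):

```json
{
  "dsl.type": "griffin-dsl",
  "dq.type": "accuracy",
  "name": "accu",
  "rule": "source.id = target.id and source.name = target.name"
}
```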
Griffin DSL is defined for DQ measurement, to describe problems in the DQ domain. In practice, Griffin translates Griffin DSL rules into Spark SQL rules and calculates them in the Spark SQL engine. The DQ domain has multiple dimensions, and each dimension needs to be translated differently.
For accuracy, we need the match count between source and target; the rule describes the mapping relation between the data sources, and Griffin translates the DSL rule into multiple SQL rules. For example, the DSL rule source.id = target.id and source.name = target.name represents the match condition of accuracy. After translation, the SQL rules are as below:
1. Get the miss items, saved as table miss_items:
   SELECT source.* FROM source LEFT JOIN target ON coalesce(source.id, '') = coalesce(target.id, '') and coalesce(source.name, '') = coalesce(target.name, '') WHERE (NOT (source.id IS NULL AND source.name IS NULL)) AND (target.id IS NULL AND target.name IS NULL)
2. Get the miss count, saved as table miss_count:
   SELECT COUNT(*) AS miss FROM miss_items
3. Get the total count, saved as table total_count:
   SELECT COUNT(*) AS total FROM source
4. Get the metrics, saved as table accuracy:
   SELECT miss_count.miss AS miss, total_count.total AS total, (total_count.total - miss_count.miss) AS matched FROM miss_count FULL JOIN total_count

After the translation, the metrics will be persisted in table accuracy.
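To make the semantics of these four steps concrete, here is a minimal Python sketch that mirrors them on plain lists of dicts. Plain Python stands in for the Spark SQL engine, and the function names (accuracy, is_miss) are illustrative, not Griffin APIs; the null-handling of the WHERE clause is simplified.

```python
# Mirror of the translated accuracy SQL using plain Python lists of dicts
# in place of Spark tables. Normalizing None to "" plays the role of
# coalesce(x, '') = coalesce(y, '') in the join condition.

def is_miss(row, target, keys):
    """A source row is a miss if no target row matches on all keys
    (null-safe, treating None the way coalesce(x, '') does)."""
    def norm(v):
        return "" if v is None else v
    return not any(all(norm(row[k]) == norm(t[k]) for k in keys) for t in target)

def accuracy(source, target, keys):
    # Step 1: miss_items -- source rows with no matching target row
    miss_items = [r for r in source if is_miss(r, target, keys)]
    # Steps 2-4: miss_count, total_count, and the joined accuracy metric
    miss = len(miss_items)
    total = len(source)
    return {"miss": miss, "total": total, "matched": total - miss}

source = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": None}]
target = [{"id": 1, "name": "a"}, {"id": 3, "name": None}]
print(accuracy(source, target, ["id", "name"]))  # {'miss': 1, 'total': 3, 'matched': 2}
```

The row with id 2 has no counterpart in target, so it lands in miss_items; the other two rows match, giving matched = total - miss.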
For profiling, the request is usually an aggregation of the data. The rule is mostly the same as SQL, but supports only the select, from, where, group-by, having, order-by, and limit clauses, which can describe most profiling requests. For any more complicated request, you can use a SQL rule directly.
For example, the DSL rule source.cntry, source.id.count(), source.age.max() group by source.cntry represents a profiling request. After translation, the SQL rule, saved as table profiling, is as below:
SELECT source.cntry, count(source.id), max(source.age) FROM source GROUP BY source.cntry
After the translation, the metrics will be persisted in table profiling.
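The group-by translation above can be sketched the same way. The following plain-Python stand-in (illustrative, not a Griffin API) computes the same count/max aggregation per group:

```python
from collections import defaultdict

def profile(rows, group_key):
    """Plain-Python stand-in for:
    SELECT cntry, count(id), max(age) FROM source GROUP BY cntry"""
    groups = defaultdict(list)
    for r in rows:
        groups[r[group_key]].append(r)
    return {
        # count(source.id) skips nulls, hence the "is not None" filter
        g: {"count": sum(1 for r in rs if r["id"] is not None),
            "max_age": max(r["age"] for r in rs)}
        for g, rs in groups.items()
    }

source = [
    {"id": 1, "cntry": "USA", "age": 30},
    {"id": 2, "cntry": "USA", "age": 45},
    {"id": 3, "cntry": "CHN", "age": 22},
]
print(profile(source, "cntry"))
# {'USA': {'count': 2, 'max_age': 45}, 'CHN': {'count': 1, 'max_age': 22}}
```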
You can simply use Griffin DSL rules to describe your problem in the DQ domain; for more complicated requirements, you can also use the alternative rule types supported by Griffin.
Griffin supports Spark SQL directly, so you can write a rule in SQL like this:
{ "dsl.type": "spark-sql", "name": "source", "rule": "SELECT count(id) AS cnt, max(timestamp) AS fresh_time FROM source" }
Griffin will calculate it in the Spark SQL engine directly.
Griffin also supports some other operations on data frames in Spark, such as converting a data frame of JSON strings into a data frame with the extracted object schema. For example:
{ "dsl.type": "df-opr", "name": "ext_source", "rule": "from_json", "details": { "df.name": "json_source" } }
Griffin will perform the operation to extract the JSON strings. You can also extend the df-opr engine and df-opr adaptor in Griffin to support more types of data frame operations.
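To illustrate what such a from_json-style operation does, here is a small Python sketch using the standard json module as a stand-in for Spark's JSON parsing (the from_json function here is illustrative, not Griffin code):

```python
import json

def from_json(rows, column):
    """Turn a 'data frame' of JSON strings into rows of extracted objects,
    mimicking the df-opr from_json rule."""
    return [json.loads(r[column]) for r in rows]

# A "json_source" data frame: each row holds a JSON string in one column.
json_source = [
    {"value": '{"id": 1, "name": "a"}'},
    {"value": '{"id": 2, "name": "b"}'},
]
ext_source = from_json(json_source, "value")
print(ext_source[0]["name"])  # a
```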
The Griffin engine runs on Spark; it works in two phases: the pre-proc phase and the run phase.