How to fix the Spring Data MongoDB "$switch could not find a matching branch" error
I have aggregation logic running with Spring Data MongoDB behind one of my services.
The aggregation logic is as follows:
MatchOperation dateMatchOperation = Aggregation.match(
        Criteria.where("date")
                .gte(new Date(startStamp)).lte(new Date(endStamp)));
MatchOperation propertyMatchOperation = Aggregation.match(
        Criteria.where("abc1").is(abcVal1)
                .and("abc2").is(abcVal2)
                .and("abc3").is(abcVal3)
                .and("abc4").is(abcVal4)
                .and("abc5").is(abcVal5));
List<Date> dates = new ArrayList<>();
// Create the list of interval boundaries which will be passed to Mongo
for (int i = 0; i < (endStamp - startStamp) / aggregationInterval; i++) {
    dates.add(new Date(startStamp + i * aggregationInterval));
}
BucketOperation bucketOperation = Aggregation.bucket("date")
        .withBoundaries(dates.toArray())
        .andOutput(AccumulatorOperators.Sum.sumOf(aggregationInput)).as("value")
        .andOutput(AccumulatorOperators.Min.minOf("date")).as("from")
        .andOutput(AccumulatorOperators.Max.maxOf("date")).as("to");
AggregationOptions aggregationOptions = AggregationOptions.builder().allowDiskUse(true).build();
AggregationResults<MetricAggregationResult> aggregationResults = mongoTemplate.aggregate(
        Aggregation.newAggregation(dateMatchOperation, propertyMatchOperation, bucketOperation)
                .withOptions(aggregationOptions),
        "mongocollectionname", MetricAggregationResult.class);
I'm testing this against 4.5 million documents. It works fine when aggregationInterval is small and the ArrayList has many elements, but as I gradually increased the aggregation interval I noticed that past a certain point the aggregation throws the following error:
com.mongodb.MongoCommandException: Command failed with error 40066: '$switch could not find a matching branch for an input, and no default was specified.'
This is weird because I don't use any $switch stage in my logic. As you can see from the AggregationOptions, I thought the aggregation was hitting Mongo's 100MB memory limit, so I allowed disk use.
At this point I was stuck and had no idea what was causing the problem (I searched all over StackOverflow for this $switch error but couldn't find anything, since everyone asking about it actually uses $switch somewhere in their code), but I was fairly confident it was something the MongoDB team had missed.
Solution
Use BucketAutoOperation
BucketAutoOperation bucketAutoOperation = Aggregation.bucketAuto("date", 2)
        .andOutput(AccumulatorOperators.Sum.sumOf(aggregationInput)).as("value")
        .andOutput(AccumulatorOperators.Min.minOf("date")).as("from")
        .andOutput(AccumulatorOperators.Max.maxOf("date")).as("to");
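bucketAuto lets MongoDB compute the boundaries itself so that every document lands in some bucket, which sidesteps the missing-default failure entirely. Here is a minimal sketch of wiring it into the pipeline from the question (assuming the same dateMatchOperation, propertyMatchOperation, aggregationOptions and collection name as above):

// Sketch: the question's pipeline with the auto-bucket stage swapped in.
AggregationResults<MetricAggregationResult> aggregationResults = mongoTemplate.aggregate(
        Aggregation.newAggregation(dateMatchOperation, propertyMatchOperation, bucketAutoOperation)
                .withOptions(aggregationOptions),
        "mongocollectionname", MetricAggregationResult.class);

The trade-off is that you specify the number of buckets (2 here) rather than their exact boundaries, so MongoDB decides where the edges fall.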
Alternatively, you can keep the bucket operation and specify a default bucket.
To use a bucket operation, we specify the following:
groupBy: the field that the boundaries will apply to. This field must be numeric or a date field.
boundaries: an array of boundary points. Documents whose groupBy field falls between two adjacent elements in the array go into that bucket. The between test is half-open: the lower point is inclusive and the upper point is exclusive, which is the behavior developers would expect.
default: any document in the pipeline which doesn't go into one of the buckets goes into the default bucket; if no default is specified, such a document fails the whole aggregation with the $switch error above (see the sketch below). Using a match operation in the pipeline before the bucket operation removes documents which shouldn't be processed.
output: an aggregation expression that generates the output document for each bucket.
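This is exactly what happened in the question: MongoDB implements $bucket internally on top of $switch, which is why the error names a stage that never appears in the code. The boundary loop also never emits endStamp itself (the integer division truncates), while the preceding match keeps documents up to and including endStamp. A minimal sketch of the gap, using hypothetical numbers:

// Hypothetical values to illustrate the uncovered tail left by the boundary loop.
long startStamp = 0L, endStamp = 1_000L, aggregationInterval = 300L;

// Integer division: (1000 - 0) / 300 = 3, so the loop emits boundaries 0, 300, 600.
for (int i = 0; i < (endStamp - startStamp) / aggregationInterval; i++) {
    System.out.println(startStamp + i * aggregationInterval); // prints 0, 300, 600
}
// With boundaries {0, 300, 600} the buckets are [0, 300) and [300, 600) only.
// The match stage keeps dates up to 1000 inclusive, so any document dated in
// [600, 1000] matches no bucket, and with no default specified the internal
// $switch fails with error 40066.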
In the above code, we missed specifying the default.
final BucketOperation bucketOperation = Aggregation.bucket("date")
        .withBoundaries(now.minus(10, ChronoUnit.DAYS), now.minus(9, ChronoUnit.DAYS),
                now.minus(8, ChronoUnit.DAYS), now.minus(7, ChronoUnit.DAYS),
                now.minus(6, ChronoUnit.DAYS), now.minus(5, ChronoUnit.DAYS),
                now.minus(4, ChronoUnit.DAYS), now.minus(3, ChronoUnit.DAYS),
                now.minus(2, ChronoUnit.DAYS), now.minus(1, ChronoUnit.DAYS),
                now)
        .withDefaultBucket("defaultBucket")
        .andOutput(countingExpression).as("count");
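Applied to the pipeline from the question, the fix is one extra builder call (a sketch; "outOfRange" is just a hypothetical identifier for the catch-all bucket):

// The question's bucket operation plus a default bucket, so documents outside
// the boundaries no longer abort the aggregation.
BucketOperation bucketOperation = Aggregation.bucket("date")
        .withBoundaries(dates.toArray())
        .withDefaultBucket("outOfRange") // hypothetical id for unmatched documents
        .andOutput(AccumulatorOperators.Sum.sumOf(aggregationInput)).as("value")
        .andOutput(AccumulatorOperators.Min.minOf("date")).as("from")
        .andOutput(AccumulatorOperators.Max.maxOf("date")).as("to");

Documents that land in the default bucket come back with _id "outOfRange", so they can be filtered out afterwards, or the boundary list can be extended past endStamp so that nothing falls outside.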
For reference, see: https://chiralsoftware.com/idea/spring-data-aggregation-operations