Spring Data MongoDB "$switch could not find a matching branch" error

How to fix the Spring Data MongoDB "$switch could not find a matching branch" error

I have aggregation logic running behind one of my services, using Spring with MongoDB.

The aggregation logic is as follows:

MatchOperation dateMatchOperation = Aggregation.match(
    Criteria.where("date")
            .gte(new Date(startStamp))
            .lte(new Date(endStamp)));

MatchOperation propertyMatchOperation = Aggregation.match(
    Criteria.where("abc1").is(abcVal1)
            .and("abc2").is(abcVal2)
            .and("abc3").is(abcVal3)
            .and("abc4").is(abcVal4)
            .and("abc5").is(abcVal5)
);

List<Date> dates = new ArrayList<>();
// Build the list of interval boundaries that will be passed to Mongo
for (int i = 0; i < (endStamp - startStamp) / aggregationInterval; i++) {
    dates.add(new Date(startStamp + i * aggregationInterval));
}


BucketOperation bucketOperation = Aggregation.bucket("date").withBoundaries(dates.toArray())
    .andOutput(AccumulatorOperators.Sum.sumOf(aggregationInput)).as("value")
    .andOutput(AccumulatorOperators.Min.minOf("date")).as("from")
    .andOutput(AccumulatorOperators.Max.maxOf("date")).as("to");


AggregationOptions aggregationOptions = AggregationOptions.builder().allowDiskUse(true).build();
AggregationResults<MetricAggregationResult> aggregationResults = mongoTemplate.aggregate(
    Aggregation.newAggregation(dateMatchOperation, propertyMatchOperation, bucketOperation)
        .withOptions(aggregationOptions), "mongocollectionname", MetricAggregationResult.class);

I am testing this against 4.5 million documents. When aggregationInterval is small and the ArrayList has many elements, it works fine. But as I gradually increased the aggregation interval, at some point the aggregation started throwing the following error:

com.mongodb.MongoCommandException: Command failed with error 40066: '$switch could not find a matching branch for an input, and no default was specified.'

This is strange, because I do not use any $switch stage in my logic. The only out-of-the-ordinary thing I do is pass AggregationOptions: I assumed the aggregation was hitting Mongo's 100 MB limit, so I allowed disk use.

At this point I am stuck and have no idea what is causing the problem. (I searched StackOverflow for this $switch error, but found nothing, because everyone asking about it actually uses $switch somewhere in their code.) I am fairly confident this is something the Mongo team has overlooked.
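For what it's worth, the error message is plausible even without an explicit $switch: MongoDB implements the $bucket stage on top of $switch internally, and the likely trigger here is that the generated boundary list does not cover the whole [startStamp, endStamp) window. Below is a JDK-only sketch of the boundary computation from the question, using made-up timestamps (a 10-second window with a 3-second interval); no Mongo is required to see the gap:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class BoundaryGap {
    // Same boundary computation as in the question
    static List<Date> boundaries(long startStamp, long endStamp, long interval) {
        List<Date> dates = new ArrayList<>();
        for (int i = 0; i < (endStamp - startStamp) / interval; i++) {
            dates.add(new Date(startStamp + i * interval));
        }
        return dates;
    }

    public static void main(String[] args) {
        // Hypothetical values: 10-second window, 3-second aggregation interval
        long start = 0, end = 10_000, interval = 3_000;
        List<Date> dates = boundaries(start, end, interval);
        // Boundaries come out at 0, 3000 and 6000 ms, i.e. only the
        // buckets [0, 3000) and [3000, 6000) exist.
        long last = dates.get(dates.size() - 1).getTime();
        // Any document dated in [6000, 10000) matches no bucket; without a
        // default bucket, $bucket fails with error 40066.
        System.out.println("last boundary: " + last + ", window end: " + end);
        // prints "last boundary: 6000, window end: 10000"
    }
}
```

The smaller the interval, the closer the last boundary sits to endStamp, which would explain why the error only shows up as the interval grows.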

Solution

Use BucketAutoOperation

BucketAutoOperation bucketAutoOperation = Aggregation.bucketAuto("date", 2)
    .andOutput(AccumulatorOperators.Sum.sumOf(aggregationInput)).as("value")
    .andOutput(AccumulatorOperators.Min.minOf("date")).as("from")
    .andOutput(AccumulatorOperators.Max.maxOf("date")).as("to");
To use a bucket operation, we specify the following:

groupBy: the field that the boundaries will apply to. This field must be numeric or a date field.
boundaries: an array of boundary points. A document whose groupBy field falls between two adjacent elements of the array goes into that bucket. The test is half-open: the first point is inclusive and the second point is exclusive, which is the behavior developers would expect.
default: any document in the pipeline that does not fall into one of the buckets goes into the default bucket. If a document falls outside every bucket and no default is specified, the aggregation fails with error 40066, so either specify a default or use a match operation earlier in the pipeline to remove documents that should not be processed.
output: an aggregation expression that generates the output document for each bucket.
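The half-open interval rule and the role of the default bucket can be sketched in plain Java. This is a toy illustration of the semantics described above, not Spring Data or MongoDB API; the names classify and defaultBucket are made up:

```java
import java.util.List;

public class BucketRule {
    // $bucket-style classification: half-open intervals [b_i, b_{i+1}),
    // with a default label for any value outside all intervals.
    static String classify(long value, List<Long> boundaries, String defaultBucket) {
        for (int i = 0; i < boundaries.size() - 1; i++) {
            if (value >= boundaries.get(i) && value < boundaries.get(i + 1)) {
                return "bucket-" + boundaries.get(i); // a bucket's id is its lower bound
            }
        }
        return defaultBucket; // without this fallback, MongoDB raises error 40066
    }

    public static void main(String[] args) {
        List<Long> bounds = List.of(0L, 10L, 20L);
        System.out.println(classify(5, bounds, "other"));  // prints "bucket-0"
        System.out.println(classify(10, bounds, "other")); // prints "bucket-10" (lower bound inclusive)
        System.out.println(classify(25, bounds, "other")); // prints "other" (out of range -> default)
    }
}
```

Note that a value equal to the last boundary (20 here) also falls through to the default, which matches the exclusive upper bound described above.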


In the code above, we missed specifying a default bucket.

final BucketOperation bucketOperation = Aggregation.bucket("date")
    .withBoundaries(now.minus(10, ChronoUnit.DAYS), now.minus(9, ChronoUnit.DAYS),
        now.minus(8, ChronoUnit.DAYS), now.minus(7, ChronoUnit.DAYS),
        now.minus(6, ChronoUnit.DAYS), now.minus(5, ChronoUnit.DAYS),
        now.minus(4, ChronoUnit.DAYS), now.minus(3, ChronoUnit.DAYS),
        now.minus(2, ChronoUnit.DAYS), now.minus(1, ChronoUnit.DAYS), now)
    .withDefaultBucket("defaultBucket")
    .andOutput(countingExpression).as("count");

See the following link for more details: https://chiralsoftware.com/idea/spring-data-aggregation-operations

