
linear refactor #830

Open
lhx28 wants to merge 4 commits into LeavesMC:master from lhx28:fix-region-threads

Conversation

@lhx28

@lhx28 lhx28 commented Mar 14, 2026

The server uses linear v2 as its storage format, with virtual threads disabled (enabling them makes memory blow up even faster: the JVM virtual-thread scheduler holds strong references to every virtual-thread instance, so their memory is never reclaimed). Because every opened region file (.linear) spins up a new thread, the server accumulates a huge number of linear I/O threads.

[image]
[image]

So I changed it so that, instead of starting a new thread for every file load, all loads go through a global thread pool.
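The change described above — one shared pool instead of a thread per opened file — could be sketched roughly like this. This is a minimal illustration; the class and field names are invented for the example, not the actual Leaves code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: one shared, bounded pool for all .linear region-file
// I/O, instead of spawning a fresh thread per opened file.
public final class LinearIOPool {
    // Bounded platform-thread pool; daemon threads so they never block shutdown.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(
        Math.max(2, Runtime.getRuntime().availableProcessors() / 2),
        r -> {
            Thread t = new Thread(r, "linear-io");
            t.setDaemon(true);
            return t;
        });

    private LinearIOPool() {}

    public static Future<?> submit(Runnable task) {
        return POOL.submit(task);
    }
}
```

With this shape, the number of I/O threads is capped by the pool size regardless of how many region files are opened over the server's lifetime.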

Sorry for bothering you on the weekend.

Member
@Lumine1909 Lumine1909 left a comment

Thanks for your PR!
Given that the Linear format is an extremely dangerous experimental feature, I'd advise against continuing to use it (
Also, in my opinion this file has the worst code quality in the Leaves project and should be refactored wholesale (

Comment thread leaves-server/src/main/java/org/leavesmc/leaves/LeavesConfig.java
@lhx28 lhx28 requested a review from Lumine1909 March 14, 2026 05:11
@lhx28
Author

lhx28 commented Mar 14, 2026

Fixed; please take another look.

Member
@Lumine1909 Lumine1909 left a comment

Meow (

@Lumine1909 Lumine1909 self-requested a review March 15, 2026 03:02
Member
@Lumine1909 Lumine1909 left a comment

Accidentally clicked approve (

@lhx28 lhx28 requested a review from Lumine1909 March 15, 2026 03:52
@lhx28
Author

lhx28 commented Mar 15, 2026

Fixed; please take another look. Really sorry for the trouble. Meow (qwq)

Member
@Lumine1909 Lumine1909 left a comment

The rest should be free of major issues (

@Lumine1909 Lumine1909 requested a review from s-yh-china March 15, 2026 04:13
@lhx28
Author

lhx28 commented Mar 15, 2026

Done; please take another look QAQ

@lhx28
Author

lhx28 commented Mar 15, 2026

Also, just remembered: shouldn't public boolean regionFileOpen = false; be changed to public volatile boolean regionFileOpen = false; to fix cross-thread visibility?
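A minimal illustration of the visibility concern raised above (the field name comes from the comment; the surrounding class is invented for the example):

```java
// Hypothetical illustration: without volatile, a write made by one thread
// may stay invisible to another thread indefinitely. Declaring the field
// volatile establishes a happens-before relationship between the write and
// any subsequent read of the field by another thread.
public final class RegionFileState {
    public volatile boolean regionFileOpen = false;
}
```

This fixes visibility only; compound operations like check-then-close still need a lock or an AtomicBoolean compare-and-set.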

@lhx28
Author

lhx28 commented Mar 15, 2026

Meow — there are actually a couple more things I'd like to optimize; not sure whether they belong in a new PR or here:
1. Replace all the synchronized blocks with a read-write lock, allowing concurrent reads and exclusive writes.
2. Change the file read byte[] fileContent = Files.readAllBytes(this.regionFile); to a chunked read, to reduce memory, qwq:

    byte[] fileContent;
    try (InputStream in = Files.newInputStream(this.regionFile);
         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        byte[] buffer = new byte[4096 << 10]; // 4 MB buffer
        int len;
        while ((len = in.read(buffer)) != -1) {
            out.write(buffer, 0, len);
        }
        fileContent = out.toByteArray();
    }
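Point 1 above (swapping synchronized for a read-write lock) could look roughly like this. The class and method names are hypothetical, not the actual LinearRegionFile code:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of replacing coarse synchronized blocks with a
// ReadWriteLock: many readers may proceed concurrently, while a writer
// holds the lock exclusively.
public final class RegionBuffer {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private byte[] data = new byte[0];

    public int size() {
        lock.readLock().lock();   // shared: concurrent reads are allowed
        try {
            return data.length;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void replace(byte[] newData) {
        lock.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            data = newData.clone();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

This pays off when reads vastly outnumber writes, which is typical for region-file lookups; for write-heavy paths a plain mutex can be cheaper.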

@s-yh-china
Member

If you want to do those, feel free to do them together and rename the PR to "linear refactor"; something like that. You can also convert it to a draft PR in the meantime.

@Lumine1909
Member

Remember to take a look at lhx28#1 qwq

@lhx28 lhx28 changed the title Fix linear threads growing without bound over server uptime linear refactor Mar 15, 2026
@lhx28 lhx28 marked this pull request as draft March 15, 2026 06:44
@lhx28
Author

lhx28 commented Mar 15, 2026

Additional testing notes qwq:

1. Since the 1.21.8 and 1.21.10 code is identical, I built jars for both versions from the same code (1.21.10 isn't out on the official site yet, qwq).

2. In the test-server environment (Java 24, mc 1.21.8 and mc 1.21.10 — four runs total: both versions, virtual threads on and off), I tested the config

    region:
      format: LINEAR
      linear:
        max-flush-per-run: 16
        version: V2
        flush-delay-ms: 500
        use-virtual-thread: false
        flush-max-threads: 8
        compression-level: 8

as well as the config

    region:
      format: LINEAR
      linear:
        max-flush-per-run: 16
        version: V2
        flush-delay-ms: 500
        use-virtual-thread: true
        flush-max-threads: 8
        compression-level: 8

After tests such as unloading chunks via tp teleports and /stop shutdowns, both versions with both configs showed no anomalies, and saves remain compatible with the old format (the compression format wasn't changed, qwq). The test server felt noticeably faster (compared with the variant without the read-write lock change), approaching uncompressed speed (the test server hardware is on the better side).

3. On the production GraalVM 25 + 1.21.8 server (roughly 110 plugins, including Slimefun), it has now run stably for 3 h with no player-visible impact. (Chunk compression causes visible CPU spikes when players join and leave, but since compression runs in the background it does not affect TPS and can be tuned via the config; the production server's GC frequency is on the high side, which can be addressed via launch flags.)

Current production config:

    region:
      format: LINEAR
      linear:
        version: V2
        flush-delay-ms: 300
        max-flush-per-run: 8
        use-virtual-thread: false
        flush-max-threads: 8
        compression-level: 8

[image]
[image]
[image]

@lhx28 lhx28 marked this pull request as ready for review March 15, 2026 11:26
@lhx28 lhx28 requested a review from Lumine1909 March 15, 2026 11:32
@lhx28
Author
Author

lhx28 commented Mar 15, 2026

Please take another look — sorry for the trouble qwq

Member
@Lumine1909 Lumine1909 left a comment

Nothing else remaining; just remember to hit reformat (

@lhx28 lhx28 requested a review from Lumine1909 March 17, 2026 07:15
@lhx28
Author

lhx28 commented Mar 17, 2026

All the requested changes are done; please take another look qwq

Lumine1909
Lumine1909 previously approved these changes Mar 19, 2026

@lhx28
Author

lhx28 commented Mar 20, 2026

qwq — I just realized that following the suggestion to make close synchronized and drop the read-write lock could lose the last modification still sitting in the queue.
I'll switch the PR back to draft to fix this lock logic, and switch it back for your review once it's done.
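The hazard described above can be sketched minimally: a close() that only takes the lock without first draining the pending-write queue silently drops whatever was still queued. All names here are hypothetical, invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the close() hazard: the safe close() must drain
// every pending write before marking the file closed, or the last queued
// modification is lost.
public final class FlushQueue {
    private final Queue<byte[]> pending = new ArrayDeque<>();
    private final StringBuilder disk = new StringBuilder(); // stand-in for the file
    private boolean closed = false;

    public synchronized void enqueue(byte[] chunk) {
        if (closed) throw new IllegalStateException("already closed");
        pending.add(chunk);
    }

    // Safe close: flush everything still queued, then mark closed.
    public synchronized void close() {
        byte[] chunk;
        while ((chunk = pending.poll()) != null) {
            disk.append(new String(chunk)); // "write" the chunk to disk
        }
        closed = true;
    }

    public synchronized String written() {
        return disk.toString();
    }
}
```

An unsafe variant that sets closed = true without the drain loop would discard anything enqueued between the last flush and the close.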

@lhx28 lhx28 marked this pull request as draft March 20, 2026 06:46
@lhx28 lhx28 marked this pull request as ready for review March 20, 2026 09:25
@lhx28
Author

lhx28 commented Mar 20, 2026

The close lock logic is fixed — back to the safe version qwq. Please take another look~

@lhx28
Author

lhx28 commented Mar 21, 2026

Hold on a moment.

@lhx28 lhx28 marked this pull request as draft March 21, 2026 17:47
@lhx28
Author

lhx28 commented Mar 21, 2026

I'm really sorry — there are actually still some leftover issues in linear that need fixing (the earlier refactor only covered the threading part; after further observation I also found memory overflow, caused by excessive data being written to disk) (it's quite complicated). I suggest letting it run on my test server for a day before committing.

@lhx28
Author

lhx28 commented Mar 21, 2026

Sorry for all the trouble.

@lhx28
Author

lhx28 commented Mar 21, 2026

Sorry to interrupt you at a time like this.

@lhx28
Author

lhx28 commented Mar 21, 2026

The test results should be in around 15:00 tomorrow; let's pick this up then (if that works for you). Sorry again.

@s-yh-china
Member

No problem — take your time testing; I don't see any issues for now.
The only thing: please don't split one message across this many separate comments; every comment sends an email to everyone subscribed to this project.

@lhx28
Author

lhx28 commented Mar 22, 2026

Really sorry for splitting the earlier messages into so many comments and bothering you qwq!
The follow-up changes are now all complete; there shouldn't be any more.
After a 13 h run on the production server, the memory-overflow and excessive-disk-write problems appear to be resolved (the high memory in the screenshots is mainly due to player-count growth). The server runs stably, chunk flushing and memory unloading both behave correctly, and there are no errors.

[4 screenshots attached]

I'm taking the PR out of draft now — please review it again!

@lhx28 lhx28 marked this pull request as ready for review March 22, 2026 06:29
MC-XiaoHei
MC-XiaoHei previously approved these changes Mar 23, 2026
Comment thread leaves-server/src/main/java/org/leavesmc/leaves/LeavesConfig.java Outdated
@lhx28
Author

lhx28 commented Mar 23, 2026

Got it qwq! I'll standardize on ms units then — thanks for the suggestion~

s-yh-china
s-yh-china previously approved these changes Mar 29, 2026
Member
@s-yh-china s-yh-china left a comment

Sorry for taking a whole week to re-review. I believe your changes are now sound, and this work deserves recognition. All that remains is some minor formatting: you can simply run a format action on LinearRegionFile in IDEA; it should produce a small number of changes.
The real issue to note is that some of your commits appear to be missing signatures. This may be a bug in the GitHub web UI, but it still prevents us from merging your PR. If possible, try squashing the relevant commits of this PR locally, re-signing them together, and then doing a single force push.

@lhx28
Author

lhx28 commented Mar 30, 2026

Sorry Lumine1909 — your merge commits seem to have been squashed together QAQ; I'm really not great with GitHub.
All commits are now squashed and merged; please take a look~

@Lumine1909
Member

> Sorry Lumine1909 — your merge commits seem to have been squashed together QAQ; I'm really not great with GitHub. All commits are now squashed and merged; please take a look~

Still no signature (

@lhx28 lhx28 force-pushed the fix-region-threads branch from 8fe385d to 3f8ce07 Compare March 30, 2026 11:06
@lhx28
Author

lhx28 commented Mar 30, 2026

Very sorry — please take a look~ QAQ

@lhx28 lhx28 marked this pull request as draft April 5, 2026 10:18
@lhx28 lhx28 marked this pull request as ready for review April 5, 2026 22:41
@lhx28 lhx28 marked this pull request as draft April 8, 2026 00:04
@lhx28 lhx28 marked this pull request as ready for review April 8, 2026 04:04
@lhx28 lhx28 marked this pull request as draft April 9, 2026 00:05
@lhx28 lhx28 marked this pull request as ready for review April 9, 2026 00:37