diff --git a/CN/modules/ROOT/images/media/image10.png b/CN/modules/ROOT/images/media/image10.png new file mode 100644 index 0000000..7c3334d Binary files /dev/null and b/CN/modules/ROOT/images/media/image10.png differ diff --git a/CN/modules/ROOT/images/media/image11.png b/CN/modules/ROOT/images/media/image11.png new file mode 100644 index 0000000..56ffaab Binary files /dev/null and b/CN/modules/ROOT/images/media/image11.png differ diff --git a/CN/modules/ROOT/images/media/image12.png b/CN/modules/ROOT/images/media/image12.png new file mode 100644 index 0000000..4c6c784 Binary files /dev/null and b/CN/modules/ROOT/images/media/image12.png differ diff --git a/CN/modules/ROOT/images/media/image13.png b/CN/modules/ROOT/images/media/image13.png new file mode 100644 index 0000000..8ff015f Binary files /dev/null and b/CN/modules/ROOT/images/media/image13.png differ diff --git a/CN/modules/ROOT/images/media/image14.png b/CN/modules/ROOT/images/media/image14.png new file mode 100644 index 0000000..0f8ea22 Binary files /dev/null and b/CN/modules/ROOT/images/media/image14.png differ diff --git a/CN/modules/ROOT/images/media/image15.png b/CN/modules/ROOT/images/media/image15.png new file mode 100644 index 0000000..11ae1ce Binary files /dev/null and b/CN/modules/ROOT/images/media/image15.png differ diff --git a/CN/modules/ROOT/images/media/image16.png b/CN/modules/ROOT/images/media/image16.png new file mode 100644 index 0000000..a78afba Binary files /dev/null and b/CN/modules/ROOT/images/media/image16.png differ diff --git a/CN/modules/ROOT/images/media/image17.png b/CN/modules/ROOT/images/media/image17.png new file mode 100644 index 0000000..41f4178 Binary files /dev/null and b/CN/modules/ROOT/images/media/image17.png differ diff --git a/CN/modules/ROOT/images/media/image18.png b/CN/modules/ROOT/images/media/image18.png new file mode 100644 index 0000000..5eab6cd Binary files /dev/null and b/CN/modules/ROOT/images/media/image18.png differ diff --git a/CN/modules/ROOT/images/media/image19.png b/CN/modules/ROOT/images/media/image19.png new file mode 100644 index 0000000..4c05fa0 Binary files /dev/null and b/CN/modules/ROOT/images/media/image19.png differ diff --git a/CN/modules/ROOT/images/media/image20.png b/CN/modules/ROOT/images/media/image20.png new file mode 100644 index 0000000..8818128 Binary files /dev/null and b/CN/modules/ROOT/images/media/image20.png differ diff --git a/CN/modules/ROOT/images/media/image21.png b/CN/modules/ROOT/images/media/image21.png new file mode 100644 index 0000000..51d8c60 Binary files /dev/null and b/CN/modules/ROOT/images/media/image21.png differ diff --git a/CN/modules/ROOT/images/media/image22.png b/CN/modules/ROOT/images/media/image22.png new file mode 100644 index 0000000..41a6f74 Binary files /dev/null and b/CN/modules/ROOT/images/media/image22.png differ diff --git a/CN/modules/ROOT/images/media/image23.png b/CN/modules/ROOT/images/media/image23.png new file mode 100644 index 0000000..68444c1 Binary files /dev/null and b/CN/modules/ROOT/images/media/image23.png differ diff --git a/CN/modules/ROOT/images/media/image24.png b/CN/modules/ROOT/images/media/image24.png new file mode 100644 index 0000000..c63ef9b Binary files /dev/null and b/CN/modules/ROOT/images/media/image24.png differ diff --git a/CN/modules/ROOT/images/media/image25.png b/CN/modules/ROOT/images/media/image25.png new file mode 100644 index 0000000..7427fc7 Binary files /dev/null and b/CN/modules/ROOT/images/media/image25.png differ diff --git a/CN/modules/ROOT/images/media/image26.png 
b/CN/modules/ROOT/images/media/image26.png new file mode 100644 index 0000000..61e1007 Binary files /dev/null and b/CN/modules/ROOT/images/media/image26.png differ diff --git a/CN/modules/ROOT/images/media/image27.png b/CN/modules/ROOT/images/media/image27.png new file mode 100644 index 0000000..5dfa6fa Binary files /dev/null and b/CN/modules/ROOT/images/media/image27.png differ diff --git a/CN/modules/ROOT/images/media/image28.png b/CN/modules/ROOT/images/media/image28.png new file mode 100644 index 0000000..aa5fd09 Binary files /dev/null and b/CN/modules/ROOT/images/media/image28.png differ diff --git a/CN/modules/ROOT/images/media/image29.png b/CN/modules/ROOT/images/media/image29.png new file mode 100644 index 0000000..4e329ef Binary files /dev/null and b/CN/modules/ROOT/images/media/image29.png differ diff --git a/CN/modules/ROOT/images/media/image3.png b/CN/modules/ROOT/images/media/image3.png new file mode 100644 index 0000000..62902e6 Binary files /dev/null and b/CN/modules/ROOT/images/media/image3.png differ diff --git a/CN/modules/ROOT/images/media/image30.png b/CN/modules/ROOT/images/media/image30.png new file mode 100644 index 0000000..c164111 Binary files /dev/null and b/CN/modules/ROOT/images/media/image30.png differ diff --git a/CN/modules/ROOT/images/media/image31.png b/CN/modules/ROOT/images/media/image31.png new file mode 100644 index 0000000..bd660a8 Binary files /dev/null and b/CN/modules/ROOT/images/media/image31.png differ diff --git a/CN/modules/ROOT/images/media/image32.png b/CN/modules/ROOT/images/media/image32.png new file mode 100644 index 0000000..510d7dc Binary files /dev/null and b/CN/modules/ROOT/images/media/image32.png differ diff --git a/CN/modules/ROOT/images/media/image33.png b/CN/modules/ROOT/images/media/image33.png new file mode 100644 index 0000000..37352d1 Binary files /dev/null and b/CN/modules/ROOT/images/media/image33.png differ diff --git a/CN/modules/ROOT/images/media/image34.png b/CN/modules/ROOT/images/media/image34.png new file mode 100644 index 0000000..f8dabee Binary files /dev/null and b/CN/modules/ROOT/images/media/image34.png differ diff --git a/CN/modules/ROOT/images/media/image35.png b/CN/modules/ROOT/images/media/image35.png new file mode 100644 index 0000000..b2c7f4a Binary files /dev/null and b/CN/modules/ROOT/images/media/image35.png differ diff --git a/CN/modules/ROOT/images/media/image36.jpeg b/CN/modules/ROOT/images/media/image36.jpeg new file mode 100644 index 0000000..4b2e013 Binary files /dev/null and b/CN/modules/ROOT/images/media/image36.jpeg differ diff --git a/CN/modules/ROOT/images/media/image37.png b/CN/modules/ROOT/images/media/image37.png new file mode 100644 index 0000000..1c7a8a0 Binary files /dev/null and b/CN/modules/ROOT/images/media/image37.png differ diff --git a/CN/modules/ROOT/images/media/image38.jpeg b/CN/modules/ROOT/images/media/image38.jpeg new file mode 100644 index 0000000..a376a08 Binary files /dev/null and b/CN/modules/ROOT/images/media/image38.jpeg differ diff --git a/CN/modules/ROOT/images/media/image39.png b/CN/modules/ROOT/images/media/image39.png new file mode 100644 index 0000000..6c209b2 Binary files /dev/null and b/CN/modules/ROOT/images/media/image39.png differ diff --git a/CN/modules/ROOT/images/media/image4.png b/CN/modules/ROOT/images/media/image4.png new file mode 100644 index 0000000..545b07c Binary files /dev/null and b/CN/modules/ROOT/images/media/image4.png differ diff --git a/CN/modules/ROOT/images/media/image40.png b/CN/modules/ROOT/images/media/image40.png new file mode 100644 
index 0000000..ca5858c Binary files /dev/null and b/CN/modules/ROOT/images/media/image40.png differ diff --git a/CN/modules/ROOT/images/media/image41.png b/CN/modules/ROOT/images/media/image41.png new file mode 100644 index 0000000..ebabdbb Binary files /dev/null and b/CN/modules/ROOT/images/media/image41.png differ diff --git a/CN/modules/ROOT/images/media/image42.png b/CN/modules/ROOT/images/media/image42.png new file mode 100644 index 0000000..4e1a44f Binary files /dev/null and b/CN/modules/ROOT/images/media/image42.png differ diff --git a/CN/modules/ROOT/images/media/image43.png b/CN/modules/ROOT/images/media/image43.png new file mode 100644 index 0000000..aebc64d Binary files /dev/null and b/CN/modules/ROOT/images/media/image43.png differ diff --git a/CN/modules/ROOT/images/media/image44.png b/CN/modules/ROOT/images/media/image44.png new file mode 100644 index 0000000..80d2d14 Binary files /dev/null and b/CN/modules/ROOT/images/media/image44.png differ diff --git a/CN/modules/ROOT/images/media/image45.jpeg b/CN/modules/ROOT/images/media/image45.jpeg new file mode 100644 index 0000000..53bd76d Binary files /dev/null and b/CN/modules/ROOT/images/media/image45.jpeg differ diff --git a/CN/modules/ROOT/images/media/image46.png b/CN/modules/ROOT/images/media/image46.png new file mode 100644 index 0000000..8c990ed Binary files /dev/null and b/CN/modules/ROOT/images/media/image46.png differ diff --git a/CN/modules/ROOT/images/media/image47.png b/CN/modules/ROOT/images/media/image47.png new file mode 100644 index 0000000..4b3354c Binary files /dev/null and b/CN/modules/ROOT/images/media/image47.png differ diff --git a/CN/modules/ROOT/images/media/image5.png b/CN/modules/ROOT/images/media/image5.png new file mode 100644 index 0000000..6aaf302 Binary files /dev/null and b/CN/modules/ROOT/images/media/image5.png differ diff --git a/CN/modules/ROOT/images/media/image6.png b/CN/modules/ROOT/images/media/image6.png new file mode 100644 index 0000000..7f11ad1 Binary files /dev/null and b/CN/modules/ROOT/images/media/image6.png differ diff --git a/CN/modules/ROOT/images/media/image7.png b/CN/modules/ROOT/images/media/image7.png new file mode 100644 index 0000000..10641ea Binary files /dev/null and b/CN/modules/ROOT/images/media/image7.png differ diff --git a/CN/modules/ROOT/images/media/image8.png b/CN/modules/ROOT/images/media/image8.png new file mode 100644 index 0000000..0038af2 Binary files /dev/null and b/CN/modules/ROOT/images/media/image8.png differ diff --git a/CN/modules/ROOT/images/media/image9.png b/CN/modules/ROOT/images/media/image9.png new file mode 100644 index 0000000..b779b1e Binary files /dev/null and b/CN/modules/ROOT/images/media/image9.png differ diff --git a/CN/modules/ROOT/nav.adoc b/CN/modules/ROOT/nav.adoc index a7b55a8..deb089a 100644 --- a/CN/modules/ROOT/nav.adoc +++ b/CN/modules/ROOT/nav.adoc @@ -9,9 +9,17 @@ ** IvorySQL高级 *** xref:master/4.1.adoc[安装指南] *** xref:master/4.2.adoc[集群搭建] +*** xref:master/4.5.adoc[迁移指南] *** xref:master/4.3.adoc[开发者指南] +*** 容器化指南 +**** xref:master/4.6.1.adoc[K8S部署] +**** xref:master/4.6.2.adoc[Operator部署] +**** xref:master/4.6.4.adoc[Docker & Podman部署] +**** xref:master/4.6.3.adoc[Docker Swarm & Docker Compose部署] *** xref:master/4.4.adoc[运维管理指南] -*** xref:master/4.5.adoc[迁移指南] +*** 云服务平台指南 +**** xref:master/4.7.1.adoc[IvorySQL Cloud安装] +**** xref:master/4.7.2.adoc[IvorySQL Cloud使用] ** IvorySQL生态 *** xref:master/cpu_arch_adp.adoc[芯片架构适配] *** xref:master/os_arch_adp.adoc[操作系统适配] @@ -31,6 +39,9 @@ *** 查询处理 **** xref:master/6.1.1.adoc[双parser] *** 
兼容框架 +**** xref:master/7.1.adoc[框架设计] +**** xref:master/7.2.adoc[GUC框架] +**** xref:master/7.4.adoc[双模式设计] **** xref:master/6.2.1.adoc[initdb过程] *** 兼容特性 **** xref:master/6.3.1.adoc[like] @@ -50,28 +61,25 @@ **** xref:master/6.4.2.adoc[userenv] *** xref:master/6.5.adoc[国标GB18030] ** Oracle兼容功能列表 -*** xref:master/7.1.adoc[1、框架设计] -*** xref:master/7.2.adoc[2、GUC框架] -*** xref:master/7.3.adoc[3、大小写转换] -*** xref:master/7.4.adoc[4、双模式设计] -*** xref:master/7.5.adoc[5、兼容Oracle like] -*** xref:master/7.6.adoc[6、兼容Oracle匿名块] -*** xref:master/7.7.adoc[7、兼容Oracle函数与存储过程] -*** xref:master/7.8.adoc[8、内置数据类型与内置函数] -*** xref:master/7.9.adoc[9、新增Oracle兼容模式的端口与IP] -*** xref:master/7.10.adoc[10、XML函数] -*** xref:master/7.11.adoc[11、兼容Oracle sequence] -*** xref:master/7.12.adoc[12、包] -*** xref:master/7.13.adoc[13、不可见列] -*** xref:master/7.14.adoc[14、RowID] -*** xref:master/7.15.adoc[15、OUT 参数] -*** xref:master/7.16.adoc[16、%TYPE、%ROWTYPE] -*** xref:master/7.17.adoc[17、NLS 参数] -*** xref:master/7.18.adoc[18、Force View] -*** xref:master/7.19.adoc[19、嵌套子函数] -*** xref:master/7.20.adoc[20、sys_guid 函数] -*** xref:master/7.21.adoc[21、空字符串转null] -*** xref:master/7.22.adoc[22、CALL INTO] +*** xref:master/7.3.adoc[1、大小写转换] +*** xref:master/7.5.adoc[2、LIKE操作符] +*** xref:master/7.6.adoc[3、匿名块] +*** xref:master/7.7.adoc[4、函数与存储过程] +*** xref:master/7.8.adoc[5、内置数据类型与内置函数] +*** xref:master/7.9.adoc[6、端口与IP] +*** xref:master/7.10.adoc[7、XML函数] +*** xref:master/7.11.adoc[8、sequence] +*** xref:master/7.12.adoc[9、包] +*** xref:master/7.13.adoc[10、不可见列] +*** xref:master/7.14.adoc[11、RowID] +*** xref:master/7.15.adoc[12、OUT 参数] +*** xref:master/7.16.adoc[13、%TYPE、%ROWTYPE] +*** xref:master/7.17.adoc[14、NLS 参数] +*** xref:master/7.18.adoc[15、Force View] +*** xref:master/7.19.adoc[16、嵌套子函数] +*** xref:master/7.20.adoc[17、sys_guid 函数] +*** xref:master/7.21.adoc[18、空字符串转null] +*** xref:master/7.22.adoc[19、CALL INTO] ** IvorySQL贡献指南 *** xref:master/8.1.adoc[社区贡献指南] *** xref:master/8.2.adoc[asciidoc语法快速参考] diff --git a/CN/modules/ROOT/pages/master/1.adoc b/CN/modules/ROOT/pages/master/1.adoc index e3bc995..4ef31d6 100644 --- a/CN/modules/ROOT/pages/master/1.adoc +++ b/CN/modules/ROOT/pages/master/1.adoc @@ -4,78 +4,201 @@ == 版本概览 -[**发行日期:2025年06月04日**] +[*发布日期:2025 年 11 月 25 日*] +IvorySQL 5.0 基于 PostgreSQL 18.0,带来更强的 Oracle 兼容能力、PL/iSQL 增强以及全新的全球化特性,同时对打包、自动化和工具链进行全面更新。 +有关完整更新列表,请访问我们的 https://docs.ivorysql.org/[文档站点]。 -IvorySQL 4.5,基于PostgreSQL 17.5,并修复了多个问题。有关更新的完整列表,请访问我们的 https://docs.ivorysql.org/[文档网站] 。 +== 增强内容 -== 增强功能及问题修复 +- PostgreSQL 18.0 -- PostgreSQL 17.5 +1. 新增异步 I/O(AIO)子系统,可提升顺序扫描、位图堆扫描、vacuum 等操作的性能。 +2. pg_upgrade 现在会保留优化器统计信息。 +3. 支持 "skip scan" 查找,使多列 B-tree 索引能够在更多场景下使用。 +4. 提供用于生成按时间排序 UUID 的 uuidv7() 函数。 +5. 支持虚拟生成列(在读取时计算值),并将其设为生成列的默认模式。 +6. 增加 OAuth 认证能力。 +7. 在 INSERT、UPDATE、DELETE 和 MERGE 的 RETURNING 子句中支持 OLD 和 NEW。 +8. 对 PRIMARY KEY、UNIQUE 与 FOREIGN KEY 引入时间区间约束。 -1. 修复了在检查声明为 GB18030 编码的无效字符串时,可能发生的一字节缓冲区超读(one-byte buffer overread)问题,增强了系统处理无效编码数据的稳健性。 -2. 确保对分区表上存在的自引用外键(self-referential foreign keys)进行正确处理,提升了复杂数据结构下分区表的可靠性。 -3. 避免了在 brin_bloom_union() 函数中合并已压缩的 BRIN 摘要(summaries)时,可能发生的数据丢失风险,保障了索引数据的准确性。 -4. 修正了在嵌套 WITH 子句中的 INSERT/UPDATE/DELETE/MERGE 命令所附带的 WITH 子句内,对外部公共表表达式(CTE)名称引用时的处理逻辑,确保了复杂查询的正确执行。 -5. 修复了 ALTER TABLE ADD COLUMN 命令,以确保在添加列时,能够正确处理包含默认值的域(domain)类型,提高了表结构变更操作的准确性 +更多细节请参阅 https://www.postgresql.org/docs/release/18.0/[PostgreSQL 18.0 发布说明]。 -+ +== 新特性 +=== 新增21 项 Oracle 兼容能力 -更多细节, 请参阅 https://www.postgresql.org/docs/release/17.5/[PostgreSQL发布说明]. 
+- Oracle 兼容 ROWID:Feature https://github.com/IvorySQL/IvorySQL/issues/126[#126] + + 让 IvorySQL 行标识符与 Oracle 语义保持一致,便于跨数据库工具协同。 -- IvorySQL 4.5 +- PL/iSQL CALL 调用语法:Feature https://github.com/IvorySQL/IvorySQL/issues/764[#764] + + 新增 Oracle 风格的 `CALL` 入口,实现存储过程一致的调用体验。 -1. MIPS 全平台打包支持:特性 https://github.com/IvorySQL/IvorySQL/issues/736[#736] -+ -为 MIPS 架构提供多平台介质包,支持国内外主流操作系统,包括 Red Hat、Debian、麒麟、UOS、凝思等。 +- PL/iSQL `%ROWTYPE`:Feature https://github.com/IvorySQL/IvorySQL/issues/765[#765] + + 允许变量复用整张表或游标行的结构,便于紧凑编写 PL/iSQL。 -2. 新增IvorySQL 在线体验平台:特性 https://github.com/IvorySQL/ivorysql-wasm/issues/1[#1] -+ -提供一个基于 Web 的平台,用户可直接通过浏览器界面在线体验 IvorySQL V4.5 并进行数据库交互。 +- PL/iSQL `%TYPE`:Feature https://github.com/IvorySQL/IvorySQL/issues/766[#766] + + 支持变量继承既有列或变量的类型,降低类型漂移风险。 -3. 新增社区行为准则:特性 https://github.com/IvorySQL/IvorySQL/issues/808[#808] -+ -为社区参与者明确了行为规范和期望,旨在营造一个友好且互相尊重的社区环境。 +- 区分大小写兼容开关:Feature https://github.com/IvorySQL/IvorySQL/issues/767[#767] + + 在需要时可保留标识符大小写,以匹配 Oracle 行为。 -4. 更新社区贡献指南:特性 https://github.com/IvorySQL/ivorysql_docs/pull/121[#121] -+ -对社区贡献流程、规范和最佳实践进行了修订与完善,方便贡献者参与。 +- NLS 参数兼容性:Feature https://github.com/IvorySQL/IvorySQL/issues/768[#768] + + 支持 `NLS_DATE_FORMAT`、`NLS_TIMESTAMP_FORMAT` 等 Oracle 风格 NLS 设置。 -5. 实现文档构建与网站更新自动化:特性 https://github.com/IvorySQL/ivorysql_docs/issues/115[#115] -+ -通过 Pull Request (PR) 自动触发文档构建及官方网站内容更新流程。 +- 空字符串转 NULL:Feature https://github.com/IvorySQL/IvorySQL/issues/769[#769] + + 将长度为零的字符串转换为 NULL,以遵循 Oracle 的兼容规则。 -6. 改进贡献者工作流程,通过 /assign 命令自我分配任务:特性 https://github.com/IvorySQL/ivorysql_docs/issues/109[#109] +- 解析器切换能力:Feature https://github.com/IvorySQL/IvorySQL/issues/770[#770] + + 可在 Oracle 与 PostgreSQL 解析器之间切换,实现会话级灵活性。 -7. IvorySQL Operator V4 适配 IvorySQL 4.5:特性 https://github.com/IvorySQL/ivory-operator/pull/79[#79] +- GB18030 数据库编码:Feature https://github.com/IvorySQL/IvorySQL/issues/771[#771] + + 为中国市场提供 GB18030 初始化和创建数据库选项。 -== 源代码 +- Oracle 兼容 `SYS_GUID`:Feature https://github.com/IvorySQL/IvorySQL/issues/773[#773] + + 实现 Oracle `SYS_GUID` 函数,生成基于 RAW 的 GUID。 -IvorySQL主要包含2个代码仓库: +- Oracle 兼容 `SYS_CONTEXT`:Feature https://github.com/IvorySQL/IvorySQL/issues/774[#774] + + 提供 Oracle `SYS_CONTEXT` API,用于查询会话与环境元数据。 -* IvorySQL数据库源码: https://github.com/IvorySQL/IvorySQL -* IvorySQL官方网站: https://github.com/IvorySQL/Ivory-www +- Oracle 兼容 `USERENV`:Feature https://github.com/IvorySQL/IvorySQL/issues/775[#775] + + 引入 `USERENV` 函数,使会话可检查 Oracle 风格的上下文信息。 -== 贡献人员 -以下个人(按姓氏排序)作为补丁作者、提交者、审查者、测试者或问题报告者为此版本做出了贡献。 +- Oracle 兼容函数语法:Feature https://github.com/IvorySQL/IvorySQL/issues/776[#776] + + 支持 EDITIONABLE/NONEDITIONABLE、`RETURN`、`IS`、`OUT ... 
NOCOPY` 等 Oracle 结构。 -- Cary Huang -- Denis Lussier -- Flyingbeecd -- Grant Zhou -- 高雪玉 -- 矫顺田 -- 纪虎林 -- 梁翔宇 -- 吕新杰 -- 牛世继 -- 潘振浩 -- 石卓妍 -- 隋戈 -- 陶郑 -- 王康 -- 王守波 -- 杨世华 -- 严少安 -- 赵法威 -- 邹仁利 \ No newline at end of file +- Oracle 兼容过程语法:Feature https://github.com/IvorySQL/IvorySQL/issues/777[#777] + + 支持包含 Oracle 选项的过程 DDL、EXEC 调用以及 ALTER PROCEDURE。 + +- libpq OUT 参数传递:Feature https://github.com/IvorySQL/IvorySQL/issues/778[#778] + + 扩展客户端协议,使 OUT 参数可像 OCI 一样被消费。 + +- 过程 OUT 参数:Feature https://github.com/IvorySQL/IvorySQL/issues/779[#779] + + 存储过程现在可按 Oracle 约定声明 IN、OUT、IN OUT 模式。 + +- 函数 OUT 参数:Feature https://github.com/IvorySQL/IvorySQL/issues/780[#780] + + 函数支持 Oracle 风格 OUT(含 IN OUT)参数。 + +- 嵌套子程序:Feature https://github.com/IvorySQL/IvorySQL/issues/781[#781] + + 允许在子程序内部定义函数或过程,并支持重载。 + +- Oracle 兼容 `INSTR`:Feature https://github.com/IvorySQL/IvorySQL/issues/782[#782] + + 与 Oracle `INSTR` 行为保持一致,覆盖子串搜索与位置判断。 + +- Oracle 兼容 FORCE VIEW:Feature https://github.com/IvorySQL/IvorySQL/issues/783[#783] + + 允许在引用对象尚未存在时创建视图,重现 Oracle FORCE 选项。 + +- Oracle 兼容 LIKE 运算符:Feature https://github.com/IvorySQL/IvorySQL/issues/784[#784] + + 对齐 Oracle 的通配符语义,确保匹配行为可预测。 + +=== 在线体验与多平台发行包 + +- 在线体验:IvorySQL v5.0:Feature https://github.com/IvorySQL/IvorySQL/issues/887[#887] + + 上线交互式浏览器环境,用户可实时体验与评估 IvorySQL v5.0,无需安装。 + +- 全平台打包:Feature https://github.com/IvorySQL/IvorySQL/issues/949[#949] + + 为 X86、ARM、MIPS、龙芯架构等平台提供多架构安装介质。 + +=== 云原生与容器化 + +- 容器化部署支持(Docker Compose & Docker Swarm): + 支持在 Docker Swarm 与 Docker Compose 中部署单实例数据库与高可用集群。 + +- 容器化部署支持(Kubernetes 基础版): + 使用 Helm 在 Kubernetes(K8S)中部署单实例数据库与高可用集群。 + +- 发布 IvorySQL Operator v5(Kubernetes 进阶版): + Operator v5 适配 IvorySQL v5.0,并同步升级系统组件版本与数据库扩展版本。 + +- 发布 IvorySQL Cloud v5(统一全生命周期与可视化控制平面): + 提供可视化托管控制平面,覆盖订阅、全生命周期编排以及生态集成。 + +=== 新增 10 个 PostgreSQL 扩展 + +- pg_cron:Feature https://github.com/IvorySQL/IvorySQL/issues/882[#882] + + 通过 pg_cron 集成在数据库层内执行计划任务。 + +- pgAudit:Feature https://github.com/IvorySQL/IvorySQL/issues/929[#929] + + 借助 PostgreSQL 标准日志能力输出详尽的会话 / 对象审计记录。 + +- PostGIS:Feature https://github.com/IvorySQL/IvorySQL/issues/880[#880] + + 提供空间数据处理与地理分析能力。 + +- pgRouting:Feature https://github.com/IvorySQL/IvorySQL/issues/881[#881] + + 引入网络与路径分析能力。 + +- PGroonga:Feature https://github.com/IvorySQL/IvorySQL/issues/879[#879] + + 增强全文检索。 + +- ddlx:Feature https://github.com/IvorySQL/IvorySQL/issues/877[#877] + + 支持 ddlx,便于高级模式洞察与自动化 DDL 生成。 + +- pgsql-http:Feature https://github.com/IvorySQL/IvorySQL/issues/883[#883] + + 允许数据库内部发起 HTTP/HTTPS 请求,实现与外部 Web 服务的无缝通信。 + +- system_stats:Feature https://github.com/IvorySQL/IvorySQL/issues/946[#946] + + 通过 system_stats 提供系统级统计信息。 + +- plpgsql_check:Feature https://github.com/IvorySQL/IvorySQL/issues/915[#915] + + 在运行前对 PL/pgSQL 函数做静态分析,定位错误、警告与潜在问题。 + +- pgvector:Feature https://github.com/IvorySQL/IvorySQL/issues/878[#878] + + 融合 pgvector,为 AI/ML 工作负载提供原生向量相似度检索。 + +== 缺陷修复 + +- 修复 `unused_oids` 与 `duplicate_oids` 目录工具,使头文件扫描能准确检测冲突且无误报:Issue https://github.com/IvorySQL/IvorySQL/issues/841[#841] +- 为 `libpq/ivytest` 产物新增 `.gitignore`,避免生成的二进制与日志污染开发树:Issue https://github.com/IvorySQL/IvorySQL/issues/843[#843] +- 扩展 GitHub 工作流回归测试,覆盖 `--with-libnuma` 配置,防止未来在启用 NUMA 的主机上出问题:Issue https://github.com/IvorySQL/IvorySQL/issues/869[#869] +- 让 `psql` 用户可以通过 `\h create package` 获取 CREATE PACKAGE 语法帮助,补齐 PL/iSQL 包的 CLI 文档:Issue https://github.com/IvorySQL/IvorySQL/issues/936[#936] +- 排除 MainLoop 悬空指针引发的并发压力下间歇性段错误:Issue https://github.com/IvorySQL/IvorySQL/issues/898[#898] +- 修复 `oracle_test/modules/*/sql` 
的测试框架假设,让 Oracle 兼容测试套件再次端到端运行:Issue https://github.com/IvorySQL/IvorySQL/issues/897[#897] +- 更新 `README.md` 与 `README_CN.md`,同步 IvorySQL v5 特性、打包与快速上手信息:Issue https://github.com/IvorySQL/IvorySQL/issues/896[#896] +- 更正全局唯一索引的强制机制,使相关回归测试在所有支持平台上稳定通过:Issue https://github.com/IvorySQL/IvorySQL/issues/894[#894] + +== 源码仓库 + +IvorySQL 的主要代码仓库: + +- IvorySQL 数据库源码:https://github.com/IvorySQL/IvorySQL +- IvorySQL 官网:https://github.com/IvorySQL/Ivory-www +- IvorySQL 文档:https://github.com/IvorySQL/IvorySQL-docs +- IvorySQL Docker:https://github.com/IvorySQL/docker_library + +== 贡献者名单 +以下人员(按字母顺序)作为补丁作者、提交者、审阅者、测试者或问题报告者,为本次发布做出了贡献。 + +* ccwxl +* Cédric Villemain +* elodiefb +* Grant Zhou +* Imran Zaheer +* luss +* Martin Gerhardy +* msdnchina +* omstack +* otegami +* rophy +* shlei6067 +* sjw1933 +* Yasir Hussain Shah +* 初少林 +* 崇鹏豪 +* 高雪玉 +* 矫顺田 +* 类延良 +* 李苑 +* 梁翔宇 +* 刘晓辉 +* 吕新杰 +* 牛世继 +* 彭冲 +* 潘振浩 +* 石卓妍 +* 隋戈 +* 陶郑 +* 童水森 +* 王硕 +* 薛晓刚 +* 严少安 +* 杨世华 +* 赵法威 \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/100.adoc b/CN/modules/ROOT/pages/master/100.adoc index 8636e48..8152d63 100644 --- a/CN/modules/ROOT/pages/master/100.adoc +++ b/CN/modules/ROOT/pages/master/100.adoc @@ -2083,7 +2083,7 @@ h| 参数名称 h| max_connections | 默认值 | 100 | 取值范围 | 1到262143 | 参数单位 | -| 参数含义 | 本参数值指定了PostgreSQL数据库的最大连接数。本参数只能在PostgreSQL启动时设置。在流复制备库上,必须将本参数值设置为与主库相同或者比主库参数值大,否则,备用服务器将不允许查询操作。 +| 参数含义 | 本参数值指定了PostgreSQL数据库的最大连接数。本参数只能在PostgreSQL启动时设置。在流复制备库上,必须将本参数值设置为与主库相同或者比主库参数值大,否则,后备服务器将不允许查询操作。 | 是否可session级修改 | 否 | 修改后何时生效 | 重启PG instance生效 @@ -2442,7 +2442,7 @@ h| 参数名称 h| max_standby_streaming_delay | 默认值 | 30000 | 取值范围 | -1到2147483647,-1表示允许standby server一直在等待直到冲突的query执行完毕。 | 参数单位 | 毫秒 -| 参数含义 | 本参数为备库参数,本参数在sending-server端会被忽略。当hot standby在被启用的状态下,本参数决定了standby server在取消掉standby中运行的且与WAL日志应用有冲突的查询语句之前的等待时间。本参数适用于wal data通过流复制被接收的情况。当不指定单位时,本参数的单位是毫秒。本参数仅能在postgresql.conf文件或者server command line中设置。本参数值与查询语句在取消之前可以运行的最长时间不同。相反,本参数值是从主服务器接收到WAL数据后允许应用该数据的最长总时间,因此,如果一个查询导致了显著的延迟,那么在备用服务器再次赶上之前,后续冲突查询的容忍时间(宽限时间)将少得多 +| 参数含义 | 本参数为备库参数,本参数在sending-server端会被忽略。当hot standby在被启用的状态下,本参数决定了standby server在取消掉standby中运行的且与WAL日志应用有冲突的查询语句之前的等待时间。本参数适用于wal data通过流复制被接收的情况。当不指定单位时,本参数的单位是毫秒。本参数仅能在postgresql.conf文件或者server command line中设置。本参数值与查询语句在取消之前可以运行的最长时间不同。相反,本参数值是从主服务器接收到WAL数据后允许应用该数据的最长总时间,因此,如果一个查询导致了显著的延迟,那么在后备服务器再次赶上之前,后续冲突查询的容忍时间(宽限时间)将少得多 | 是否可session级修改 | 否 | 修改后何时生效 | Reload即可生效 diff --git a/CN/modules/ROOT/pages/master/2.adoc b/CN/modules/ROOT/pages/master/2.adoc index aa12fc0..5688a4c 100644 --- a/CN/modules/ROOT/pages/master/2.adoc +++ b/CN/modules/ROOT/pages/master/2.adoc @@ -41,7 +41,7 @@ IvorySQL基于PostgreSQL,具有完整的SQL、坚如磐石的可靠性和庞 === 核心应用场景 -Ivory数据库的主要应用场景: +IvorySQL数据库的主要应用场景: * 企业数据库 @@ -69,16 +69,22 @@ IvorySQL是一个功能强大的开源对象关系数据库管理系统(ORDBMS) == 与Oracle的兼容性 -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/14[ivorysql框架设计] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/15[GUC框架] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/16[大小写转换] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/17[双模式设计] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/18[兼容Oracle like] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/19[兼容Oracle匿名块] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/20[兼容Oracle函数与存储过程] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/21[内置数据类型与内置函数] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/22[新增Oracle兼容模式的端口与IP] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/26[XML函数] -* 
https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/27[兼容Oracle sequence] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/28[包] -* https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/29[不可见列] \ No newline at end of file +* 大小写转换 +* LIKE操作符 +* 匿名块 +* 函数与存储过程 +* 内置数据类型与内置函数 +* 端口与IP +* XML函数 +* sequence +* 包 +* 不可见列 +* RowID +* OUT 参数 +* %TYPE、%ROWTYPE +* NLS 参数 +* Force View +* 嵌套子函数 +* sys_guid 函数 +* 空字符串转null +* CALL INTO \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/23.adoc b/CN/modules/ROOT/pages/master/23.adoc index ec3f938..908c195 100644 --- a/CN/modules/ROOT/pages/master/23.adoc +++ b/CN/modules/ROOT/pages/master/23.adoc @@ -29,7 +29,7 @@ IvorySQL由一个核心开发团队维护,该团队拥有对GitHub上的IvoryS == **贡献者指南** -在贡献之前,我们需要了解下IvorySQL目前的版本以及文档的版本。目前,我们维护着4.5等版本,我们的版本紧跟PG的更新步伐,贡献之前请更新至最新版本。之后我们需要细心浏览一下贡献的样式风格,熟悉代码贡献风格、提Issue样式、拉取PR标题样式、代码注释样式、文档贡献样式、文章贡献样式,这可以帮助您尽快成为IvorySQL的贡献者奥~。 +在贡献之前,我们需要了解下IvorySQL目前的版本以及文档的版本。目前,我们维护着5.0等版本,我们的版本紧跟PG的更新步伐,贡献之前请更新至最新版本。之后我们需要细心浏览一下贡献的样式风格,熟悉代码贡献风格、提Issue样式、拉取PR标题样式、代码注释样式、文档贡献样式、文章贡献样式,这可以帮助您尽快成为IvorySQL的贡献者奥~。 === 贡献前的准备 @@ -473,7 +473,7 @@ Some more text # Another top-level heading ``` -正确释放 +正确示范 ``` # Title @@ -532,7 +532,7 @@ Some text here Some more text here ``` -正确释放: +正确示范: ``` Some text here diff --git a/CN/modules/ROOT/pages/master/3.1.adoc b/CN/modules/ROOT/pages/master/3.1.adoc index 2956fd5..3ae85bc 100644 --- a/CN/modules/ROOT/pages/master/3.1.adoc +++ b/CN/modules/ROOT/pages/master/3.1.adoc @@ -44,24 +44,24 @@ https://www.ionos.com/help/server-cloud-infrastructure/server-administration/cre 创建或编辑IvorySQL yum源配置文件/etc/yum.repos.d/ivorysql.repo ``` vim /etc/yum.repos.d/ivorysql.repo -[ivorysql4] -name=IvorySQL Server 4 $releasever - $basearch -baseurl=https://yum.highgo.com/dists/ivorysql-rpms/4/redhat/rhel-$releasever-$basearch +[ivorysql5] +name=IvorySQL Server 5 $releasever - $basearch +baseurl=https://yum.highgo.com/dists/ivorysql-rpms/5/redhat/rhel-$releasever-$basearch enabled=1 gpgcheck=0 ``` -保存退出后,安装IvorySQL4 +保存退出后,安装IvorySQL5 ``` -$ sudo dnf install -y IvorySQL-4.5 +$ sudo dnf install -y ivorysql5-5.0 ``` .... - 正确安装后,数据库将被安装在/opt/IvorySQL-4.5/路径下的IvorySQL-version(如:IvorySQL-4.5)文件夹内 + 正确安装后,数据库将被安装在/usr/ivory-5/路径下的IvorySQL-version(如:IvorySQL-5.0)文件夹内 .... 执行以下命令为ivorysql用户赋权: ``` -$ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5 +$ sudo chown -R ivorysql:ivorysql /usr/ivory-5 ``` [[配置环境变量]] ** 配置环境变量 @@ -70,9 +70,9 @@ $ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5 将以下配置写入~/.bash_profile文件并使用source命令该文件使环境变量生效: ``` -PATH=/opt/IvorySQL-4.5/bin:$PATH +PATH=/usr/ivory-5/bin:$PATH export PATH -PGDATA=/opt/IvorySQL-4.5/data +PGDATA=/usr/ivory-5/data export PGDATA ``` ``` @@ -82,7 +82,7 @@ $ source ~/.bash_profile ** 数据库初始化 ``` -$ initdb -D /opt/IvorySQL-4.5/data +$ initdb -D /usr/ivory-5/data ``` .... 其中-D参数用来指定数据库的数据目录。更多参数使用方法,请使用initdb --help命令获取。 @@ -91,16 +91,16 @@ $ initdb -D /opt/IvorySQL-4.5/data ** 启动数据库服务 ``` -$ pg_ctl -D /opt/IvorySQL-4.5/data -l ivory.log start +$ pg_ctl -D /usr/ivory-5/data -l ivory.log start ``` -其中-D参数用来指定数据库的数据目录,如果<<配置环境变量>> 配置了PGDATA,则该参数可以省略。-l参数用来指定日志目录。更多参数使用方法,请使用pg_ctl --help命令获取。 +其中-D参数用来指定数据库的数据目录,如果<<配置环境变量>> 配置了PGDATA,则该参数可以省略。-l参数用来指定日志文件。更多参数使用方法,请使用pg_ctl --help命令获取。 查看确认数据库启动成功: ``` $ ps -ef | grep postgres -ivorysql 3214 1 0 20:35 ? 00:00:00 /opt/IvorySQL-4.5/bin/postgres -D /opt/IvorySQL-4.5/data +ivorysql 3214 1 0 20:35 ? 00:00:00 /usr/ivory-5/bin/postgres -D /usr/ivory-5/data ivorysql 3215 3214 0 20:35 ? 
00:00:00 postgres: checkpointer ivorysql 3216 3214 0 20:35 ? 00:00:00 postgres: background writer ivorysql 3218 3214 0 20:35 ? 00:00:00 postgres: walwriter @@ -113,19 +113,19 @@ ivorysql 3238 1551 0 20:35 pts/0 00:00:00 grep --color=auto postgres ** 从Docker Hub上获取IvorySQL镜像 ``` -$ docker pull ivorysql/ivorysql:4.5-ubi8 +$ docker pull ivorysql/ivorysql:5.0-ubi8 ``` ** 运行IvorySQL ``` -$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:4.5-ubi8 +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 ``` ** 查看IvorySQL容器运行是否成功 ``` $ docker ps | grep ivorysql CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -6faa2d0ed705 ivorysql:4.5-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql +6faa2d0ed705 ivorysql:5.0-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql ``` == 数据库连接 @@ -133,7 +133,7 @@ CONTAINER ID IMAGE COMMAND CREATED ST psql连接数据库: ``` $ psql -d -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# diff --git a/CN/modules/ROOT/pages/master/3.2.adoc b/CN/modules/ROOT/pages/master/3.2.adoc index beaf288..2ff6904 100644 --- a/CN/modules/ROOT/pages/master/3.2.adoc +++ b/CN/modules/ROOT/pages/master/3.2.adoc @@ -49,7 +49,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser .提示 **** -Solaris需要特别的处理。你必需使用`/usr/ucb/ps`而不是`/bin/ps`。 你还必需使用两个`w`标志,而不是一个。另外,你对`postgres`命令的最初调用必须用一个比服务器进程提供的短的`ps`状态显示。如果你没有满足全部三个要求,每个服务器进程的`ps`输出将是原始的`postgres`命令行。 command line。 +Solaris需要特别的处理。你必需使用`/usr/ucb/ps`而不是`/bin/ps`。 你还必需使用两个`w`标志,而不是一个。另外,你对`postgres`命令的最初调用必须用一个比服务器进程提供的短的`ps`状态显示。如果你没有满足全部三个要求,每个服务器进程的`ps`输出将是原始的`postgres`命令行。 **** === 统计收集器 @@ -148,7 +148,7 @@ IvorySQL也支持报告有关系统正在干什么的 动态信息,例如当 | `pid` | `integer` | 这个后端的进程 ID | `leader_pid` | `integer` | 并行组组长的进程ID,如果该进程是并行查询工作者。如果该进程是一个并行组的组长或不参与并行查询,则为`NULL`。 | `usesysid` | `oid` | 登录到这个后端的用户的 OID -| `usename` | `name` | 登录到这个后端的用户的 OID +| `usename` | `name` | 登录到这个后端的用户的 名称 | `application_name` | `text` | 连接到这个后端的应用的名称 | `client_addr` | `inet` | 连接到这个后端的客户端的 IP 地址。如果这个字段为空,它表示客户端通过服务器机器上的一个 Unix 套接字连接或者这是一个内部进程,如自动清理。 | `client_hostname` | `text` | 已连接的客户端的主机名,由 `client_addr` 的反向 DNS 查找报告。 这个字段将只对 IP 连接非空,并且只有log_hostname被启用时才会非空。 @@ -461,33 +461,34 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i ==== `pg_stat_replication` -`pg_stat_replication`视图将在每个WAL发送方进程中包含一行,显示关于复制到发送方连接的备用服务器的统计信息。 只有直接连接的备用设备被列出;没有关于下游备用服务器的信息。 +`pg_stat_replication`视图将在每个WAL发送方进程中包含一行,显示关于复制到发送方连接的后备服务器的统计信息。 只有直接连接的备用设备被列出;没有关于下游后备服务器的信息。 **表14.`pg_stat_replication` 视图** |==== -| 列类型描述 -|`pid` `integer`一个 WAL 发送进程的进程 ID -| `usesysid` `oid`登录到这个 WAL 发送进程的用户的 OID -| `usename` `name`登录到这个 WAL 发送进程的用户的名称 -| `application_name` `text`连接到这个 WAL 发送进程的应用的名称 -| `client_addr` `inet`连接到这个 WAL 发送进程的客户端的 IP 地址。 如果这个域为空,它表示该客户端通过服务器机器上的一个Unix 套接字连接。 -| `client_hostname` `text`连接上的客户端的主机名,由一次对`client_addr`的逆向 DNS 查找报告。 这个域将只对 IP 连接非空,并且只有在 log_hostname被启用时非空。 -| `client_port` `integer`客户端用来与这个 WAL 发送进程通讯的 TCP 端口号,如果使用 Unix 套接字则为`-1` -| `backend_start` `timestamp with time zone`这个进程开始的时间,即客户端是何时连接到这个WAL 发送进程的。 -| `backend_xmin` `xid`由hot_standby_feedback报告的这个后备机的`xmin`水平线。 -| `state` `text`当前的 WAL 发送进程状态。 可能的值是:`startup`: 这个WAL发送器正在启动。`catchup`: 这个WAL发送者连接的备用服务器正在赶上主服务器。`streaming`: 在其连接的备用服务器赶上主服务器之后,这个WAL发送方正在流化变化。`backup`: 这个WAL发送器正在发送一个备份。`stopping`: 这个WAL发送器正在停止。 -| `sent_lsn` `pg_lsn`在这个连接上发送的最后一个预写式日志的位置 -| `write_lsn` 
`pg_lsn`被这个后备服务器写入到磁盘的最后一个预写式日志的位置 -| `flush_lsn` `pg_lsn`被这个后备服务器刷入到磁盘的最后一个预写式日志的位置 -| `replay_lsn` `pg_lsn`被重放到这个后备服务器上的数据库中的最后一个预写式日志的位置 -| `write_lag` `interval`从本地刷新近期的WAL与接收到此备用服务器已写入WAL的通知(但尚未刷新或应用它)之间的时间经过。 如果将此服务器配置为同步备用服务器,则可以使用此参数来衡量在提交时`synchronous_commit`级别`remote_write`所导致的延迟。 -| `flush_lag` `interval`在本地刷写近期的WAL与接收到后备服务器已经写入并且刷写它(但还没有应用)的通知之间流逝的时间。 如果这台服务器被配置为一个同步后备,这可以用来计量在提交时`synchronous_commit`的级别`on`所导致的延迟。 -| `replay_lag` `interval`在本地刷写近期的WAL与接收到后备服务器已经写入它、刷写它并且应用它的通知之间流逝的时间。 如果这台服务器被配置为一个同步后备,这可以用来计量在提交时`synchronous_commit`的级别`remote_apply`所导致的延迟。 -| `sync_priority` `integer`在基于优先的同步复制中,这台后备服务器被选为同步后备的优先级。在基于规定数量的同步复制中,这个值没有效果。 -| `sync_state` `text`这一台后备服务器的同步状态。 可能的值是:`async`: 这台后备服务器是异步的。`potential`: 这台后备服务器现在是异步的,但可能在当前的同步后备失效时变成同步的。`sync`: 这台后备服务器是同步的。`quorum`: 这台后备服务器被当做规定数量后备服务器的候选。 -| `reply_time` `带时区的时间戳`从备用服务器收到的最后一条回复信息的发送时间 +| 列 | 类型 | 描述 +| `pid` | `integer` | 一个 WAL 发送进程的进程 ID +| `usesysid` | `oid` | 登录到这个 WAL 发送进程的用户的 OID +| `usename` | `name` | 登录到这个 WAL 发送进程的用户的名称 +| `application_name` | `text` | 连接到这个 WAL 发送进程的应用的名称 +| `client_addr` | `inet` | 连接到这个 WAL 发送进程的客户端的 IP 地址。 如果这个域为空,它表示该客户端通过服务器机器上的一个Unix 套接字连接。 +| `client_hostname` | `text` | 连接上的客户端的主机名,由一次对`client_addr`的逆向 DNS 查找报告。 这个域将只对 IP 连接非空,并且只有在 log_hostname被启用时非空。 +| `client_port` | `integer` | 客户端用来与这个 WAL 发送进程通讯的 TCP 端口号,如果使用 Unix 套接字则为`-1` +| `backend_start` | `timestamp with time zone` | 这个进程开始的时间,即客户端是何时连接到这个WAL 发送进程的。 +| `backend_xmin` | `xid` | 由hot_standby_feedback报告的这个后备机的`xmin`水平线。 +| `state` | `text` | 当前的 WAL 发送进程状态。 可能的值是:`startup`: 这个WAL发送器正在启动。`catchup`: 这个WAL发送者连接的后备服务器正在赶上主服务器。`streaming`: 在其连接的后备服务器赶上主服务器之后,这个WAL发送方正在流化变化。`backup`: 这个WAL发送器正在发送一个备份。`stopping`: 这个WAL发送器正在停止。 +| `sent_lsn` | `pg_lsn` | 在这个连接上发送的最后一个预写式日志的位置 +| `write_lsn` | `pg_lsn` | 被这个后备服务器写入到磁盘的最后一个预写式日志的位置 +| `flush_lsn` | `pg_lsn` | 被这个后备服务器刷入到磁盘的最后一个预写式日志的位置 +| `replay_lsn` | `pg_lsn` | 被重放到这个后备服务器上的数据库中的最后一个预写式日志的位置 +| `write_lag` | `interval` | 从本地刷新近期的WAL与接收到此后备服务器已写入WAL的通知(但尚未刷新或应用它)之间的时间经过。 如果将此服务器配置为同步后备服务器,则可以使用此参数来衡量在提交时`synchronous_commit`级别`remote_write`所导致的延迟。 +| `flush_lag` | `interval` | 在本地刷写近期的WAL与接收到后备服务器已经写入并且刷写它(但还没有应用)的通知之间流逝的时间。 如果这台服务器被配置为一个同步后备,这可以用来计量在提交时`synchronous_commit`的级别`on`所导致的延迟。 +| `replay_lag` | `interval` | 在本地刷写近期的WAL与接收到后备服务器已经写入它、刷写它并且应用它的通知之间流逝的时间。 如果这台服务器被配置为一个同步后备,这可以用来计量在提交时`synchronous_commit`的级别`remote_apply`所导致的延迟。 +| `sync_priority` | `integer` | 在基于优先的同步复制中,这台后备服务器被选为同步后备的优先级。在基于规定数量的同步复制中,这个值没有效果。 +| `sync_state` | `text` | 这一台后备服务器的同步状态。 可能的值是:`async`: 这台后备服务器是异步的。`potential`: 这台后备服务器现在是异步的,但可能在当前的同步后备失效时变成同步的。`sync`: 这台后备服务器是同步的。`quorum`: 这台后备服务器被当做规定数量后备服务器的候选。 +| `reply_time` | `带时区的时间戳` | 从后备服务器收到的最后一条回复信息的发送时间 |==== + `pg_stat_replication`视图中报告的滞后时间近期的WAL被写入、刷写并且重放以及发送器知道这一切所花的时间的度量。如果远程服务器被配置为一台同步后备,这些时间表示由每一种同步提交级别所带来(或者是可能带来)的提交延迟。对于一台异步后备,`replay_lag`列是最近的事务变得对查询可见的延迟时间的近似值。如果后备服务器已经完全追上了发送服务器并且没有WAL活动,在短时间内将继续显示最近测到的滞后时间,再然后就会显示为NULL。 对于物理复制会自动测量滞后时间。逻辑解码插件可能会选择性地发出跟踪消息,如果它们没有这样做,跟踪机制将把滞后显示为NULL。 @@ -503,22 +504,22 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表15.`pg_stat_wal_receiver` 视图** |==== -|列类型描述 -|`pid` `integer`WAL接收器进程的进程ID -| `status` `text`WAL接收进程的活动状态 -| `receive_start_lsn` `pg_lsn`WAL接收器启动时使用的第一个写前日志位置 -| `receive_start_tli` `integer`WAL接收器启动时使用的第一个时间线数字 -| `written_lsn` `pg_lsn`已经接收并写入磁盘的最后一个预写式日志位置,但没有刷入。这不能用于数据完整性检查。 -| `flushed_lsn` `pg_lsn`已经接收并刷入到磁盘的最后一个预写式日志位置,该字段的初始值是启动WAL接收器时使用的第一个日志位置 -| `received_tli` `integer`接收并刷入到磁盘的最后一个预写式日志位置的时间线数字,该字段的初始值为启动WAL接收器时使用的第一个日志位置的时间线数字 -| 
`last_msg_send_time` `timestamp with time zone`从源头WAL发送器收到的最后一条信息的发送时间 -| `last_msg_receipt_time` `timestamp with time zone`从源头WAL发送器收到的最后一条信息的接收时间 -| `latest_end_lsn` `pg_lsn`向源头WAL发送器报告的最后的预写式日志位置 -| `latest_end_time` `timestamp with time zone`向源头WAL发送方报告的最后一次写前日志位置的时间 -| `slot_name` `text`这个WAL接收器使用的复制槽的名称 -| `sender_host` `text`这个WAL接收器连接到的IvorySQL实例的主机。 这可以是主机名、IP地址,或者目录路径,如果连接是通过Unix套接字进行的。(路径的情况可以区分,因为它总是以`/`开头的绝对路径。) -| `sender_port` `integer`这个WAL接收器连接的IvorySQL实例的端口号。 -| `conninfo` `text`这个WAL接收器使用的连接字符串,对安全敏感的字段进行了模糊处理。 +|列|类型|描述 +|pid|integer|WAL接收器进程的进程ID +|status|text|WAL接收进程的活动状态 +|receive_start_lsn|pg_lsn|WAL接收器启动时使用的第一个写前日志位置 +|receive_start_tli|integer|WAL接收器启动时使用的第一个时间线数字 +|written_lsn|pg_lsn|已经接收并写入磁盘的最后一个预写式日志位置,但没有刷入。这不能用于数据完整性检查。 +|flushed_lsn|pg_lsn|已经接收并刷入到磁盘的最后一个预写式日志位置,该字段的初始值是启动WAL接收器时使用的第一个日志位置 +|received_tli|integer|接收并刷入到磁盘的最后一个预写式日志位置的时间线数字,该字段的初始值为启动WAL接收器时使用的第一个日志位置的时间线数字 +|last_msg_send_time|timestamp with time zone|从源头WAL发送器收到的最后一条信息的发送时间 +|last_msg_receipt_time|timestamp with time zone|从源头WAL发送器收到的最后一条信息的接收时间 +|latest_end_lsn|pg_lsn|向源头WAL发送器报告的最后的预写式日志位置 +|latest_end_time|timestamp with time zone|向源头WAL发送方报告的最后一次写前日志位置的时间 +|slot_name|text|这个WAL接收器使用的复制槽的名称 +|sender_host|text|这个WAL接收器连接到的IvorySQL实例的主机。这可以是主机名、IP地址,或者目录路径,如果连接是通过Unix套接字进行的。(路径的情况可以区分,因为它总是以/开头的绝对路径。) +|sender_port|integer|这个WAL接收器连接的IvorySQL实例的端口号。 +|conninfo|text|这个WAL接收器使用的连接字符串,对安全敏感的字段进行了模糊处理。 |==== ==== `pg_stat_subscription` @@ -527,16 +528,16 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表16.`pg_stat_subscription` 视图** |==== -|列类型描述 -|`subid` `oid`订阅的OID -| `subname` `name`订阅的名称 -| `pid` `integer`订阅工作者进程的进程ID -| `relid` `oid`工作器正在同步的关系的OID;Null用于主应用工作器 -| `received_lsn` `pg_lsn`接收到的最后一个预写式日志位置,该字段的初始值为0 -| `last_msg_send_time` `timestamp with time zone`从WAL发送器收到的最后一条信息的发送时间 -| `last_msg_receipt_time` `timestamp with time zone`从WAL发送器收到的最后一条信息的接收时间 -| `latest_end_lsn` `pg_lsn`向WAL发送器报告的最后预写式日志位置 -| `latest_end_time` `timestamp with time zone`向WAL发送器报告的最后一次预写式日志位置的时间 +|列|类型|描述 +|subid|oid|订阅的OID +|subname|name|订阅的名称 +|pid|integer|订阅工作者进程的进程ID +|relid|oid|工作器正在同步的关系的OID;Null用于主应用工作器 +|received_lsn|pg_lsn|接收到的最后一个预写式日志位置,该字段的初始值为0 +|last_msg_send_time|timestamp with time zone|从WAL发送器收到的最后一条信息的发送时间 +|last_msg_receipt_time|timestamp with time zone|从WAL发送器收到的最后一条信息的接收时间 +|latest_end_lsn|pg_lsn|向WAL发送器报告的最后预写式日志位置 +|latest_end_time|timestamp with time zone|向WAL发送器报告的最后一次预写式日志位置的时间 |==== ==== `pg_stat_ssl` @@ -545,15 +546,16 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表17.`pg_stat_ssl` 视图** |==== -| `pid` `integer`后端或WAL发送器进程ID -| `ssl` `boolean`如果在此连接上使用SSL,则为真 -| `version` `text`使用SSL的版本,如果此连接上没有使用SSL则为NULL -| `cipher` `text`正在使用的SSL密码的名称,如果此连接上没有使用SSL则为NULL -| `bits` `integer`使用的加密算法中的位数,如果此连接上没有使用SSL则为NULL -| `compression` `boolean`如果使用SSL压缩则为真,否则为假,如果此连接未使用SSL则为NULL -| `client_dn` `text`区别名称(DN,Distinguished Name)字段与使用的客户端证书,如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。 如果DN字段长于`NAMEDATALEN`(标准构建中为64个字符),则该字段将被截断。 -| `client_serial` `numeric`客户端证书的序列号,如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。 证书序列号和证书颁发者的组合唯一标识一个证书(除非颁发者错误地重用序列号)。 -| `issuer_dn` `text`客户端证书颁发者的区别名称(DN,Distinguished Name),如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。该字段像`client_dn`一样被截断。 +|列|类型|描述 +|pid|integer|后端或WAL发送器进程ID +|ssl|boolean|如果在此连接上使用SSL,则为真 +|version|text|使用SSL的版本,如果此连接上没有使用SSL则为NULL +|cipher|text|正在使用的SSL密码的名称,如果此连接上没有使用SSL则为NULL +|bits|integer|使用的加密算法中的位数,如果此连接上没有使用SSL则为NULL +|compression|boolean|如果使用SSL压缩则为真,否则为假,如果此连接未使用SSL则为NULL 
+|client_dn|text|区别名称(DN,Distinguished Name)字段与使用的客户端证书,如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。 如果DN字段长于NAMEDATALEN(标准构建中为64个字符),则该字段将被截断。 +|client_serial|numeric|客户端证书的序列号,如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。 证书序列号和证书颁发者的组合唯一标识一个证书(除非颁发者错误地重用序列号)。 +|issuer_dn|text|客户端证书颁发者的区别名称(DN,Distinguished Name),如果没有提供客户端证书或在此连接上没有使用SSL,则为NULL。该字段像client_dn一样被截断。 |==== ==== `pg_stat_gssapi` @@ -562,11 +564,11 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表18.`pg_stat_gssapi` 视图** |==== -| 列类型描述 -| `pid` `integer`后端进程ID -| `gss_authenticated` `boolean`如果此连接使用了GSSAPI身份验证,则为True -| `principal` `text`用于验证此连接的主体,如果未使用GSSAPI对此连接进行身份验证,则为NULL。 如果主体长度超过`NAMEDATALEN`(标准构建中为64个字符),则该字段被截断。 -| `encrypted` `boolean`如果在此连接上使用了GSSAPI加密,则为真 +|列|类型|描述 +|`pid`|`integer`|后端进程ID +|`gss_authenticated`|`boolean`|如果此连接使用了GSSAPI身份验证,则为True +|`principal`|`text`|用于验证此连接的主体,如果未使用GSSAPI对此连接进行身份验证,则为NULL。 如果主体长度超过`NAMEDATALEN`(标准构建中为64个字符),则该字段被截断。 +|`encrypted`|`boolean`|如果在此连接上使用了GSSAPI加密,则为真 |==== ==== `pg_stat_archiver` @@ -575,14 +577,14 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表19.`pg_stat_archiver` 视图** |==== -| 列类型描述 -| `archived_count` `bigint`已成功存档的WAL文件数 -| `last_archived_wal` `text`最后一个成功存档的WAL文件的名称 -| `last_archived_time` `timestamp with time zone`最后一次成功存档操作的时间 -| `failed_count` `bigint`记录WAL文件归档失败次数 -| `last_failed_wal` `text`最后一次失败的存档操作的WAL文件的名称 -| `last_failed_time` `timestamp with time zone`上次存档操作失败的时间 -| `stats_reset` `timestamp with time zone`这些统计数据最后一次重置的时间 +|列|类型|描述 +|`archived_count`|`bigint`|已成功存档的WAL文件数 +|`last_archived_wal`|`text`|最后一个成功存档的WAL文件的名称 +|`last_archived_time`|`timestamp with time zone`|最后一次成功存档操作的时间 +|`failed_count`|`bigint`|记录WAL文件归档失败次数 +|`last_failed_wal`|`text`|最后一次失败的存档操作的WAL文件的名称 +|`last_failed_time`|`timestamp with time zone`|上次存档操作失败的时间 +|`stats_reset`|`timestamp with time zone`|这些统计数据最后一次重置的时间 |==== ==== `pg_stat_bgwriter` @@ -591,18 +593,18 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表20.`pg_stat_bgwriter` 视图** |==== -| 列类型描述 -| `checkpoints_timed` `bigint`已执行的预定检查点数 -| `checkpoints_req` `bigint`请求已执行的检查点数 -| `checkpoint_write_time` `double precision`检查点处理中将文件写入磁盘的部分所花费的总时间,以毫秒为单位 -| `checkpoint_sync_time` `double precision`检查点处理中将文件同步到磁盘的部分所花费的总时间,以毫秒为单位 -| `buffers_checkpoint` `bigint`检查点期间写入的缓冲区数 -| `buffers_clean` `bigint`后台写入器写入的缓冲区数 -| `maxwritten_clean` `bigint`后台写入器因为写入太多缓冲区而停止清理扫描的次数 -| `buffers_backend` `bigint`后端直接写入的缓冲区数 -| `buffers_backend_fsync` `bigint`后端必须执行自己的`fsync`调用的次数(通常后台写入器处理这些,即使后端执行自己的写入) -| `buffers_alloc` `bigint`分配的缓冲区数 -| `stats_reset` `timestamp with time zone`这些统计数据最后一次重置的时间 +| 列 | 类型 | 描述 +| `checkpoints_timed` | `bigint` | 已执行的预定检查点数 +| `checkpoints_req` | `bigint` | 请求已执行的检查点数 +| `checkpoint_write_time` | `double precision` | 检查点处理中将文件写入磁盘的部分所花费的总时间,以毫秒为单位 +| `checkpoint_sync_time` | `double precision` | 检查点处理中将文件同步到磁盘的部分所花费的总时间,以毫秒为单位 +| `buffers_checkpoint` | `bigint` | 检查点期间写入的缓冲区数 +| `buffers_clean` | `bigint` | 后台写入器写入的缓冲区数 +| `maxwritten_clean` | `bigint` | 后台写入器因为写入太多缓冲区而停止清理扫描的次数 +| `buffers_backend` | `bigint` | 后端直接写入的缓冲区数 +| `buffers_backend_fsync` | `bigint` | 后端必须执行自己的 `fsync` 调用的次数(通常后台写入器处理这些,即使后端执行自己的写入) +| `buffers_alloc` | `bigint` | 分配的缓冲区数 +| `stats_reset` | `timestamp with time zone` | 这些统计数据最后一次重置的时间 |==== ==== `pg_stat_database` @@ -611,28 +613,28 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表21. 
`pg_stat_database` 视图** |==== -| 列类型描述 -| `datid` `oid`该数据库的OID,属于共享关系的对象为0 -| `datname` `name`这个数据库的名称,或者共享对象为`NULL`。 -| `numbackends` `integer`当前连接到此数据库的后端数,对于共享对象则为`NULL`。 这是该视图中唯一返回反映当前状态的值的列;所有其他列返回自上次重置以来累积的值。 -| `xact_commit` `bigint`此数据库中已提交的事务数 -| `xact_rollback` `bigint`该数据库中已回滚的事务数 -| `blks_read` `bigint`在该数据库中读取的磁盘块数 -| `blks_hit` `bigint`在缓存中发现磁盘块的次数,因此读取不是必需的(这只包括在IvorySQL缓存中,而不是在操作系统的文件系统缓存中) -| `tup_returned` `bigint`这个数据库中查询返回的行数 -| `tup_fetched` `bigint`这个数据库中查询获取的行数 -| `tup_inserted` `bigint`查询在该数据库中插入的行数 -| `tup_updated` `bigint`这个数据库中查询更新的行数 -| `tup_deleted` `bigint`这个数据库中被查询删除的行数 -| `conflicts` `bigint`由于与此数据库中的恢复冲突而取消的查询数。(冲突只发生在备用服务器上) -| `temp_files` `bigint`这个数据库中查询创建的临时文件的数量。所有临时文件都将被计数,而不顾及临时文件为什么被创建(例如,排序或散列),也不考虑log_temp_files设置。 -| `temp_bytes` `bigint`这个数据库中的查询写入临时文件的数据总量。所有临时文件都将被计数,而不考虑临时文件为什么被创建,也不考虑log_temp_files设置。 -| `deadlocks` `bigint`在此数据库中检测到的死锁数 -| `checksum_failures` `bigint`在此数据库(或共享对象)中检测到的数据页校验码失败数,如果没有启用数据校验码则为NULL。 -| `checksum_last_failure` `timestamp with time zone`在此数据库(或共享对象)中检测到最后一个数据页校验码失败的时间,如果没有启用数据校验码则为NULL。 -| `blk_read_time` `double precision`在这个数据库中通过后端读取数据文件块所花费的时间,以毫秒为单位(如果启用了track_io_timing,否则为零) -| `blk_write_time` `double precision`在这个数据库中通过后端写数据文件块所花费的时间,以毫秒为单位(如果启用了track_io_timing,否则为零) -| `stats_reset` `timestamp with time zone`这些统计数据最后一次重置的时间 +| 列 | 类型 | 描述 +| `datid` | `oid` | 该数据库的 OID,属于共享关系的对象为 0 +| `datname` | `name` | 这个数据库的名称,或者共享对象为 `NULL` +| `numbackends` | `integer` | 当前连接到此数据库的后端数,对于共享对象则为 `NULL`。这是该视图中唯一返回反映当前状态的值的列;所有其他列返回自上次重置以来累积的值 +| `xact_commit` | `bigint` | 此数据库中已提交的事务数 +| `xact_rollback` | `bigint` | 该数据库中已回滚的事务数 +| `blks_read` | `bigint` | 在该数据库中读取的磁盘块数 +| `blks_hit` | `bigint` | 在缓存中发现磁盘块的次数,因此读取不是必需的(这只包括在 IvorySQL 缓存中,而不是在操作系统的文件系统缓存中) +| `tup_returned` | `bigint` | 这个数据库中查询返回的行数 +| `tup_fetched` | `bigint` | 这个数据库中查询获取的行数 +| `tup_inserted` | `bigint` | 查询在该数据库中插入的行数 +| `tup_updated` | `bigint` | 这个数据库中查询更新的行数 +| `tup_deleted` | `bigint` | 这个数据库中被查询删除的行数 +| `conflicts` | `bigint` | 由于与此数据库中的恢复冲突而取消的查询数(冲突只发生在后备服务器上) +| `temp_files` | `bigint` | 这个数据库中查询创建的临时文件的数量。所有临时文件都将被计数,而不顾及临时文件为什么被创建(例如,排序或散列),也不考虑 log_temp_files 设置 +| `temp_bytes` | `bigint` | 这个数据库中的查询写入临时文件的数据总量。所有临时文件都将被计数,而不考虑临时文件为什么被创建,也不考虑 log_temp_files 设置 +| `deadlocks` | `bigint` | 在此数据库中检测到的死锁数 +| `checksum_failures` | `bigint` | 在此数据库(或共享对象)中检测到的数据页校验码失败数,如果没有启用数据校验码则为 NULL +| `checksum_last_failure` | `timestamp with time zone` | 在此数据库(或共享对象)中检测到最后一个数据页校验码失败的时间,如果没有启用数据校验码则为 NULL +| `blk_read_time` | `double precision` | 在这个数据库中通过后端读取数据文件块所花费的时间,以毫秒为单位(如果启用了 track_io_timing,否则为零) +| `blk_write_time` | `double precision` | 在这个数据库中通过后端写数据文件块所花费的时间,以毫秒为单位(如果启用了 track_io_timing,否则为零) +| `stats_reset` | `timestamp with time zone` | 这些统计数据最后一次重置的时间 |==== ==== `pg_stat_database_conflicts` @@ -641,13 +643,14 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表22.`pg_stat_database_conflicts` 视图** |==== -|`datid` `oid`数据库的OID -| `datname` `name`数据库的名称 -| `confl_tablespace` `bigint`这个数据库中由于删除表空间而取消的查询的数量 -| `confl_lock` `bigint`此数据库中由于锁定超时而被取消的查询数 -| `confl_snapshot` `bigint`此数据库中由于旧快照而取消的查询数 -| `confl_bufferpin` `bigint`此数据库中由于固定缓冲区而被取消的查询数 -| `confl_deadlock` `bigint`此数据库中由于死锁而被取消的查询数 +| 列 | 类型 | 描述 +|`datid`|`oid`|数据库的OID +|`datname`|`name`|数据库的名称 +|`confl_tablespace`|`bigint`|这个数据库中由于删除表空间而取消的查询的数量 +|`confl_lock`|`bigint`|此数据库中由于锁定超时而被取消的查询数 +|`confl_snapshot`|`bigint`|此数据库中由于旧快照而取消的查询数 +|`confl_bufferpin`|`bigint`|此数据库中由于固定缓冲区而被取消的查询数 +|`confl_deadlock`|`bigint`|此数据库中由于死锁而被取消的查询数 |==== ==== 
`pg_stat_all_tables` @@ -656,30 +659,30 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表23.`pg_stat_all_tables` 视图** |==== -| 列类型描述 -| `relid` `oid`表的OID -| `schemaname` `name`该表所在的模式的名称 -| `relname` `name`这个表的名称 -| `seq_scan` `bigint`在此表上启动的顺序扫描数 -| `seq_tup_read` `bigint`连续扫描获取的实时行数 -| `idx_scan` `bigint`对这个表发起的索引扫描数 -| `idx_tup_fetch` `bigint`索引扫描获取的实时行数 -| `n_tup_ins` `bigint`插入的行数 -| `n_tup_upd` `bigint`更新的行数(包括HOT更新的行) -| `n_tup_del` `bigint`删除的行数 -| `n_tup_hot_upd` `bigint`HOT更新的行数(即,不需要单独的索引更新) -| `n_live_tup` `bigint`活的行的估计数量 -| `n_dead_tup` `bigint`僵死行的估计数量 -| `n_mod_since_analyze` `bigint`自上次分析此表以来修改的行的估计数量 -| `n_ins_since_vacuum` `bigint`自上次清空此表以来插入的行的估计数量 -| `last_vacuum` `timestamp with time zone`最后一次手动清理这个表(不包括`VACUUM FULL`) -| `last_autovacuum` `timestamp with time zone`这个表最后一次被自动清理守护进程清理的时间 -| `last_analyze` `timestamp with time zone`上一次手动分析这个表 -| `last_autoanalyze` `timestamp with time zone`自动清理守护进程最后一次分析这个表 -| `vacuum_count` `bigint`这个表被手动清理的次数(`VACUUM FULL`不计数) -| `autovacuum_count` `bigint`这个表被autovacuum守护进程清理的次数 -| `analyze_count` `bigint`手动分析这个表的次数 -| `autoanalyze_count` `bigint`这个表被autovacuum守护进程分析的次数 +| 列 | 类型 | 描述 +| relid | oid | 表的OID +| schemaname | name | 该表所在的模式的名称 +| relname | name | 这个表的名称 +| seq_scan | bigint | 在此表上启动的顺序扫描数 +| seq_tup_read | bigint | 连续扫描获取的实时行数 +| idx_scan | bigint | 对这个表发起的索引扫描数 +| idx_tup_fetch | bigint | 索引扫描获取的实时行数 +| n_tup_ins | bigint | 插入的行数 +| n_tup_upd | bigint | 更新的行数(包括HOT更新的行) +| n_tup_del | bigint | 删除的行数 +| n_tup_hot_upd | bigint | HOT更新的行数(即不需要单独的索引更新) +| n_live_tup | bigint | 活的行的估计数量 +| n_dead_tup | bigint | 僵死行的估计数量 +| n_mod_since_analyze | bigint | 自上次分析此表以来修改的行的估计数量 +| n_ins_since_vacuum | bigint | 自上次清空此表以来插入的行的估计数量 +| last_vacuum | timestamp with time zone | 最后一次手动清理这个表(不包括VACUUM FULL) +| last_autovacuum | timestamp with time zone | 这个表最后一次被自动清理守护进程清理的时间 +| last_analyze | timestamp with time zone | 上一次手动分析这个表 +| last_autoanalyze | timestamp with time zone | 自动清理守护进程最后一次分析这个表 +| vacuum_count | bigint | 这个表被手动清理的次数(VACUUM FULL不计数) +| autovacuum_count | bigint | 这个表被autovacuum守护进程清理的次数 +| analyze_count | bigint | 手动分析这个表的次数 +| autoanalyze_count | bigint | 这个表被autovacuum守护进程分析的次数 |==== ==== `pg_stat_all_indexes` @@ -688,15 +691,15 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表24.`pg_stat_all_indexes` 视图** |==== -| 列类型描述 -| `relid` `oid`对于此索引的表的OID -| `indexrelid` `oid`这个索引的OID -| `schemaname` `name`这个索引所在的模式名称 -| `relname` `name`这个索引的表的名称 -| `indexrelname` `name`这个索引的名称 -| `idx_scan` `bigint`在这个索引上开启的索引扫描的数量 -| `idx_tup_read` `bigint`扫描此索引返回的索引项数 -| `idx_tup_fetch` `bigint`使用此索引进行简单索引扫描获取的活动表行数 +| 列 | 类型 | 描述 +| relid | oid | 这个索引所在表的OID +| indexrelid | oid | 这个索引的OID +| schemaname | name | 这个索引所在的模式名称 +| relname | name | 这个索引所在表的名称 +| indexrelname | name | 这个索引的名称 +| idx_scan | bigint | 在这个索引上开启的索引扫描的数量 +| idx_tup_read | bigint | 扫描此索引返回的索引项数 +| idx_tup_fetch | bigint | 使用此索引进行简单索引扫描获取的活动表行数 |==== 索引可以被简单索引扫描、“位图”索引扫描以及优化器使用。在一次位图扫描中,多个索引的输出可以被通过 AND 或 OR 规则组合,因此当使用一次位图扫描时难以将取得的个体堆行与特定的索引关联起来。因此,一次位图扫描会增加它使用的索引的`pg_stat_all_indexes`.`idx_tup_read`计数,并且为每个表增加`pg_stat_all_tables`.`idx_tup_fetch`计数,但是它不影响`pg_stat_all_indexes`.`idx_tup_fetch`。如果所提供的常量值不在优化器统计信息记录的范围之内,优化器也会访问索引来检查,因为优化器统计信息可能已经“不新鲜”了。 @@ -712,18 +715,18 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表25.`pg_statio_all_tables` 视图** |==== -| 列类型描述 -| `relid` `oid`表的OID -| `schemaname` `name`该表所在的模式名 -| `relname` `name`这个表的名称 -| 
`heap_blks_read` `bigint`从该表中读取的磁盘块的数量 -| `heap_blks_hit` `bigint`该表中的缓冲区命中数 -| `idx_blks_read` `bigint`从这个表上所有索引读取的磁盘块数 -| `idx_blks_hit` `bigint`这个表上所有索引中的缓冲区命中数 -| `toast_blks_read` `bigint`从这个表的TOAST表中读取的磁盘块的数量(如果有的话) -| `toast_blks_hit` `bigint`这个表的TOAST表中的缓冲区命中数(如果有的话) -| `tidx_blks_read` `bigint`从这个表的TOAST表索引中读取的磁盘块的数量(如果有的话) -| `tidx_blks_hit` `bigint`这个表的TOAST表索引中的缓冲区命中数(如果有的话) +| 列 | 类型 | 描述 +| `relid` | `oid` | 表的OID +| `schemaname` | `name` | 该表所在的模式名 +| `relname` | `name` | 这个表的名称 +| `heap_blks_read` | `bigint` | 从该表中读取的磁盘块的数量 +| `heap_blks_hit` | `bigint` | 该表中的缓冲区命中数 +| `idx_blks_read` | `bigint` | 从这个表上所有索引读取的磁盘块数 +| `idx_blks_hit` | `bigint` | 这个表上所有索引中的缓冲区命中数 +| `toast_blks_read` | `bigint` | 从这个表的TOAST表中读取的磁盘块的数量(如果有的话) +| `toast_blks_hit` | `bigint` | 这个表的TOAST表中的缓冲区命中数(如果有的话) +| `tidx_blks_read` | `bigint` | 从这个表的TOAST表索引中读取的磁盘块的数量(如果有的话) +| `tidx_blks_hit` | `bigint` | 这个表的TOAST表索引中的缓冲区命中数(如果有的话) |==== ==== `pg_statio_all_indexes` @@ -732,14 +735,14 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表26.`pg_statio_all_indexes` 视图** |==== -| 列类型描述 -| `relid` `oid`对这个索引的表的OID -| `indexrelid` `oid`这个索引的OID -| `schemaname` `name`索引所在的模式名称 -| `relname` `name`此索引的表的名称 -| `indexrelname` `name`这个索引的名称 -| `idx_blks_read` `bigint`从此索引中读取的磁盘块的数量 -| `idx_blks_hit` `bigint`此索引中的缓冲区命中数 +| 列 | 类型 | 描述 +| `relid` | `oid` | 这个索引所在表的OID +| `indexrelid` | `oid` | 这个索引的OID +| `schemaname` | `name` | 索引所在的模式名称 +| `relname` | `name` | 这个索引所在表的名称 +| `indexrelname` | `name` | 这个索引的名称 +| `idx_blks_read` | `bigint` | 从这个索引中读取的磁盘块数量 +| `idx_blks_hit` | `bigint` | 这个索引中的缓冲区命中数 |==== ==== `pg_statio_all_sequences` @@ -748,12 +751,12 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表27.`pg_statio_all_sequences` 视图** |==== -| 列类型描述 -| `relid` `oid`序列的OID -| `schemaname` `name`此序列所在的模式的名称 -| `relname` `name`此序列的名称 -| `blks_read` `bigint`从这个序列中读取的磁盘块的数量 -| `blks_hit` `bigint`在此序列中的缓冲区命中数 +| 列 | 类型 | 描述 +| relid | oid | 序列的OID +| schemaname | name | 此序列所在的模式的名称 +| relname | name | 此序列的名称 +| blks_read | bigint | 从这个序列中读取的磁盘块的数量 +| blks_hit | bigint | 在此序列中的缓冲区命中数 |==== ==== `pg_stat_user_functions` @@ -762,13 +765,13 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i **表28.`pg_stat_user_functions` 视图** |==== -| 列类型描述 -| `funcid` `oid`函数的OID -| `schemaname` `name`这个函数所在的模式的名称 -| `funcname` `name`这个函数的名称 -| `calls` `bigint`这个函数已经被调用的次数 -| `total_time` `double precision`在这个函数以及它所调用的其他函数中花费的总时间,以毫秒计 -| `self_time` `double precision`在这个函数本身花费的总时间,不包括被它调用的其他函数,以毫秒计 +| 列 | 类型 | 描述 +| funcid | oid | 函数的OID +| schemaname | name | 这个函数所在的模式的名称 +| funcname | name | 这个函数的名称 +| calls | bigint | 这个函数已经被调用的次数 +| total_time | double precision | 在这个函数以及它所调用的其他函数中花费的总时间,以毫秒计 +| self_time | double precision | 在这个函数本身花费的总时间,不包括被它调用的其他函数,以毫秒计 |==== ==== `pg_stat_slru` @@ -777,16 +780,16 @@ IvorySQL通过*SLRU*(simple least-recently-used,简单的最近-最少-使用) **表29.`pg_stat_slru` 视图** |==== -| 列类型描述 -| `name` `text`SLRU的名称 -| `blks_zeroed` `bigint`初始化期间被置零的块数 -| `blks_hit` `bigint`已经在SLRU中的磁盘块被发现的次数,因此不需要读取(这只包括SLRU中的命中,而不是操作系统的文件系统缓存) -| `blks_read` `bigint`为这个SLRU读取的磁盘块数 -| `blks_written` `bigint`为这个SLRU写入的磁盘块数 -| `blks_exists` `bigint`为这个SLRU检查是否存在的块数 -| `flushes` `bigint`此SLRU的脏数据刷新数 -| `truncates` `bigint`这个SLRU的截断数 -| `stats_reset` `timestamp with time zone`这些统计数据最后一次重置的时间 +| 列 | 类型 | 描述 +| name | text | SLRU的名称 +| blks_zeroed | bigint | 初始化期间被置零的块数 +| blks_hit | bigint | 已经在SLRU中的磁盘块被发现的次数,因此不需要读取(这只包括SLRU中的命中,而不是操作系统的文件系统缓存) 
+| blks_read | bigint | 为这个SLRU读取的磁盘块数 +| blks_written | bigint | 为这个SLRU写入的磁盘块数 +| blks_exists | bigint | 为这个SLRU检查是否存在的块数 +| flushes | bigint | 此SLRU的脏数据刷新数 +| truncates | bigint | 这个SLRU的截断数 +| stats_reset | timestamp with time zone | 这些统计数据最后一次重置的时间 |==== ==== Statistics Functions @@ -797,16 +800,16 @@ IvorySQL通过*SLRU*(simple least-recently-used,简单的最近-最少-使用) **表30.Additional Statistics Functions** |==== -| 函数描述 -| `pg_backend_pid` () → `integer`返回附加到当前会话的服务器进程的进程ID。 -| `pg_stat_get_activity` ( `integer` ) → `setof record`使用指定的进程ID返回有关后端信息的记录,如果指定了`NULL`,则返回系统中每个活动后端的一条记录。 返回的字段是`pg_stat_activity`视图中字段的子集。 -| `pg_stat_get_snapshot_timestamp` () → `timestamp with time zone`返回当前统计快照的时间戳。 -| `pg_stat_clear_snapshot` () → `void`丢弃当前的统计快照。 -| `pg_stat_reset` () → `void`将当前数据库的所有统计计数器重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 -| `pg_stat_reset_shared` ( `text` ) → `void`根据参数的不同,将一些集群范围的统计计数器重置为零。 参数可以是`bgwriter`来重置`pg_stat_bgwriter`视图中显示的所有计数器, 或者`archiver`来重置`pg_stat_archiver`视图中显示的所有计数器。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 -| `pg_stat_reset_single_table_counters` ( `oid` ) → `void`将当前数据库中单个表或索引的统计信息重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 -| `pg_stat_reset_single_function_counters` ( `oid` ) → `void`将当前数据库中单个函数的统计信息重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 -| `pg_stat_reset_slru` ( `text` ) → `void`将单个SLRU缓存或集群中所有SLRU的统计信息重置为零。 如果该参数为NULL,则所有SLRU缓存的`pg_stat_slru`视图中显示的计数器将被重置。 参数可以是`CommitTs`、`MultiXactMember`、`MultiXactOffset`、`Notify`、 `Serial`、`Subtrans`、 或`Xact`中的一个,以便只重置该条目的计数器。 如果参数是`other`(或实际上,任何无法识别的名称),那么所有其他SLRU缓存的计数器,如扩展定义的缓存,将被重置。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 +| 函数 | 描述 +| pg_backend_pid () → integer | 返回附加到当前会话的服务器进程的进程ID。 +| pg_stat_get_activity ( integer ) → setof record | 使用指定的进程ID返回有关后端信息的记录,如果指定了NULL,则返回系统中每个活动后端的一条记录。返回的字段是pg_stat_activity视图中字段的子集。 +| pg_stat_get_snapshot_timestamp () → timestamp with time zone | 返回当前统计快照的时间戳。 +| pg_stat_clear_snapshot () → void | 丢弃当前的统计快照。 +| pg_stat_reset () → void | 将当前数据库的所有统计计数器重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 +| pg_stat_reset_shared ( text ) → void | 根据参数的不同,将一些集群范围的统计计数器重置为零。参数可以是bgwriter来重置pg_stat_bgwriter视图中显示的所有计数器,或者archiver来重置pg_stat_archiver视图中显示的所有计数器。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 +| pg_stat_reset_single_table_counters ( oid ) → void | 将当前数据库中单个表或索引的统计信息重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 +| pg_stat_reset_single_function_counters ( oid ) → void | 将当前数据库中单个函数的统计信息重置为零。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 +| pg_stat_reset_slru ( text ) → void | 将单个SLRU缓存或集群中所有SLRU的统计信息重置为零。如果该参数为NULL,则所有SLRU缓存的pg_stat_slru视图中显示的计数器将被重置。参数可以是CommitTs、MultiXactMember、MultiXactOffset、Notify、Serial、Subtrans或Xact中的一个,以便只重置该条目的计数器。如果参数是other(或实际上,任何无法识别的名称),那么所有其他SLRU缓存的计数器,如扩展定义的缓存,将被重置。默认情况下该函数仅限于超级用户,但是其他用户可以被授予EXECUTE来运行此函数。 |==== `pg_stat_get_activity`是`pg_stat_activity`视图的底层函数, 它返回一个行集合,其中包含有关每个后端进程所有可用的信息。有时只获得该信息的一个子集可能会更方便。 在那些情况中,可以使用一组更老的针对每个后端的统计访问函数,这些显示在 表 31中。 这些访问函数使用一个后端 ID 号,范围从 1 到当前活动后端数目。 函数`pg_stat_get_backend_idset`提供了一种方便的方法为每个活动后端产生一行来调用这些函数。 例如,要显示PID以及所有后端当前的查询: @@ -819,19 +822,19 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, **表31.Per-Backend Statistics Functions** |==== -| 函数描述 -| `pg_stat_get_backend_idset` () → `setof integer`返回当前活动后端ID号的集合(从1到活动后端数)。 -| `pg_stat_get_backend_activity` ( `integer` ) → `text`返回此后端最近查询的文本。 -| `pg_stat_get_backend_activity_start` ( `integer` ) → `timestamp with time zone`返回后端最近一次查询开始的时间。 -| `pg_stat_get_backend_client_addr` ( `integer` ) → `inet`返回连接到此后端的客户端的IP地址。 -| `pg_stat_get_backend_client_port` ( 
`integer` ) → `integer`返回客户端用于通信的TCP端口号。 -| `pg_stat_get_backend_dbid` ( `integer` ) → `oid`返回此后端连接的数据库的OID。 -| `pg_stat_get_backend_pid` ( `integer` ) → `integer`返回此后端进程ID。 -| `pg_stat_get_backend_start` ( `integer` ) → `timestamp with time zone`返回该进程开始的时间。 -| `pg_stat_get_backend_userid` ( `integer` ) → `oid`返回登录到此后端的用户的OID。 -| `pg_stat_get_backend_wait_event_type` ( `integer` ) → `text`如果后端当前正在等待,返回等待事件类型名称,否则返回NULL。 -| `pg_stat_get_backend_wait_event` ( `integer` ) → `text`如果后端当前正在等待,则返回等待事件名称,否则为NULL。 -| `pg_stat_get_backend_xact_start` ( `integer` ) → `timestamp with time zone`返回后端当前事务开始的时间。 +| 函数 | 描述 +| pg_stat_get_backend_idset () → setof integer | 返回当前活动后端ID号的集合(从1到活动后端数)。 +| pg_stat_get_backend_activity ( integer ) → text | 返回此后端最近查询的文本。 +| pg_stat_get_backend_activity_start ( integer ) → timestamp with time zone | 返回后端最近一次查询开始的时间。 +| pg_stat_get_backend_client_addr ( integer ) → inet | 返回连接到此后端的客户端的IP地址。 +| pg_stat_get_backend_client_port ( integer ) → integer | 返回客户端用于通信的TCP端口号。 +| pg_stat_get_backend_dbid ( integer ) → oid | 返回此后端连接的数据库的OID。 +| pg_stat_get_backend_pid ( integer ) → integer | 返回此后端进程ID。 +| pg_stat_get_backend_start ( integer ) → timestamp with time zone | 返回该进程开始的时间。 +| pg_stat_get_backend_userid ( integer ) → oid | 返回登录到此后端的用户的OID。 +| pg_stat_get_backend_wait_event_type ( integer ) → text | 如果后端当前正在等待,返回等待事件类型名称,否则返回NULL。 +| pg_stat_get_backend_wait_event ( integer ) → text | 如果后端当前正在等待,则返回等待事件名称,否则为NULL。 +| pg_stat_get_backend_xact_start ( integer ) → timestamp with time zone | 返回后端当前事务开始的时间。 |==== == 查看锁 @@ -852,19 +855,19 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 **表32.`pg_stat_progress_analyze` 视图** |==== -| 列类型描述 -| `pid` `integer`后端的进程ID。 -| `datid` `oid`后端连接到的数据库的OID。 -| `datname` `name`后端连接到的数据库的名称。 -| `relid` `oid`被分析的表的OID。 -| `phase` `text`当前处理阶段。参见 http://www.postgresql.org/docs/17/progress-reporting.html#ANALYZE-PHASES[表 33]。 -| `sample_blks_total` `bigint`将被采样的堆块的总数。 -| `sample_blks_scanned` `bigint`扫描的堆块数量。 -| `ext_stats_total` `bigint`扩展统计信息的数量。 -| `ext_stats_computed` `bigint`已经计算的扩展统计的数量. 此计数器仅在 `computing extended statistics`阶段增进。 -| `child_tables_total` `bigint`子表的数量。 -| `child_tables_done` `bigint`扫描的子表数。此计数器只有在`acquiring inherited sample rows`阶段才会增进。 -| `current_child_table_relid` `oid`当前正在扫描的子表的OID。此字段仅在`acquiring inherited sample rows`时有效。 +|列 | 类型 | 描述 +|pid | integer | 后端的进程ID。 +|datid | oid | 后端连接到的数据库的OID。 +|datname | name | 后端连接到的数据库的名称。 +|relid | oid | 被分析的表的OID。 +|phase | text | 当前处理阶段。参见 表 33。 +|sample_blks_total | bigint | 将被采样的堆块的总数。 +|sample_blks_scanned | bigint | 扫描的堆块数量。 +|ext_stats_total | bigint | 扩展统计信息的数量。 +|ext_stats_computed | bigint | 已经计算的扩展统计的数量。此计数器仅在 computing extended statistics 阶段增进。 +|child_tables_total | bigint | 子表的数量。 +|child_tables_done | bigint | 扫描的子表数。此计数器只有在 acquiring inherited sample rows 阶段才会增进。 +|current_child_table_relid | oid | 当前正在扫描的子表的 OID。此字段仅在 acquiring inherited sample rows 时有效。 |==== **表33.ANALYZE phases** @@ -888,24 +891,24 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 每当运行`CREATE INDEX`或`REINDEX`时,`pg_stat_progress_create_index`视图将包含当前正在创建索引的每个后端的一行。 下面的表描述了将要报告的信息,并提供了关于如何解释它的信息。 **表34.`pg_stat_progress_create_index` 视图** -|==== -| 列类型描述 -| `pid` `integer`后端的进程ID。 -| `datid` `oid`后端连接到的数据库的OID。 -| `datname` `name`后端连接到的数据库的名称。 -| `relid` `oid`正在创建索引的表的OID。 -| `index_relid` `oid`正在创建或重建索引的OID。在非并发 `CREATE INDEX`的时候,此为 0。 -| `command` `text`在运行的命令: `CREATE INDEX`,`CREATE INDEX CONCURRENTLY`, `REINDEX`, 或 `REINDEX CONCURRENTLY`. 
-| `phase` `text`索引创建的当前处理阶段。 参见 http://www.postgresql.org/docs/17/progress-reporting.html#CREATE-INDEX-PHASES[表 35]。 -| `lockers_total` `bigint`在适用的情况下,需要等待的储物柜总数 -| `lockers_done` `bigint`已经等待的储物柜数量。 -| `current_locker_pid` `bigint`目前正在等待的储物柜的进程ID。 -| `blocks_total` `bigint`本阶段要处理的区块总数。 -| `blocks_done` `bigint`当前阶段已经处理的区块数量。 -| `tuples_total` `bigint`当前阶段要处理的元组总数。 -| `tuples_done` `bigint`在当前阶段已经处理的元组数量。 -| `partitions_total` `bigint`在分区表上创建索引时,该列被设置为要在其上创建索引的分区总数。 -| `partitions_done` `bigint`当在分区表上创建索引时,该列被设置为在其上完成索引的分区数。 +|==== +| 列 | 类型 | 描述 +| `pid` | `integer` | 后端的进程ID。 +| `datid` | `oid` | 后端连接到的数据库的OID。 +| `datname` | `name` | 后端连接到的数据库的名称。 +| `relid` | `oid` | 正在创建索引的表的OID。 +| `index_relid` | `oid` | 正在创建或重建索引的OID。在非并发 `CREATE INDEX` 的时候,此为 0。 +| `command` | `text` | 在运行的命令:`CREATE INDEX`、`CREATE INDEX CONCURRENTLY`、`REINDEX` 或 `REINDEX CONCURRENTLY`。 +| `phase` | `text` | 索引创建的当前处理阶段。参见 [表 35](http://www.postgresql.org/docs/17/progress-reporting.html#CREATE-INDEX-PHASES)。 +| `lockers_total` | `bigint` | 在适用的情况下,需要等待的储物柜总数。 +| `lockers_done` | `bigint` | 已经等待的储物柜数量。 +| `current_locker_pid` | `bigint` | 目前正在等待的储物柜的进程ID。 +| `blocks_total` | `bigint` | 本阶段要处理的区块总数。 +| `blocks_done` | `bigint` | 当前阶段已经处理的区块数量。 +| `tuples_total` | `bigint` | 当前阶段要处理的元组总数。 +| `tuples_done` | `bigint` | 在当前阶段已经处理的元组数量。 +| `partitions_total` | `bigint` | 在分区表上创建索引时,该列被设置为要在其上创建索引的分区总数。 +| `partitions_done` | `bigint` | 当在分区表上创建索引时,该列被设置为在其上完成索引的分区数。 |==== **表35.CREATE INDEX 的阶段** @@ -929,18 +932,18 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 **表36.`pg_stat_progress_vacuum` 视图** |==== -| 列类型描述 -| `pid` `integer`后端的进程ID。 -| `datid` `oid`这个后端连接的数据库的OID。 -| `datname` `name`这个后端连接的数据库的名称。 -| `relid` `oid`被vacuum的表的OID。 -| `phase` `text`vacuum的当前处理阶段。 -| `heap_blks_total` `bigint`该表中堆块的总数。这个数字在扫描开始时报告,之后增加的块将不会(并且不需要)被这个`VACUUM`访问。 -| `heap_blks_scanned` `bigint`被扫描的堆块数量。由于visibility map被用来优化扫描,一些块将被跳过而不做检查, 被跳过的块会被包括在这个总数中,因此当清理完成时这个数字最终将会等于`heap_blks_total`。 仅当处于`扫描堆`阶段时这个计数器才会前进。 -| `heap_blks_vacuumed` `bigint`被清理的堆块数量。除非表没有索引,这个计数器仅在处于`清理堆`阶段时才会前进。 不包含死亡元组的块会被跳过,因此这个计数器可能有时会向前跳跃一个比较大的增量。 -| `index_vacuum_count` `bigint`已完成的索引清理周期数。 -| `max_dead_tuples` `bigint`在需要执行一个索引清理周期之前我们可以存储的死亡元组数,取决于maintenance_work_mem。 -| `num_dead_tuples` `bigint`从上一个索引清理周期以来收集的死亡元组数。 +| 列 | 类型 | 描述 +| pid | integer | 后端的进程ID。 +| datid | oid | 这个后端连接的数据库的OID。 +| datname | name | 这个后端连接的数据库的名称。 +| relid | oid | 被vacuum的表的OID。 +| phase | text | vacuum的当前处理阶段。 +| heap_blks_total | bigint | 该表中堆块的总数。这个数字在扫描开始时报告,之后增加的块将不会(并且不需要)被这个VACUUM访问。 +| heap_blks_scanned | bigint | 被扫描的堆块数量。由于visibility map被用来优化扫描,一些块将被跳过而不做检查,被跳过的块会被包括在这个总数中,因此当清理完成时这个数字最终将会等于heap_blks_total。仅当处于扫描堆阶段时这个计数器才会前进。 +| heap_blks_vacuumed | bigint | 被清理的堆块数量。除非表没有索引,这个计数器仅在处于清理堆阶段时才会前进。不包含死亡元组的块会被跳过,因此这个计数器可能有时会向前跳跃一个比较大的增量。 +| index_vacuum_count | bigint | 已完成的索引清理周期数。 +| max_dead_tuples | bigint | 在需要执行一个索引清理周期之前我们可以存储的死亡元组数,取决于maintenance_work_mem。 +| num_dead_tuples | bigint | 从上一个索引清理周期以来收集的死亡元组数。 |==== **表37.VACUUM的阶段** @@ -961,20 +964,20 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 每当`CLUSTER`或`VACUUM FULL`运行时,`pg_stat_progress_cluster`视图将包含当前正在运行的每一个后台的记录。下面的表格描述了将被报告的信息,并提供了关于如何解释这些信息的信息。 **表38.`pg_stat_progress_cluster` 视图** -|==== -| 列类型描述 -| `pid` `integer`后台的进程ID。 -| `datid` `oid`该后端连接的数据库的OID。 -| `datname` `name`与此后端连接的数据库的名称。 -| `relid` `oid`被集群的表的OID。 -| `command` `text`正在运行的命令。`CLUSTER`或`VACUUM FULL`。 -| `phase` `text`当前处理阶段。 -| `cluster_index_relid` `oid`如果正在使用索引对表进行扫描,这就是正在使用的索引的OID;否则为0。 -| `heap_tuples_scanned` `bigint`扫描的堆元组数。 这个计数器只有在阶段为`seq scanning 
heap`,`index scanning heap` 或 `writing new heap`时才会增进。 -| `heap_tuples_written` `bigint`写入的堆元组的数量。这个计数器只有在阶段为`seq scanning heap`,`index scanning heap` 或 `writing new heap`时才会前进。 -| `heap_blks_total` `bigint`表中的堆块总数。 这个数字是在`seq scanning heap`的开始时报告的。 -| `heap_blks_scanned` `bigint`扫描的堆块数量。 这个计数器只有在阶段为`seq scanning heap`时才会增进。 -| `index_rebuild_count` `bigint`重建的索引数。 该计数器仅在`重建索引`阶段时才会增进。 +|==== +| 列 | 类型 | 描述 +| `pid` | `integer` | 后台的进程ID。 +| `datid` | `oid` | 该后端连接的数据库的OID。 +| `datname` | `name` | 与此后端连接的数据库的名称。 +| `relid` | `oid` | 被集群的表的OID。 +| `command` | `text` | 正在运行的命令。`CLUSTER`或`VACUUM FULL`。 +| `phase` | `text` | 当前处理阶段。 +| `cluster_index_relid` | `oid` | 如果正在使用索引对表进行扫描,这就是正在使用的索引的OID;否则为0。 +| `heap_tuples_scanned` | `bigint` | 扫描的堆元组数。这个计数器只有在阶段为`seq scanning heap`、`index scanning heap`或`writing new heap`时才会增进。 +| `heap_tuples_written` | `bigint` | 写入的堆元组的数量。这个计数器只有在阶段为`seq scanning heap`、`index scanning heap`或`writing new heap`时才会前进。 +| `heap_blks_total` | `bigint` | 表中的堆块总数。这个数字是在`seq scanning heap`的开始时报告的。 +| `heap_blks_scanned` | `bigint` | 扫描的堆块数量。这个计数器只有在阶段为`seq scanning heap`时才会增进。 +| `index_rebuild_count` | `bigint` | 重建的索引数。该计数器仅在`重建索引`阶段时才会增进。 |==== **表39.CLUSTER 和 VACUUM FULL 阶段** @@ -996,13 +999,13 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 **表40.`pg_stat_progress_basebackup` 视图** |==== -| 列类型描述 -| `pid` `integer`WAL发送方进程ID。 -| `phase` `text`目前的处理阶段。 -| `backup_total` `bigint`将被流输送的数据总量。这是在`streaming database files`阶段开始时的估计和报告。 注意,这只是一个近似值,因为在`streaming database files`阶段,数据库可能会改变,而WAL日志可能会在稍后的备份中包含。 一旦流数据量超过了估计的总大小,该值始终与`backup_streamed`相同。 如果在pg_basebackup中禁用估算(也就是说,指定了`--no-estimate-size`选项),这为`NULL`。 -| `backup_streamed` `bigint`数据流的总量。这个计数器只在`streaming database files`阶段或`transferring wal files`时增进。 -| `tablespaces_total` `bigint`要流输送的表空间总数。 -| `tablespaces_streamed` `bigint`流输送的表空间数。此计数器仅在`streaming database files`阶段增进。 +| 列 | 类型 | 描述 +| pid | integer | WAL发送方进程ID。 +| phase | text | 目前的处理阶段。 +| backup_total | bigint | 将被流输送的数据总量。这是在streaming database files阶段开始时的估计和报告。注意,这只是一个近似值,因为在streaming database files阶段,数据库可能会改变,而WAL日志可能会在稍后的备份中包含。一旦流数据量超过了估计的总大小,该值始终与backup_streamed相同。如果在pg_basebackup中禁用估算(也就是说,指定了--no-estimate-size选项),这为NULL。 +| backup_streamed | bigint | 数据流的总量。这个计数器只在streaming database files阶段或transferring wal files时增进。 +| tablespaces_total | bigint | 要流输送的表空间总数。 +| tablespaces_streamed | bigint | 流输送的表空间数。此计数器仅在streaming database files阶段增进。 |==== **表41.基础备份阶段** @@ -1018,7 +1021,7 @@ IvorySQL具有在命令执行过程中报告某些命令进度的能力。 目 == 动态追踪 -IvorySQL提供了功能来支持数据库服务器的动态追踪。这样就允许在代码中的特 定点上调用外部工具来追踪执行过程。 +IvorySQL提供了功能来支持数据库服务器的动态追踪。这样就允许在代码中的特定点上调用外部工具来追踪执行过程。 一些探针或追踪点已经被插入在源代码中。这些探针的目的是被数据库开发者和管理员使用。默认情况下,探针不被编译到IvorySQL中;用户需要显式地告诉配置脚本使得探针可用。 @@ -1030,7 +1033,7 @@ IvorySQL提供了功能来支持数据库服务器的动态追踪。这样就允 === 内建探针 -如表 42所示,源代码中提供了一些标准探针。表 43显式了在探针中使用的类型。当然,可以增加更多探针来增强IvorySQL的可观测性。 +如表 42所示,源代码中提供了一些标准探针。表 43显示了在探针中使用的类型。当然,可以增加更多探针来增强IvorySQL的可观测性。 **表42.内建 DTrace 探针** |==== diff --git a/CN/modules/ROOT/pages/master/3.3.adoc b/CN/modules/ROOT/pages/master/3.3.adoc index 601ee64..54d3428 100644 --- a/CN/modules/ROOT/pages/master/3.3.adoc +++ b/CN/modules/ROOT/pages/master/3.3.adoc @@ -81,7 +81,7 @@ autovacuum守护进程不会对分区表发出ANALYZE命令。继承性父表只 IvorySQL的 MVCC 事务语义依赖于能够比较事务 ID(XID)数字:如果一个行版本的插入 XID 大于当前事务的 XID,它就是“属于未来的”并且不应该对当前事务可见。但是因为事务 ID 的尺寸有限(32位),一个长时间(超过 40 亿个事务)运行的集簇会遭受到*事务 ID 回卷*问题:XID 计数器回卷到 0,并且本来属于过去的事务突然间就变成了属于未来 — 这意味着它们的输出变成不可见。简而言之,灾难性的数据丢失(实际上数据仍然在那里,但是如果你不能得到它也无济于事)。为了避免发生这种情况,有必要至少每 20 亿个事务就清理每个数据库中的每个表。 -周期性的清理能够解决该问题的原因是,`VACUUM`会把行标记为 冻结,这表示它们是被一个在足够远的过去提交的事务所插入,这样从 MVCC 
的角度来看,效果就是该插入事务对所有当前和未来事务来说当然都 是可见的。IvorySQL保留了一个特殊的 XID (`FrozenTransactionId`),这个 XID 并不遵循普通 XID 的比较规则 并且总是被认为比任何普通 XID 要老。普通 XID 使用模-232算 法来比较。这意味着对于每一个普通 XID都有 20 亿个 XID “更老”并且 有 20 亿个“更新”,另一种解释的方法是普通 XID 空间是没有端点的环。因此,一旦一个行版本创建时被分配了一个特定的普通 XID,该行版本将成为接下 来 20 亿个事务的“过去”(与我们谈论的具体哪个普通 XID 无关)。如果在 20 亿个事务之后该行版本仍然存在,它将突然变得好像在未来。要阻止这一切 发生,被冻结行版本会被看成其插入 XID 为`FrozenTransactionId`, 这样它们对所有普通事务来说都是“在过去”,而不管回卷问题。并且这样的行版本将一直有效直到被删除,不管它有多旧。 +周期性的清理能够解决该问题的原因是,VACUUM会把行标记为 冻结,这表示它们是被一个在足够远的过去提交的事务所插入,这样从 MVCC 的角度来看,效果就是该插入事务对所有当前和未来事务来说当然都 是可见的。IvorySQL保留了一个特殊的 XID (FrozenTransactionId),这个 XID 并不遵循普通 XID 的比较规则 并且总是被认为比任何普通 XID 要老。普通 XID 使用模-232算 法来比较。这意味着对于每一个普通 XID都有 20 亿个 XID “更老”并且 有 20 亿个“更新”,另一种解释的方法是普通 XID 空间是没有端点的环。因此,一旦一个行版本创建时被分配了一个特定的普通 XID,该行版本将成为接下 来 20 亿个事务的“过去”(与我们谈论的具体哪个普通 XID 无关)。如果在 20 亿个事务之后该行版本仍然存在,它将突然变得好像在未来。要阻止这一切 发生,被冻结行版本会被看成其插入 XID 为`FrozenTransactionId`, 这样它们对所有普通事务来说都是“在过去”,而不管回卷问题。并且这样的行版本将一直有效直到被删除,不管它有多旧。 vacuum_freeze_min_age控制在其行版本被冻结前一个 XID 值应该有多老。如果被冻结的行将很快会被再次修改,增加这个设置可以避免不必要 的工作。但是减少这个设置会增加在表必须再次被清理之前能够流逝的事务数。 @@ -91,7 +91,7 @@ vacuum_freeze_min_age控制在其行版本被冻结前一个 XID 值应该有多 这意味着如果一个表没有被清理,大约每`autovacuum_freeze_max_age`减去`vacuum_freeze_min_age`事务就会在该表上调用一次自动清理。对那些为了空间回收目的而被正常清理的表,这是无关紧要的。然而,对静态表(包括接收插入但没有更新或删除的表)就没有为空间回收而清理的需要,因此尝试在非常大的静态表上强制自动清理的间隔最大化会非常有用。显然我们可以通过增加`autovacuum_freeze_max_age`或减少`vacuum_freeze_min_age`来实现此目的。 -`vacuum_freeze_table_age`的实际最大值是 0.95 * `autovacuum_freeze_max_age`,高于它的设置将被上限到最大值。一个高于`autovacuum_freeze_max_age`的值没有意义,因为不管怎样在那个点上都会触发一次防回卷自动清理,并且 0.95 的乘数为在防回卷自动清理发生之前运行一次手动`VACUUM`留出了一些空间。作为一种经验法则,`vacuum_freeze_table_age`应当被设置成一个低于`autovacuum_freeze_max_age`的值,留出一个足够的空间让一次被正常调度的`VACUUM`或一次被正常删除和更新活动触发的自动清理可以在这个窗口中被运行。将它设置得太接近可能导致防回卷自动清理,即使该表最近因为回收空间的目的被清理过,而较低的值将导致更频繁的全表扫描。 +vacuum_freeze_table_age的实际最大值是 0.95 * autovacuum_freeze_max_age,高于它的设置将被上限到最大值。一个高于`autovacuum_freeze_max_age`的值没有意义,因为不管怎样在那个点上都会触发一次防回卷自动清理,并且 0.95 的乘数为在防回卷自动清理发生之前运行一次手动`VACUUM`留出了一些空间。作为一种经验法则,`vacuum_freeze_table_age`应当被设置成一个低于`autovacuum_freeze_max_age`的值,留出一个足够的空间让一次被正常调度的`VACUUM`或一次被正常删除和更新活动触发的自动清理可以在这个窗口中被运行。将它设置得太接近可能导致防回卷自动清理,即使该表最近因为回收空间的目的被清理过,而较低的值将导致更频繁的全表扫描。 增加`autovacuum_freeze_max_age`(以及和它一起的`vacuum_freeze_table_age`)的唯一不足是数据库集簇的`pg_xact`和`pg_commit_ts`子目录将占据更多空间,因为它必须存储所有向后`autovacuum_freeze_max_age`范围内的所有事务的提交状态和(如果启用了`track_commit_timestamp`)时间戳。提交状态为每个事务使用两个二进制位,因此如果`autovacuum_freeze_max_age`被设置为它的最大允许值 20 亿,`pg_xact`将会增长到大约 0.5 吉字节,`pg_commit_ts`大约20GB。如果这对于你的总数据库尺寸是微小的,我们推荐设置`autovacuum_freeze_max_age`为它的最大允许值。否则,基于你想要允许`pg_xact`和`pg_commit_ts`使用的存储空间大小来设置它(默认情况下 2 亿个事务大约等于`pg_xact`的 50 MB存储空间,`pg_commit_ts`的2GB的存储空间)。 @@ -120,7 +120,7 @@ WARNING: database "mydb" must be vacuumed within 39985967 transactions HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database. ``` -(如该示意所建议的,一次手动的`VACUUM`应该会修复该问题;但是注意该次`VACUUM`必须由一个超级用户来执行,否则它将无法处理系统目录并且因而不能推进数据库的`datfrozenxid`)。如果这些警告被忽略,一旦距离回卷点只剩下 3 百万个事务时,该系统将会关闭并且拒绝开始任何新的事务: +(如该示例所建议的,一次手动的`VACUUM`应该会修复该问题;但是注意该次`VACUUM`必须由一个超级用户来执行,否则它将无法处理系统目录并且因而不能推进数据库的`datfrozenxid`)。如果这些警告被忽略,一旦距离回卷点只剩下 3 百万个事务时,该系统将会关闭并且拒绝开始任何新的事务: ``` ERROR: database is not accepting commands to avoid wraparound data loss in database "mydb" @@ -141,7 +141,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. 
IvorySQL有一个可选的但是被高度推荐的特性*autovacuum*,它的目的是自动执行`VACUUM`和`ANALYZE`命令。当它被启用时,自动清理会检查被大量插入、更新或删除元组的表。这些检查会利用统计信息收集功能,因此除非track_counts被设置为`true`,自动清理不能被使用。在默认配置下,自动清理是被启用的并且相关配置参数已被正确配置。 -“自动清理后台进程”实际上由多个进程组成。有一个称为 *自动清理启动器*的常驻后台进程, 它负责为所有数据库启动*自动清理工作者*进程。 启动器将把工作散布在一段时间上,它每隔 autovacuum_naptime秒尝试在每个数据库中启动一个工作者 (因此,如果安装中有*`N`*个数据库,则每 `autovacuum_naptime`/*`N`*秒将启动一个新的工作者)。 在同一时间只允许最多autovacuum_max_workers个工作者进程运行。如果有超过`autovacuum_max_workers` 个数据库需要被处理,下一个数据库将在第一个工作者结束后马上被处理。 每一个工作者进程将检查其数据库中的每一个表并且在需要时执行 `VACUUM`和/或`ANALYZE`。 可以设置log_autovacuum_min_duration来监控自动清理工作者的活动。 +“自动清理后台进程”实际上由多个进程组成。有一个称为 *自动清理启动器*的常驻后台进程, 它负责为所有数据库启动*自动清理工作者*进程。 启动器将把工作散布在一段时间上,它每隔 autovacuum_naptime秒尝试在每个数据库中启动一个工作者 (因此,如果安装中有*`N`*个数据库,则每 `autovacuum_naptime`/*N秒将启动一个新的工作者)。 在同一时间只允许最多autovacuum_max_workers个工作者进程运行。如果有超过`autovacuum_max_workers` 个数据库需要被处理,下一个数据库将在第一个工作者结束后马上被处理。 每一个工作者进程将检查其数据库中的每一个表并且在需要时执行 `VACUUM`和/或`ANALYZE`。 可以设置log_autovacuum_min_duration来监控自动清理工作者的活动。 如果在一小段时间内多个大型表都变得可以被清理,所有的自动清理工作者可能都会被占用来在一段长的时间内清理这些表。这将会造成其他的表和数据库无法被清理,直到一个工作者变得可用。对于一个数据库中的工作者数量并没有限制,但是工作者确实会试图避免重复已经被其他工作者完成的工作。注意运行着的工作者的数量不会被计入max_connections或superuser_reserved_connections限制。 @@ -249,15 +249,15 @@ https://pgbadger.darold.net/[pgBadger] 是一个外部项目,它可以进行 ==== 基于触发器的主-备复制 -基于触发器的复制通常会将修改数据的查询发送到指定的主服务器。它在逐个表的基础上工作,主服务器(通常)将数据更改异步发送到备用服务器。 主服务器运行时,备用服务器可以响应查询,并执行本地数据修改或写入操作。这种形式的复制通常用于减轻大数据分析型平台或者数据仓库查询负荷。 +基于触发器的复制通常会将修改数据的查询发送到指定的主服务器。它在逐个表的基础上工作,主服务器(通常)将数据更改异步发送到后备服务器。 主服务器运行时,后备服务器可以响应查询,并执行本地数据修改或写入操作。这种形式的复制通常用于减轻大数据分析型平台或者数据仓库查询负荷。 Slony-I是这种复制类型的一个例子。它使用表粒度,并且支持多个后备服务器。因为它会异步更新后备服务器(批量),在故障转移时可能会有数据丢失。 ==== 基于SQL的复制中间件 -通过基于SQL的复制中间件,一个程序拦截每一个 SQL 查询并把它发送给一个或所有服务器。每一个服务器独立地操作。读写查询必须被发送给所有服务器,这样每一个服务器都能接收到任何修改。但只读查询可以被只发送给一个服务器,这样允许读负载在服务器之间分布。 +通过基于SQL的复制中间件,一个程序拦截每一个 SQL 查询并把它发送给一个或所有服务器。每一个服务器独立地操作。读写查询必须被发送给所有服务器,这样每一个服务器都能接收到任何修改。但只读查询可以只发送给一个服务器,这样允许读负载在服务器之间分布。 -如果查询未经修改发送,则函数的`random()`随机值和`CURRENT_TIMESTAMP`函数的当前时间和序列值可能因不同服务器而异。 因为每个服务器独立运行,并且它发送 SQL 查询而没有真正的更改数据。如果这是不可接受的,那么中间件或应用程序必须从单一服务器源确定此类值,并将结果用于写入查询。 还必须注意确保所有服务器在提交或中止事务时都是相同的。这将涉及使用 两阶段提交PREPARE TRANSACTION和COMMIT PREPARED。 Pgpool-II和Continuent Tungsten就是这种复制的例子。 +如果查询未经修改发送,则函数`random()`函数的随机值和`CURRENT_TIMESTAMP`函数的当前时间和序列值可能因不同服务器而异。 因为每个服务器独立运行,并且它发送 SQL 查询而没有真正的更改数据。如果这是不可接受的,那么中间件或应用程序必须从单一服务器源确定此类值,并将结果用于写入查询。 还必须注意确保所有服务器在提交或中止事务时都是相同的。这将涉及使用 两阶段提交PREPARE TRANSACTION和COMMIT PREPARED。 Pgpool-II和Continuent Tungsten就是这种复制的例子。 ==== 异步多主控机复制 @@ -446,7 +446,7 @@ primary_slot_name = 'node_a_slot' 如果一台上游后备服务器被提升为新的主控机,且下游服务器的`recovery_target_timeline`被设置成`'latest'`(默认),下游服务器将继续从新的主控机得到流。 -要使用级联复制,要建立级联后备服务器让它能够接受复制连接(即设置max_wal_senders和hot_standby,并且配置基于主机的认证)。你还将需要设置下游后备服务器中的`primary_conninfo`指向级联后备服务器。 +要使用级联复制,需配置级联后备服务器以允许接收复制连接(即设置max_wal_senders和hot_standby,并且配置基于主机的认证)。你还将需要设置下游后备服务器中的`primary_conninfo`指向级联后备服务器。 ==== 同步复制 @@ -626,7 +626,7 @@ IvorySQL并不提供在主服务器上标识失败并且通知后备数据库服 在热备期间,参数`transaction_read_only`总是为真并且不可以被改变。但是只要不尝试修改数据库,热备期间的连接工作起来更像其他数据库连接。如果发生故障转移或切换,该数据库将切换到正常处理模式。当服务器改变模式时会话将保持连接。一旦热备结束,它将可以发起读写事务(即使是一个在热备期间启动的会话)。 -用户可以通过`SHOW in_hot_standby`来检查hot standby会话是否是活跃的 (在服务器版本 14 之前该参数`in_hot_standby`不存在。对于更早版本的服务器,可行的替代方法是 `SHOW transaction_read_only`。) 此外, 还有一些函数允许用户访问有关备用服务器的信息。 它们允许您编写程序来识别数据库当前的状态。用于监控恢复进度, 或者您可以编写复杂的程序将数据库恢复到特定状态。 +用户可以通过`SHOW in_hot_standby`来检查hot standby会话是否是活跃的 (在服务器版本 14 之前该参数`in_hot_standby`不存在。对于更早版本的服务器,可行的替代方法是 `SHOW transaction_read_only`。) 此外, 还有一些函数允许用户访问有关后备服务器的信息。 它们允许您编写程序来识别数据库当前的状态。用于监控恢复进度, 或者您可以编写复杂的程序将数据库恢复到特定状态。 ==== 处理查询冲突 @@ -690,7 +690,7 @@ LOG: database system is ready to accept read 
only connections 如果你正在运行基于文件的日志传送(“温备”),你可能需要等到下一个 WAL 文件到达,这可能和主服务器上的`archive_timeout`设置一样长。 -设置几个参数可确定用于跟踪事务ID、锁和预备事务的共享内存大小。备用服务器上的设置必须大于或等于主服务器上的设置,以确保在恢复过程中不会耗尽共享内存。例如,如果主数据库正在执行预备事务,而备用数据库没有获取共享内存来跟踪预备事务,则备用数据库将无法继续恢复,直到配置更改。受影响的参数是: +设置几个参数可确定用于跟踪事务ID、锁和预备事务的共享内存大小。后备服务器上的设置必须大于或等于主服务器上的设置,以确保在恢复过程中不会耗尽共享内存。例如,如果主数据库正在执行预备事务,而备用数据库没有获取共享内存来跟踪预备事务,则备用数据库将无法继续恢复,直到配置更改。受影响的参数是: - `max_connections` - `max_prepared_transactions` @@ -698,7 +698,7 @@ LOG: database system is ready to accept read only connections - `max_wal_senders` - `max_worker_processes` -确保这不是问题的可靠方法是使备用数据库上的这些参数的值等于或大于主数据库上的值。因此,如果您想增加这些值,您应该先更改备用服务器上的设置,然后再更改主服务器上的设置。相反,如果要减小这些值,则应先更改主服务器上的设置,然后再更改备用服务器上的设置。请记住,当一个备用数据库被提升时,它会成为后续备用数据库所需参数设置的新基准。因此,最好在所有备用服务器上保持这些设置相同,这样在切换/故障转移期间就不会出现问题。 +确保这不是问题的可靠方法是使备用数据库上的这些参数的值等于或大于主数据库上的值。因此,如果您想增加这些值,您应该先更改后备服务器上的设置,然后再更改主服务器上的设置。相反,如果要减小这些值,则应先更改主服务器上的设置,然后再更改后备服务器上的设置。请记住,当一个备用数据库被提升时,它会成为后续备用数据库所需参数设置的新基准。因此,最好在所有后备服务器上保持这些设置相同,这样在切换/故障转移期间就不会出现问题。 WAL 跟踪主节点上这些参数的变化。如果热备处理一个 WAL,表明主节点当前值大于备用数据库上的值,它将记录一个警告并中止恢复。例如: diff --git a/CN/modules/ROOT/pages/master/4.1.adoc b/CN/modules/ROOT/pages/master/4.1.adoc index c7bfc72..3818f81 100644 --- a/CN/modules/ROOT/pages/master/4.1.adoc +++ b/CN/modules/ROOT/pages/master/4.1.adoc @@ -24,49 +24,28 @@ IvorySQL安装方式包括以下5种: 创建或编辑IvorySQL yum源配置文件/etc/yum.repos.d/ivorysql.repo ``` vim /etc/yum.repos.d/ivorysql.repo -[ivorysql4] -name=IvorySQL Server 4 $releasever - $basearch -baseurl=https://yum.highgo.com/dists/ivorysql-rpms/4/redhat/rhel-$releasever-$basearch +[ivorysql5] +name=IvorySQL Server 5 $releasever - $basearch +baseurl=https://yum.highgo.com/dists/ivorysql-rpms/5/redhat/rhel-$releasever-$basearch enabled=1 gpgcheck=0 ``` -保存退出后,安装IvorySQL4 +保存退出后,安装IvorySQL5 ``` -$ sudo dnf install -y IvorySQL-4.5 +$ sudo dnf install -y ivorysql5-5.0 ``` -** 查看安装结果 -``` -dnf search IvorySQL -``` -查看结果说明如下: -|==== -| 序号 | 包名 | 描述 -| 1 | ivorysql4.x86_64 | IvorySQL客户端程序和库文件 -| 2 | ivorysql4-contrib.x86_64 | 随IvorySQL发布的已贡献的源代码和二进制文件 -| 3 | ivorysql4-devel.x86_64 | IvorySQL开发头文件和库 -| 4 | ivorysql4-docs.x86_64 | IvorySQL的额外文档 -| 5 | ivorysql4-libs.x86_64 | 所有IvorySQL客户端所需的共享库 -| 6 | ivorysql4-llvmjit.x86_64 | 对IvorySQL的即时编译支持 -| 7 | ivorysql4-plperl.x86_64 | 用于IvorySQL的过程语言Perl -| 8 | ivorysql4-plpython3.x86_64 | 用于IvorySQL的过程语言Python3 -| 9 | ivorysql4-pltcl.x86_64 | 用于IvorySQL的过程语言Tcl -| 10 | ivorysql4-server.x86_64 | 创建和运行IvorySQL服务器所需的程序 -| 11 | ivorysql4-test.x86_64 | 随IvorySQL发布的测试套件 -| 12 | ivorysql-release.noarch | 瀚高基础软件股份有限公司的Yum源配置RPM包 -|==== - [[docker安装]] == docker安装 ** 从Docker Hub上获取IvorySQL镜像 ``` -$ docker pull ivorysql/ivorysql:4.5-ubi8 +$ docker pull ivorysql/ivorysql:5.0-ubi8 ``` ** 运行IvorySQL ``` -$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:4.5-ubi8 +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 ``` -e参数说明 |==== @@ -94,7 +73,7 @@ $ sudo dnf install -y lz4 libicu libxslt python3 ``` ** 获取rpm包 ``` -$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_4.5/IvorySQL-4.5-a50789d-20250304.x86_64.rpm +$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.x86_64.rpm ``` ** 安装rpm包 @@ -104,7 +83,7 @@ $ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_4.5/ ``` $ sudo yum --disablerepo=* localinstall *.rpm ``` -数据库将被安装在/opt/IvorySQL-4.5/路径下。 +数据库将被安装在/usr/ivory-5/路径下。 [[源码安装]] == 源码安装 @@ -117,7 +96,7 @@ $ sudo dnf groupinstall -y 
'Development Tools' ``` $ git clone https://github.com/IvorySQL/IvorySQL.git $ cd IvorySQL -$ git checkout -b IVORY_REL_4_STABLE origin/IVORY_REL_4_STABLE +$ git checkout -b IVORY_REL_5_STABLE origin/IVORY_REL_5_STABLE ``` ** 配置 @@ -125,7 +104,7 @@ $ git checkout -b IVORY_REL_4_STABLE origin/IVORY_REL_4_STABLE 在IvorySQL目录下,执行以下命令进行配置,请使用--prefix指定安装目录: ``` -$ ./configure --prefix=/usr/local/ivorysql/ivorysql-4 +$ ./configure --prefix=/usr/local/ivorysql/ivorysql-5 ``` ** 编译 @@ -159,14 +138,14 @@ $ sudo apt -y install pkg-config libreadline-dev libicu-dev libldap2-dev uuid-de ** 获取deb包 ``` -$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_4.5/IvorySQL-4.5-a50789d-20250304.amd64.deb +$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-a50789d-20250304.amd64.deb ``` ** 安装deb包 ``` -$ sudo dpkg -i IvorySQL-4.5-a50789d-20250304.amd64.deb +$ sudo dpkg -i IvorySQL-5.0-a50789d-20250304.amd64.deb ``` -数据库将被安装在/opt/IvorySQL-4.5/路径下。 +数据库将被安装在/usr/ivory-5/路径下。 == 启动数据库 参考<>、<>、<<源码安装>>、<>的用户,需要手动启动数据库。 @@ -175,9 +154,9 @@ $ sudo dpkg -i IvorySQL-4.5-a50789d-20250304.amd64.deb + -执行以下命令为安装用户赋权,示例用户为ivorysql,安装目录为/opt/IvorySQL-4.5/: +执行以下命令为安装用户赋权,示例用户为ivorysql,安装目录为/usr/ivory-5/: ``` -$ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5/ +$ sudo chown -R ivorysql:ivorysql /usr/ivory-5/ ``` [[配置环境变量]] ** 配置环境变量 @@ -186,9 +165,9 @@ $ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5/ 将以下配置写入用户的~/.bash_profile文件并使用source命令该文件使环境变量生效: ``` -PATH=/opt/IvorySQL-4.5/bin:$PATH +PATH=/usr/ivory-5/bin:$PATH export PATH -PGDATA=/opt/IvorySQL-4.5/data +PGDATA=/usr/ivory-5/data export PGDATA ``` ``` @@ -197,8 +176,8 @@ $ source ~/.bash_profile ** 数据库初始化 ``` -$ mkdir /opt/IvorySQL-4.5/data -$ initdb -D /opt/IvorySQL-4.5/data +$ mkdir /usr/ivory-5/data +$ initdb -D /usr/ivory-5/data ``` .... 其中-D参数用来指定数据库的数据目录。更多参数使用方法,请使用initdb --help命令获取。 @@ -207,16 +186,16 @@ $ initdb -D /opt/IvorySQL-4.5/data ** 启动数据库服务 ``` -$ pg_ctl -D /opt/IvorySQL-4.5/data -l ivory.log start +$ pg_ctl -D /usr/ivory-5/data -l ivory.log start ``` -其中-D参数用来指定数据库的数据目录,如果<<配置环境变量>> 配置了PGDATA,则该参数可以省略。-l参数用来指定日志目录。更多参数使用方法,请使用pg_ctl --help命令获取。 +其中-D参数用来指定数据库的数据目录,如果<<配置环境变量>> 配置了PGDATA,则该参数可以省略。-l参数用来指定日志文件。更多参数使用方法,请使用pg_ctl --help命令获取。 查看确认数据库启动成功: ``` $ ps -ef | grep postgres -ivorysql 130427 1 0 02:45 ? 00:00:00 /opt/IvorySQL-4.5/bin/postgres -D /opt/IvorySQL-4.5/data +ivorysql 130427 1 0 02:45 ? 00:00:00 /usr/ivory-5/bin/postgres -D /usr/ivory-5/data ivorysql 130428 130427 0 02:45 ? 00:00:00 postgres: checkpointer ivorysql 130429 130427 0 02:45 ? 00:00:00 postgres: background writer ivorysql 130431 130427 0 02:45 ? 00:00:00 postgres: walwriter @@ -230,7 +209,7 @@ ivorysql 130445 130274 0 02:45 pts/1 00:00:00 grep --color=auto postgres psql连接数据库: ``` $ psql -d -psql (17.5) +psql (18.0) Type "help" for help. 
ivorysql=# @@ -254,8 +233,7 @@ TIP: Docker运行IvorySQL时,需要添加额外参数,参考:psql -d ivory 执行以下命令依次卸载: ``` -$ sudo dnf remove -y IvorySQL-4.5 -$ sudo rpm -e ivorysql-release-4.2-1.noarch +$ sudo dnf remove -y ivorysql5-5.0 ``` === docker安装的卸载 @@ -264,15 +242,15 @@ $ sudo rpm -e ivorysql-release-4.2-1.noarch ``` $ docker stop ivorysql $ docker rm ivorysql -$ docker rmi ivorysql/ivorysql:4.5-ubi8 +$ docker rmi ivorysql/ivorysql:5.0-ubi8 ``` === rpm安装的卸载 执行以下命令卸载并清理文件夹: ``` -$ sudo yum remove --disablerepo=* ivorysql4\* -$ sudo rm -rf /opt/IvorySQL-4.5 +$ sudo yum remove --disablerepo=* ivorysql5\* +$ sudo rm -rf /usr/ivory-5 ``` === 源码安装的卸载 @@ -281,13 +259,13 @@ $ sudo rm -rf /opt/IvorySQL-4.5 ``` $ sudo make uninstall $ make clean -$ sudo rm -rf /opt/IvorySQL-4.5 +$ sudo rm -rf /usr/ivory-5 ``` === deb安装的卸载 执行以下命令卸载数据库并清理文件夹: ``` -$ sudo dpkg -P IvorySQL-4.5 -$ sudo rm -rf /opt/IvorySQL-4.5 +$ sudo dpkg -P IvorySQL-5.0 +$ sudo rm -rf /usr/ivory-5 ``` diff --git a/CN/modules/ROOT/pages/master/4.2.adoc b/CN/modules/ROOT/pages/master/4.2.adoc index 70ac6bf..67a2ec8 100644 --- a/CN/modules/ROOT/pages/master/4.2.adoc +++ b/CN/modules/ROOT/pages/master/4.2.adoc @@ -47,7 +47,7 @@ host all all 0.0.0.0/0 trust host replication all 0.0.0.0/0 trust ``` [CAUTION] -示例中的pg_hba的配置,仅做为demo用来测试,这种配置会导致数据库密码失效,请根据环境实际情况进行配置 +示例中的pg_hba的配置,仅作为demo用来测试,这种配置会导致数据库密码失效,请根据环境实际情况进行配置 === 重启主节点数据库服务 ``` @@ -72,7 +72,7 @@ $ sudo systemctl stop firewalld === 搭建流复制 在备节点上执行以下命令,创建一个主节点的基础备份,即搭建流复制: ``` -$ sudo pg_basebackup -F p -P -X fetch -R -h -p -U ivorysql -D /usr/local/ivorysql/ivorysql-4/data +$ sudo pg_basebackup -F p -P -X fetch -R -h -p -U ivorysql -D /usr/local/ivorysql/ivorysql-5/data ``` - -h为主节点ip; - -p为主节点数据库端口号,默认为5432; @@ -85,9 +85,9 @@ $ sudo pg_basebackup -F p -P -X fetch -R -h -p -U iv 将以下配置写入~/.bash_profile文件: ``` -PATH=/usr/local/ivorysql/ivorysql-4/bin:$PATH +PATH=/usr/local/ivorysql/ivorysql-5/bin:$PATH export PATH -PGDATA=/usr/local/ivorysql/ivorysql-4/data +PGDATA=/usr/local/ivorysql/ivorysql-5/data export PGDATA ``` source该文件使环境变量生效: @@ -97,7 +97,7 @@ $ source ~/.bash_profile === 启动备节点数据库服务 ``` -$ pg_ctl -D /usr/local/ivorysql/ivorysql-4/data start +$ pg_ctl -D /usr/local/ivorysql/ivorysql-5/data start ``` == 集群的使用 @@ -118,7 +118,7 @@ ivorysql 6567 6139 0 21:54 ? 00:00:00 postgres: walreceiver streaming 在主节点上psql连接数据库,并查看集群状态: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# select * from pg_stat_replication; @@ -139,7 +139,7 @@ xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | 例如,在主节点创建一个新的数据库test,并在主节点进行查询: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# create database test; @@ -159,7 +159,7 @@ ivorysql=# \l 在备节点查询: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. 
ivorysql=# \l diff --git a/CN/modules/ROOT/pages/master/4.3.adoc b/CN/modules/ROOT/pages/master/4.3.adoc index e1a1723..54a0a5a 100644 --- a/CN/modules/ROOT/pages/master/4.3.adoc +++ b/CN/modules/ROOT/pages/master/4.3.adoc @@ -1638,7 +1638,7 @@ SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C"; ==== 标量子查询 -一个标量子查询是一种圆括号内的普通 `SELECT` 查询,它刚好返回一行一列(关于书写查询可见 http://www.postgresql.org/docs/17/queries.html[第 7 章])。`SELECT`查询被执行并且该单一返回值被使用在周围的值表达式中。将一个返回超过一行或一列的查询作为一个标量子查询使用是一种错误(但是如果在一次特定执行期间该子查询没有返回行则不是错误,该标量结果被当做为空)。该子查询可以从周围的查询中引用变量,这些变量在该子查询的任何一次计算中都将作为常量。对于其他涉及子查询的表达式还可见 http://www.postgresql.org/docs/17/functions-subquery.html[第 9.23 节]。 +一个标量子查询是一种圆括号内的普通 `SELECT` 查询,它刚好返回一行一列(关于书写查询可见 http://www.postgresql.org/docs/17/queries.html[第 7 章])。`SELECT`查询被执行并且该单一返回值被使用在周围的值表达式中。将一个返回超过一行或一列的查询作为一个标量子查询使用是一种错误(但是如果在一次特定执行期间该子查询没有返回行则不是错误,该标量结果被当作为空)。该子查询可以从周围的查询中引用变量,这些变量在该子查询的任何一次计算中都将作为常量。对于其他涉及子查询的表达式还可见 http://www.postgresql.org/docs/17/functions-subquery.html[第 9.23 节]。 例如,下列语句会寻找每个州中最大的城市人口: @@ -1971,10 +1971,6 @@ SELECT concat_lower_or_upper('Hello', 'World', uppercase => true); == Oracle兼容功能 -**详见:** - -- [GUC变量](https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/v4.5/15) - === 更改表 ==== 语法 @@ -1989,10 +1985,10 @@ action: | DROP [ COLUMN ] ( column_name [, ... ] ) add_coldef: - cloumn_name data_type + column_name data_type modify_coldef: - cloumn_name data_type alter_using + column_name data_type alter_using alter_using: USING expression @@ -2002,7 +1998,7 @@ alter_using: `name` 表名. -`cloumn_name` 列名. +`column_name` 列名. `data_type` 列类型. @@ -3549,7 +3545,7 @@ DETAIL: Key (b)=(11) already exists. 将新表附加到具有全局唯一索引的分区表时,系统将对所有现有分区进行重复检查。 如果在现有分区中发现与附加表中的元组匹配的重复项,则会引发错误并且附加失败。 -附加需要所有现有分区上的共享锁(sharedlock)。 如果其中一个分区正在进行并发 INSERT,则附加将等待它先完成。 这可以在未来的版本中改进 +附加需要所有现有分区上的共享锁(SHARE LOCK)。 如果其中一个分区正在进行并发 INSERT,则附加将等待它先完成。 这可以在未来的版本中改进 ==== 示例 diff --git a/CN/modules/ROOT/pages/master/4.4.adoc b/CN/modules/ROOT/pages/master/4.4.adoc index 6171dfb..2eb6f51 100644 --- a/CN/modules/ROOT/pages/master/4.4.adoc +++ b/CN/modules/ROOT/pages/master/4.4.adoc @@ -75,7 +75,7 @@ mv /usr/local/pgsql /usr/local/pgsql.old 8.最后利用新版本的psql命令还原数据: ``` -/usr/local/pqsql/bin/psql -d postgres -f outputfile +/usr/local/pgsql/bin/psql -d postgres -f outputfile ``` 为了减少停机时间,可以将新版本的IvorySQL安装到另一个目录,同时使用不同的端口启动服务。然后同时执行数据库的导出和导入: @@ -88,7 +88,61 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 === 利用pg_upgrade 工具进行升级 -pg_upgrade 工具可以支持IvorySQL跨版本的就地升级。 升级可以在数分钟内被执行,特别是使用--link模式时。它要求和上面的pg_dumpall相似的步骤,例如启动/停止 服务器、运行initdb。pg_upgrade https://www.postgresql.org/docs/current/pgupgrade.html[文档]概述了所需的步骤。 +pg_upgrade 工具是PostgreSQL 内置的跨版本升级工具,能够对数据库就地升级,不需要执行导出和导入操作。IvorySQL源自于PG,因此也能够使用pg_upgrade 工具进行大版本升级。 下面简要介绍一下CentOS8平台上如何使用 pg_upgrade 将IvorySQL升级到最新的5.0版本。 + +pg_upgrade 提供了升级前的兼容性检查(-c 或者 --check 选项)功能,可以发现插件、数据类型不兼容等问题。如果指定了--link 选项,新版本服务可以直接使用原有的数据库文件而不需要执行复制,通常可以在几分钟内完成升级操作。 + +常用的参数包括: + +* -b bindir,--old-bindir=bindir:旧的 IvorySQL 可执行文件目录; +* -B bindir,--new-bindir=bindir:新的 IvorySQL 可执行文件目录; +* -d configdir,--old-datadir=configdir:旧版本的数据目录; +* -D configdir,--new-datadir=configdir:新版本的数据目录; +* -c,--check:只检查升级兼容性,不更改任何数据; +* -k,--link:硬链接方式升级; + +升级准备: + +首先停止旧版本的IvorySQL4.6数据库: +``` +/usr/ivory-4/bin/pg_ctl -D ./data stop +``` +然后安装新版本的IvorySQL5.0数据库: +``` +dnf install -y ivorysql5-5.0 +``` +初始化新版IvorySQL5.0数据目录: +``` +/usr/ivory-5/bin/initdb -D ./data +``` +检查版本兼容性: +``` +/usr/ivory-5/bin/pg_upgrade --old-datadir=/home/ivorysql/test/4.6/data --new-datadir=/home/ivorysql/test/5.0/data --old-bindir=/usr/ivory-4/bin/ 
--new-bindir=/usr/ivory-5/bin/ --check +``` +最后出现 “Clusters are compatible” 表明两个版本之间的数据不存在兼容性问题,可以进行升级。 + +正式升级: +``` +/usr/ivory-5/bin/pg_upgrade --old-datadir=/home/ivorysql/test/4.6/data --new-datadir=/home/ivorysql/test/5.0/data --old-bindir=/usr/ivory-4/bin/ --new-bindir=/usr/ivory-5/bin/ +``` +看到 Upgrade Complete 说明升级已经顺利完成。 + +更新统计信息: + +pg_upgrade 会创建新的系统表,并重用旧的数据进行升级,统计信息并不会随升级过程迁移,所以在启用新版本之前,应该首先重新收集统计信息,避免没有统计信息导致错误的查询计划。 +启动新版本数据库 +``` +/usr/ivory-5/bin/pg_ctl -D ./data -l logfile start +``` +手动运行vacuum命令 +``` +vacuum --all --analyze-in-stage -h 127.0.0.1 -p 1521 +``` +升级后清理 +``` +rm -rf /home/ivorysql/test/4.6/data +``` +pg_upgrade https://www.postgresql.org/docs/current/pgupgrade.html[文档]概述了上述所需的步骤。 === 通过复制升级数据 @@ -99,7 +153,7 @@ pg_upgrade 工具可以支持IvorySQL跨版本的就地升级。 升级可以在 == 管理IvorySQL版本 -IvorySQL基于PostgreSQL开发,版本更新频率与PostgreSQL版本更新频率保持一致,每年更新一个大版本,每季度更新一个小版本。IvorySQL目前发布的版本有1.0到4.5,分别基于PostgreSQL 14.0到17.5进行开发,最新版本为IvorySQL 4.5,基于PostgreSQL 17.5进行开发。IvorySQL 的所有版本全部都做到了向下兼容。相关版本特性可以查看 https://www.ivorysql.org/zh-CN/releases-page[官网]。 +IvorySQL基于PostgreSQL开发,版本更新频率与PostgreSQL版本更新频率保持一致,每年更新一个大版本,每季度更新一个小版本。IvorySQL目前发布的版本有1.0到5.0,分别基于PostgreSQL 14.0到18.0进行开发,最新版本为IvorySQL 5.0,基于PostgreSQL 18.0进行开发。IvorySQL 的所有版本全部都做到了向下兼容。相关版本特性可以查看 https://www.ivorysql.org/zh-CN/releases-page[官网]。 == 管理IvorySQL数据库访问 @@ -933,7 +987,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous; QUERY PLAN -------------------------------------------------------------------​-------------------------------------------------------------------​------ - Sort (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1) + Sort (cost=717.34..718.09 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1) Sort Key: t1.fivethous Sort Method: quicksort Memory: 77kB -> Hash Join (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1) diff --git a/CN/modules/ROOT/pages/master/4.5.adoc b/CN/modules/ROOT/pages/master/4.5.adoc index 49b6cb6..bcb6ddd 100644 --- a/CN/modules/ROOT/pages/master/4.5.adoc +++ b/CN/modules/ROOT/pages/master/4.5.adoc @@ -94,7 +94,8 @@ DBI,Database Independent Interface,是 Perl 语言连接数据库的接口 ``` export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1 -# tar -zxvf DBD-Oracle-1.76.tar.gz # source /home/postgres/.bashrc +# tar -zxvf DBD-Oracle-1.76.tar.gz +# source /home/postgres/.bashrc # cd DBD-Oracle-1.76 # perl Makefile.PL # make && make install @@ -448,7 +449,7 @@ $ createdb orcl $ psql -psql (17.5) +psql (18.0) Type "help" for help. @@ -477,7 +478,7 @@ ivorysql=# ``` $ psql orcl -psql (17.5) +psql (18.0) Type "help" for help. 
diff --git a/CN/modules/ROOT/pages/master/4.6.1.adoc b/CN/modules/ROOT/pages/master/4.6.1.adoc new file mode 100644 index 0000000..1f91453 --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.6.1.adoc @@ -0,0 +1,243 @@ + +:sectnums: +:sectnumlevels: 5 + += k8s部署单机容器和高可用集群 + +== 单机容器 +进入k8s集群的master节点上,创建名为ivorysql的namespace +``` +[root@k8s-master ~]# kubectl create ns ivorysql +``` + +下载最新docker_library代码 +``` +[root@k8s-master ~]# git clone https://github.com/IvorySQL/docker_library.git +``` + +进入单机目录 +``` +[root@k8s-master ~]# cd docker_library/k8s-cluster/single +[root@k8s-master single]# vim statefulset.yaml #根据个人环境自行修改statefulset中的pvc信息及数据库密码 +``` + +使用statefulset.yaml创建一个单机pod +``` +[root@k8s-master single]# kubectl apply -f statefulset.yaml +service/ivorysql-svc created +statefulset.apps/ivorysql created +``` + +等待pod创建成功 +``` +[root@k8s-master single]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-0 0/1 ContainerCreating 0 47s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-svc NodePort 10.108.178.236 5432:32106/TCP,1521:31887/TCP 47s + +NAME READY AGE +statefulset.apps/ivorysql 0/1 47s +[root@k8s-master single]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-0 1/1 Running 0 2m39s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-svc NodePort 10.108.178.236 5432:32106/TCP,1521:31887/TCP 2m39s + +NAME READY AGE +statefulset.apps/ivorysql 1/1 2m39s +``` + +psql连接IvorySQL的PG端口 +``` +[root@k8s-master single]# psql -U ivorysql -p 32106 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# exit +``` + +psql连接IvorySQL的Oracle端口 +``` +[root@k8s-master single]# psql -U ivorysql -p 31887 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) +``` + +卸载单机容器 +``` +[root@k8s-master single]# kubectl delete -f statefulset.yaml +``` + +== 高可用集群 + +进入k8s集群的master节点上,创建名为ivorysql的namespace +``` +[root@k8s-master ~]# kubectl create ns ivorysql +``` + +下载最新docker_library代码 +``` +[root@k8s-master ~]# git clone https://github.com/IvorySQL/docker_library.git +``` + +进入高可用集群目录 +``` +[root@k8s-master ~]# cd docker_library/k8s-cluster/ha-cluster/helm_charts +[root@k8s-master single]# vim values.yaml #根据个人环境自行修改values.yaml中的pvc信息及集群规模等信息,数据库密码查看templates/secret.yaml并自行修改。 +``` + +使用 https://helm.sh/docs/intro/install/[Helm] 命令部署高可用集群 +``` +[root@k8s-master helm_charts]# helm install ivorysql-ha-cluster -n ivorysql . 
+NAME: ivorysql-ha-cluster +LAST DEPLOYED: Wed Sep 10 09:45:36 2025 +NAMESPACE: ivorysql +STATUS: deployed +REVISION: 1 +TEST SUITE: None +[root@k8s-master helm_charts]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-patroni-hac-0 1/1 Running 0 42s +pod/ivorysql-patroni-hac-1 0/1 Running 0 18s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-patroni-hac NodePort 10.96.119.203 5432:32391/TCP,1521:32477/TCP 42s +service/ivorysql-patroni-hac-config ClusterIP None 42s +service/ivorysql-patroni-hac-pods ClusterIP None 42s +service/ivorysql-patroni-hac-repl NodePort 10.100.122.0 5432:30111/TCP,1521:32654/TCP 42s + +NAME READY AGE +statefulset.apps/ivorysql-patroni-hac 1/3 42s +``` + +等待所有 Pod 进入“Running”(运行中)状态,即表示集群已部署成功。 +``` +[root@k8s-master helm_charts]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-patroni-hac-0 1/1 Running 0 88s +pod/ivorysql-patroni-hac-1 1/1 Running 0 64s +pod/ivorysql-patroni-hac-2 1/1 Running 0 41s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-patroni-hac NodePort 10.96.119.203 5432:32391/TCP,1521:32477/TCP 88s +service/ivorysql-patroni-hac-config ClusterIP None 88s +service/ivorysql-patroni-hac-pods ClusterIP None 88s +service/ivorysql-patroni-hac-repl NodePort 10.100.122.0 5432:30111/TCP,1521:32654/TCP 88s +NAME READY AGE +statefulset.apps/ivorysql-patroni-hac 3/3 88s +``` + +使用psql连接集群主节点的PG、Oracle端口 +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 32391 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + f +(1 row) + +ivorysql=# exit +``` +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 32477 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + f +(1 row) + +ivorysql=# +``` + +使用psql连接集群备节点的PG、Oracle端口 +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 30111 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + t +(1 row) + +ivorysql=# exit +``` +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 32654 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + t +(1 row) + +ivorysql=# +``` + +卸载高可用集群 +``` +[root@k8s-master helm_charts]# helm uninstall ivorysql-ha-cluster -n ivorysql +``` +删除PVC +``` +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-0 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-1 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-2 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-0 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-1 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-2 -n 
ivorysql +``` \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/4.6.2.adoc b/CN/modules/ROOT/pages/master/4.6.2.adoc new file mode 100644 index 0000000..f47a894 --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.6.2.adoc @@ -0,0 +1,2177 @@ +:sectnums: +:sectnumlevels: 5 += IvorySQL Operator部署IvorySQL + +== Operator安装 + +. Fork https://github.com/IvorySQL/ivory-operator[ivory-operator 仓库] 并克隆到本地: ++ +[source,bash,subs="attributes+"] +---- +YOUR_GITHUB_UN="" +git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/ivory-operator.git" +cd ivory-operator +---- + +. 执行以下命令完成安装: ++ +[source,bash] +---- +kubectl apply -k examples/kustomize/install/namespace +kubectl apply --server-side -k examples/kustomize/install/default +---- + +== 说明 + +在本教程中,我们将基于 `examples/kustomize/ivory` 目录中的示例进行构建。 + +当引用 YAML 清单中的嵌套对象时,我们将使用类似 `kubectl explain` 的 `.` 格式。例如,对于以下 YAML 文件: + +[source,yaml] +---- +spec: + hippos: + appetite: huge +---- + +我们会用 `spec.hippos.appetite` 来表示最深层的元素。 + +`kubectl explain` 是一个非常有用的命令。你可以使用它来查看 `ivorycluster.ivory-operator.ivorysql.org` 自定义资源定义(CRD)的结构: + +[source,bash] +---- +kubectl explain ivorycluster +---- + +== 创建一个 Ivory 集群 + +[#create] +=== 创建 + +创建一个 Ivory 集群非常简单。使用 `examples/kustomize/ivory` 目录中的示例,只需运行: + +[source,bash] +---- +kubectl apply -k examples/kustomize/ivory +---- + +IVYO 将在 `ivory-operator` 命名空间中创建一个名为 `hippo` 的简单 Ivory 集群。你可以通过以下命令跟踪 Ivory 集群的状态: + +[source,bash] +---- +kubectl -n ivory-operator describe ivoryclusters.ivory-operator.ivorysql.org hippo +---- + +你也可以使用以下命令跟踪 Ivory Pod 的状态: + +[source,bash] +---- +kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance +---- + +[#what-just-happened] +==== 发生了什么? + +IVYO 根据 `examples/kustomize/ivory` 目录中的 Kustomize 清单信息创建了 Ivory 集群。让我们通过查看 `examples/kustomize/ivory/ivory.yaml` 文件来更好地理解发生了什么: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +当我们运行 `kubectl apply` 命令时,实际上是在 Kubernetes 中创建了一个 `ivorycluster` 自定义资源。IVYO 检测到新增的 `ivorycluster` 资源后,开始创建在 Kubernetes 中运行 Ivory 所需的所有对象! 
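+
+如果想直观地看到这些对象,可以借助文中已经出现过的集群标签选择器把它们一并列出(以下命令仅为示意,实际创建的对象种类以你的环境为准):
+
+[source,bash]
+----
+# 列出 IVYO 为 hippo 集群创建的 StatefulSet、Pod 与 Service
+kubectl -n ivory-operator get statefulsets,pods,services \
+  --selector=ivory-operator.ivorysql.org/cluster=hippo
+----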
+ +还发生了什么?IVYO 从 `metadata.name` 读取值,为 Ivory 集群命名为 `hippo`。此外,IVYO 通过查看 `spec.image` 和 `spec.backups.pgbackrest.image` 的值,分别确定了 Ivory 和 pgBackRest 使用的容器镜像。`spec.postgresVersion` 的值也很重要,它帮助 IVYO 跟踪你使用的 Ivory 主版本。 + +IVYO 通过清单中的 `spec.instances` 部分知道要创建多少个 Ivory 实例。虽然 `name` 是可选的,但我们选择将其命名为 `instance1`。我们也可以在集群初始化期间创建多个副本和实例,但稍后在我们讨论 https://github.com/IvonySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[如何扩展并创建高可用 Ivory 集群] 时会详细介绍。 + +`ivorycluster` 自定义资源中非常重要的一部分是 `dataVolumeClaimSpec` 部分。它描述了 Ivory 实例将使用的存储,建模自 https://kubernetes.io/docs/concepts/storage/persistent-volumes/[Persistent Volume Claim]。如果你没有提供 `spec.instances.dataVolumeClaimSpec.storageClassName`,则将使用 Kubernetes 环境中的默认存储类。 + +作为创建 Ivory 集群的一部分,我们还指定了备份存档的信息。IVYO 使用 https://pgbackrest.org/[pgBackRest],这是一个开源的备份与恢复工具,专为处理 TB 级备份而设计。在集群初始化期间,我们可以指定备份和归档(https://www.postgresql.org/docs/current/wal-intro.html[预写日志或 WAL])的存储位置。我们将在本教程的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md[灾难恢复] 部分更深入地讨论 `ivorycluster` 规范的这一部分,并了解如何将备份存储在 Amazon S3、Google GCS 和 Azure Blob Storage 中。 + +[#troubleshooting] +=== 故障排查 + +[#pods-stuck-pending] +==== IvorySQL / pgBackRest Pod 处于 `Pending` 状态 + +最常见的原因是 PVC 未绑定。请确保你在任何 `volumeClaimSpec` 中正确设置了存储选项。你可以随时更新设置并使用 `kubectl apply` 重新应用更改。 + +还要确保你有足够的持久卷可用:你的 Kubernetes 管理员可能需要配置更多持久卷。 + +如果你使用的是 OpenShift,可能需要将 `spec.openshift` 设置为 `true`。 + +== 连接到 Ivory 集群 + +创建 Ivory 集群是一回事,连接到它又是另一回事。让我们看看 IVYO 如何让连接 Ivory 集群变得简单! + +[#background] +=== 背景:Service、Secret 与 TLS + +IVYO 会创建一系列 Kubernetes https://kubernetes.io/docs/concepts/services-networking/service/[Service],为访问 Ivory 数据库提供稳定的端点。这些端点让应用程序能够始终如一地连接到数据。要查看可用的 Service,可执行: + +[source,bash] +---- +kubectl -n ivory-operator get svc --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +输出示例: + +.... +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha ClusterIP 10.103.73.92 5432/TCP 3h14m +hippo-ha-config ClusterIP None 3h14m +hippo-pods ClusterIP None 3h14m +hippo-primary ClusterIP None 5432/TCP 3h14m +hippo-replicas ClusterIP 10.98.110.215 5432/TCP 3h14m +.... + +大多数 Service 用于集群内部管理,无需关注。连接数据库时,只需关注名为 `hippo-primary` 的 Service。得益于 IVYO,你甚至无需手动指定它——这些信息已被写入 Secret! 
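+
+下一段会详细介绍这个 Secret 的字段。这里先给出一个读取连接信息的示意(假设集群名为 hippo,Secret 名为 hippo-pguser-hippo,与下文一致;Secret 中的值均为 base64 编码)。注意 host 指向集群内部的 Service,只有在能访问集群网络的环境中(例如先执行 kubectl port-forward,或在集群内的 Pod 中)才能直接连接:
+
+[source,bash]
+----
+# 读取数据库用户名与主机名(需 base64 解码)
+kubectl -n ivory-operator get secret hippo-pguser-hippo -o jsonpath='{.data.user}' | base64 -d; echo
+kubectl -n ivory-operator get secret hippo-pguser-hippo -o jsonpath='{.data.host}' | base64 -d; echo
+
+# 取出完整的连接 URI,在可达数据库网络的环境中可直接交给 psql 使用
+PGURI=$(kubectl -n ivory-operator get secret hippo-pguser-hippo -o jsonpath='{.data.uri}' | base64 -d)
+psql "$PGURI"
+----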
+ +集群初始化时,IVYO 会引导创建一个数据库和用户,供应用程序使用。相关信息保存在名为 `-pguser-` 的 Secret 中。对于 `hippo` 集群,该 Secret 名为 `hippo-pguser-hippo`,包含以下键值: + +- `user`:用户账户名 +- `password`:用户密码 +- `dbname`:用户默认可访问的数据库名 +- `host`:数据库主机名(指向主实例的 Service) +- `port`:数据库监听端口 +- `uri`: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING[PostgreSQL 连接 URI],含完整登录信息 +- `jdbc-uri`: https://jdbc.postgresql.org/documentation/use/[PostgreSQL JDBC 连接 URI],供 JDBC 驱动使用 + +所有连接均通过 TLS 进行。IVYO 自带证书中心(CA),支持使用 Ivory 的 `verify-full` SSL 模式,防止窃听与中间人攻击。你也可以稍后使用自定义 CA,详见 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md[自定义集群] 章节。 + +[#modify-service] +==== 修改 Service 类型、NodePort 值与元数据 + +默认情况下,IVYO 部署的 Service 类型为 `ClusterIP`。根据暴露数据库的方式,你可能需要更改为其他 https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types[Service 类型] 或指定 https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport[NodePort 值]。 + +可通过以下字段调整 IVYO 管理的 Service: + +- `spec.service`:控制连接主库的 Service +- `spec.userInterface.pgAdmin.service`:控制 pgAdmin 管理工具的 Service + +例如,将主库 Service 改为 `NodePort` 并指定端口、注解与标签: + +[source,yaml] +---- +spec: + service: + metadata: + annotations: + my-annotation: value1 + labels: + my-label: value2 + type: NodePort + nodePort: 32000 +---- + +重新应用后,再次查看 Service: + +[source,bash] +---- +kubectl -n ivory-operator get svc --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +输出示例: + +.... +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha NodePort 10.105.57.191 5432:32000/TCP 48s +hippo-ha-config ClusterIP None 48s +hippo-pods ClusterIP None 48s +hippo-primary ClusterIP None 5432/TCP 48s +hippo-replicas ClusterIP 10.106.18.99 5432/TCP 48s +.... + +查看 `hippo-ha` 的详细信息,顶部将显示自定义注解与标签已生效: + +.... +Name: hippo-ha +Namespace: ivory-operator +Labels: my-label=value2 + ivory-operator.ivorysql.org/cluster=hippo + ivory-operator.ivorysql.org/patroni=hippo-ha +Annotations: my-annotation: value1 +.... 
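+
+上面的摘要可以通过类似下面的命令获得(仅为示意,具体输出取决于你的环境):
+
+[source,bash]
+----
+# 查看主库 Service 的详细信息,确认自定义注解与标签已生效
+kubectl -n ivory-operator describe svc hippo-ha
+----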
+ +NOTE: 使用默认 `ClusterIP` 类型时禁止设置 `nodePort`;该值必须在合法范围内且未被占用。此处提供的注解与标签优先级最高。若通过外部暴露 Service 并依赖 TLS 验证,需使用 IVYO 的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[自定义 TLS] 功能。 + +[#connect-app] +=== 连接应用程序 + +本教程以 https://www.keycloak.org/[Keycloak](开源身份管理应用)为例。Keycloak 可部署在 Kubernetes 上,并使用 Ivory 作为数据库。以下示例清单将 Keycloak 连接到已运行的 `hippo` 集群: + +[source,bash] +---- +kubectl apply --filename=- <}}" + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 # <── 新增副本 + dataVolumeClaimSpec: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: "{{< param imagePGBackrest >}}" + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: 1Gi +---- + +应用后稍等片刻,新副本自动初始化。通过以下命令可实时查看实例 Pod: + +[source,bash] +---- +kubectl -n ivory-operator get pods \ + -l ivory-operator.ivorysql.org/cluster=hippo,\ + ivory-operator.ivorysql.org/instance-set +---- + +[#test-ha] +=== 验证集群自愈能力 + +[#test-delete-svc] +==== 测试 1 —— 删除主库 Service + +上一篇《连接集群》提到,应用默认通过 `hippo-primary` Service 读写。我们人为删除它: + +[source,bash] +---- +kubectl -n ivory-operator delete svc hippo-primary +---- + +立刻再查询 Service 列表: + +[source,bash] +---- +kubectl -n ivory-operator get svc \ + -l ivory-operator.ivorysql.org/cluster=hippo +---- + +可见 `hippo-primary` 已被 IVYO **秒级重建**。多数应用凭借重连逻辑几乎无感知。 + +[#test-delete-sts] +==== 测试 2 —— 删除主库 StatefulSet + +首先找到当前主库 Pod 对应的 StatefulSet 名字: + +[source,bash] +---- +PRIMARY_STS=$(kubectl -n ivory-operator get sts \ + -l ivory-operator.ivorysql.org/cluster=hippo,\ + ivory-operator.ivorysql.org/role=master \ + -o jsonpath='{.items[0].metadata.name}') +echo $PRIMARY_STS +---- + +假设输出为 `hippo-instance1-zj5s`,直接删除: + +[source,bash] +---- +kubectl -n ivory-operator delete sts "$PRIMARY_STS" +---- + +再次查看 StatefulSet: + +[source,bash] +---- +kubectl -n ivory-operator get sts \ + -l ivory-operator.ivorysql.org/cluster=hippo +---- + +IVYO 会立即重建被删对象,并自动将原副本重新加入集群。同时,另一实例已被提升为新主: + +[source,bash] +---- +kubectl -n ivory-operator get pods \ + -l ivory-operator.ivorysql.org/role=master \ + -o jsonpath='{.items[0].metadata.labels.ivory-operator\.ivorysql\.org/instance}' +---- + +即使 IVYO 进程短暂离线,Patroni 仍能独立完成故障切换,确保应用读写不中断。 + +[#sync-repl] +=== 同步复制(Synchronous Replication) + +IvorySQL 支持同步复制,可进一步降低事务丢失风险。只需在集群里增加: + +[source,yaml] +---- +spec: + patroni: + dynamicConfiguration: + synchronous_mode: true +---- + +如需强制所有提交都同步到至少一个副本,可再加: + +[source,yaml] +---- + synchronous_mode_strict: true +---- + +NOTE: Patroni 默认“可用性优先”,当同步副本全部失效时会退化为异步;若业务要求**绝对同步**,请启用 `synchronous_mode_strict`,此时无可用同步副本将拒绝写入。 + +[#affinity] +=== 亲和性(Affinity)与反亲和性 + +[#pod-antiaffinity] +==== Pod 反亲和 + +- `preferredDuringSchedulingIgnoredDuringExecution` —— 尽力分散,资源不足时允许同节点 +- `requiredDuringSchedulingIgnoredDuringExecution` —— 强制分散,找不到空闲节点则 Pending + +示例 —— 强制让同一 `instance-set` 的 Pod 落在不同节点: + +[source,yaml] +---- + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: hippo + ivory-operator.ivorysql.org/instance-set: instance1 +---- + +[#node-affinity] +==== 节点亲和 + +将数据库实例固定在带 `workload-role=db` 标签的节点: + +[source,yaml] +---- + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: workload-role + operator: In + values: ["db"] +---- + 
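+上面的节点亲和示例假设节点已经带有 workload-role=db 标签;如果还没有,可以先手动为目标节点打标(示意命令,<node-name> 为占位符,请替换为实际节点名):
+
+[source,bash]
+----
+# 为节点添加示例中引用的标签
+kubectl label node <node-name> workload-role=db
+
+# 确认标签已生效
+kubectl get nodes -l workload-role=db
+----
+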
+[#topology-spread] +=== Pod 拓扑分布约束(Topology Spread Constraints) + +相比反亲和的“0 或 1”限制,拓扑分布约束可按比例打散,粒度更细。字段模板: + +[source,yaml] +---- +topologySpreadConstraints: +- maxSkew: <整数> + topologyKey: <标签键> + whenUnsatisfiable: + labelSelector: <对象> +---- + +示例 —— 5 个实例 Pod 在 3 节点间尽量均衡: + +[source,yaml] +---- + instances: + - name: instance1 + replicas: 5 + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/instance-set: instance1 +---- + +备份仓库主机(repo-host)也可同理配置,确保多集群场景下仓库 Pod 分散。 + +== 在线调整 Ivory 集群规格 + +业务蒸蒸日上,流量激增,需要给 Ivory 集群扩容,却又担心 resize 造成停机? +IVYO 提供**滚动升级**机制,能在**零感知或毫秒级中断**内完成 CPU、内存、磁盘等所有规格的在线调整。 +继续阅读前,请确保已按上一章《高可用》部署了 **HA 集群**(至少 2 副本)。 + +[#resize-cpu-memory] +=== 垂直调整 CPU / 内存 + +IVYO 把资源声明分散在多处,保持统一语义(与 Kubernetes 原生 `resources` 字段一致),并支持 QoS 类别设置: + +`spec.instances.resources` +  └ Ivory 主容器、init 容器、数据迁移 Job 的 CPU / 内存 + +`spec.instances.sidecars.replicaCertCopy.resources` +  └ 副本证书复制 sidecar + +`spec.backups.pgbackrest.repoHost.resources` +  └ pgBackRest 仓库主机及对应 init / 迁移 Job + +`spec.backups.pgbackrest.sidecars.*.resources` +`spec.backups.pgbackrest.jobs.resources` +`spec.backups.pgbackrest.restore.resources` +`spec.dataSource.ivorycluster.resources` + +示例:把 `hippo` 每个实例上限调整为 2 CPU、4 GiB 内存 + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + instances: + - name: instance1 + replicas: 2 + resources: # <── 新增或修改 + limits: + cpu: "2" + memory: 4Gi + dataVolumeClaimSpec: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: 1Gi +---- + +[source,bash] +kubectl apply -k examples/kustomize/ivory +---- + +实时观察滚动过程(逐 Pod 重建): + +[source,bash] +watch kubectl -n ivory-operator get pods \ + -l ivory-operator.ivorysql.org/cluster=hippo,\ + ivory-operator.ivorysql.org/instance \ + -o custom-columns=NAME:.metadata.name,ROLE:.metadata.labels.ivory-operator\.ivorysql\.org/role,PHASE:.status.phase +---- + +流程解析: +1. 先升级所有 **副本** 实例 → 新 Pod 就绪后旧 Pod 才删除 +2. 执行**受控主从切换**(switchover)→ 应用仅感受到毫秒级重连 +3. 最后升级原主库 → 再次选主完成 + +[#resize-pvc] +=== 在线扩容 PVC(磁盘) + +[#pvc-expansion-supported] +==== 场景 A – StorageClass 允许扩容 + +要求: +- 底层 StorageClass 的 `allowVolumeExpansion=true` +- 只能**增**不能减 + +需要调大的字段: + +- `spec.instances.dataVolumeClaimSpec.resources.requests.storage` (数据目录) +- `spec.backups.pgbackrest.repos[*].volume.volumeClaimSpec...` (备份仓库) + +示例:数据盘 1 GiB → 10 GiB,备份盘 1 GiB → 20 GiB + +[source,yaml] +---- +spec: + instances: + - name: instance1 + dataVolumeClaimSpec: + resources: + requests: + storage: 10Gi # 1→10 + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + resources: + requests: + storage: 20Gi # 1→20 +---- + +[source,bash] +kubectl apply -k examples/kustomize/ivory +---- + +IVYO 会按 **副本优先、主库最后** 的顺序触发底层 `pvc.spec.resources.requests.storage` 修改,Kubelet 与存储插件完成文件系统在线扩容,**Pod 无需重建**,业务无感知。 + +[#pvc-expansion-unsupported] +==== 场景 B – StorageClass **禁止** 扩容 + +部分公有云早期 StorageClass 或本地盘 CSI 驱动未开启扩容,仍可通过 **“新增大容量实例集 → 切换 → 删除老实例集”** 完成“曲线”扩容。 + +步骤示例: + +1. 保留原 `instance1`(1 GiB),新增 `instance2`(10 GiB) + +[source,yaml] +---- +spec: + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + resources: + requests: + storage: 1Gi + - name: instance2 # 新实例集 + replicas: 2 + dataVolumeClaimSpec: + resources: + requests: + storage: 10Gi +---- + +2. 等待 `instance2` 副本同步追上主库 +3. 
提交仅含 `instance2` 的清单,IVYO 将自动: + - 把 `instance2` 某一副本提升为新主 + - 删除 `instance1` 所有 Pod & PVC + - 完成“数据迁移” + +结果:业务未中断,磁盘已换成 10 GiB,**老 PVC 被释放,费用停止计费**。 +反向操作即可“缩容”磁盘(先建小盘实例集 → 切换 → 删大盘)。 + +[#troubleshooting] +=== 常见问题 + +[#pod-unschedulable] +==== Pod 无法调度 + +- 节点剩余资源不满足 `requests` → 扩容节点或降低 requests +- PVC 申请过大 / StorageClass 不存在 → 检查存储类及配额 + +[#pvc-not-expand] +==== PVC 大小未变 + +确认 StorageClass: + +[source,bash] +kubectl get sc -o custom-columns=NAME:.metadata.name,ALLOW Expansion:.allowVolumeExpansion +---- + +若返回 `false` 或空值,请: + +- 换用支持扩容的 StorageClass,或 +- 使用上文“场景 B”实例集替换方案 + +== 自定义 Ivory 配置 + +管理 Ivory 集群中多个实例的诀窍之一是确保所有配置更改都能传播到每个实例。这正是 IVYO 的用武之地:当您为集群进行 Ivory 配置更改时,IVYO 会将其应用到所有 Ivory 实例。 + +例如,在上一步中,我们分别添加了 CPU 和内存限制为 `2.0` 和 `4Gi`。让我们调整一些 Ivory 设置以更好地利用我们的新资源。我们可以在 `spec.patroni.dynamicConfiguration` 部分中进行此操作。以下是一个更新后的示例清单,其中调整了几个设置: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + patroni: + dynamicConfiguration: + postgresql: + parameters: + max_parallel_workers: 2 + max_worker_processes: 2 + shared_buffers: 1GB + work_mem: 2MB +---- + +特别是,我们在 `spec` 中添加了以下内容: + +---- +patroni: + dynamicConfiguration: + postgresql: + parameters: + max_parallel_workers: 2 + max_worker_processes: 2 + shared_buffers: 1GB + work_mem: 2MB +---- + +使用以下命令将这些更新应用到您的 Ivory 集群: + +---- +kubectl apply -k examples/kustomize/ivory +---- + +IVYO 将应用这些设置,并在必要时重新启动每个 Ivory 实例。您可以使用 Ivory 的 `SHOW` 命令验证更改是否已生效,例如: + +---- +SHOW work_mem; +---- + +应该会产生类似以下的结果: + +---- + work_mem +---------- + 2MB +---- + +=== 自定义 TLS + +IVYO 中的所有连接都使用 TLS 加密组件之间的通信。IVYO 设置了一个 PKI 和证书颁发机构 (CA),允许您创建可验证的端点。但是,您可能希望根据组织要求引入不同的 TLS 基础设施。好消息是:IVYO 允许您这样做! 
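+
+如果只是想在测试环境里先演练一下自定义 TLS,而手头还没有现成的证书,可以用 openssl 生成一套自签名证书备用(以下命令仅为示意,文件名沿用下文示例中的 ca.crt、hippo.key、hippo.crt,生产环境请使用组织内正式签发的证书;按照下文的要求,服务器证书的 CN 需与主服务名 hippo-primary 匹配):
+
+----
+# 生成测试用 CA(仅示意)
+openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
+  -keyout ca.key -out ca.crt -subj "/CN=ivory-demo-ca"
+
+# 生成服务器私钥与证书请求,CN 设置为主服务名
+openssl req -new -newkey rsa:2048 -nodes \
+  -keyout hippo.key -out hippo.csr -subj "/CN=hippo-primary"
+
+# 用测试 CA 签发服务器证书
+openssl x509 -req -in hippo.csr -CA ca.crt -CAkey ca.key \
+  -CAcreateserial -out hippo.crt -days 365
+----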
+ +==== 如何自定义 TLS + +IVYO 有几个不同的 TLS 端点可以自定义,包括 Ivory 集群的端点以及控制 Ivory 实例之间如何进行身份验证的端点。让我们看看如何通过定义以下内容来自定义 TLS: + +* 一个 `spec.customTLSSecret`,用于标识集群并加密通信;以及 +* 一个 `spec.customReplicationTLSSecret`,用于复制身份验证。 + +要自定义 Ivory 集群的 TLS,您需要在 Ivory 集群的命名空间中创建两个 Secret。其中一个 Secret 将是 `customTLSSecret`,另一个将是 `customReplicationTLSSecret`。这两个 Secret 都包含要使用的 TLS 密钥(`tls.key`)、TLS 证书(`tls.crt`)和 CA 证书(`ca.crt`)。 + +注意:如果提供了 `spec.customTLSSecret`,则**必须**也提供 `spec.customReplicationTLSSecret`,并且两者都必须包含相同的 `ca.crt`。 + +自定义 TLS 和自定义复制 TLS Secret 应包含以下字段(如果您无法控制 Secret 的 `data` 中的字段名称,请参见下面的解决方法): + +---- +data: + ca.crt: + tls.crt: + tls.key: +---- + +例如,如果您本地计算机上存储有名为 `ca.crt`、`hippo.key` 和 `hippo.crt` 的文件,您可以运行以下命令从这些文件创建 Secret: + +---- +kubectl create secret generic -n ivory-operator hippo-cluster.tls \ + --from-file=ca.crt=ca.crt \ + --from-file=tls.key=hippo.key \ + --from-file=tls.crt=hippo.crt +---- + +创建 Secret 后,您可以在 `ivorycluster.ivory-operator.ivorysql.org` 自定义资源中指定自定义 TLS Secret。例如,如果您创建了 `hippo-cluster.tls` Secret 和 `hippo-replication.tls` Secret,您可以将它们添加到您的 Ivory 集群中: + +---- +spec: + customTLSSecret: + name: hippo-cluster.tls + customReplicationTLSSecret: + name: hippo-replication.tls +---- + +如果您无法控制 Secret 中的键值对,您可以创建一个映射来告诉 Ivory Operator 哪个键保存了预期的值。它看起来类似于以下内容: + +---- +spec: + customTLSSecret: + name: hippo.tls + items: + - key: + path: tls.crt + - key: + path: tls.key + - key: + path: ca.crt +---- + +例如,如果 `hippo.tls` Secret 中的 `tls.crt` 位于名为 `hippo-tls.crt` 的键中,`tls.key` 位于名为 `hippo-tls.key` 的键中,`ca.crt` 位于名为 `hippo-ca.crt` 的键中,那么您的映射将如下所示: + +---- +spec: + customTLSSecret: + name: hippo.tls + items: + - key: hippo-tls.crt + path: tls.crt + - key: hippo-tls.key + path: tls.key + - key: hippo-ca.crt + path: ca.crt +---- + +注意:尽管自定义 TLS 和自定义复制 TLS Secret 共享相同的 `ca.crt`,但它们不共享相同的 `tls.crt`: + +* 您的 `spec.customTLSSecret` TLS 证书应具有与主服务名称匹配的通用名称 (CN) 设置。这是集群名称后缀为 `-primary` 的名称。例如,对于我们的 `hippo` 集群,这将是 `hippo-primary`。 +* 您的 `spec.customReplicationTLSSecret` TLS 证书应具有与预设复制用户 `_ivoryrepl` 匹配的通用名称 (CN) 设置。 + +与其他更改一样,您可以使用 `kubectl apply` 推出 TLS 自定义设置。 + +=== 标签 + +有几种方法可以将您自己的自定义 Kubernetes https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[标签] 添加到您的 Ivory 集群。 + +- 集群:您可以通过编辑自定义资源的 `spec.metadata.labels` 部分将标签应用于集群中的任何 IVYO 托管对象。 +- Ivory:您可以通过编辑 `spec.instances.metadata.labels` 将标签应用于 Ivory 实例集及其对象。 +- pgBackRest:您可以通过编辑 `ivoryclusters.spec.backups.pgbackrest.metadata.labels` 将标签应用于 pgBackRest 及其对象。 + +=== 注解 + +有几种方法可以将您自己的自定义 Kubernetes https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[注解] 添加到您的 Ivory 集群。 + +- 集群:您可以通过编辑自定义资源的 `spec.metadata.annotations` 部分将注解应用于集群中的任何 IVYO 托管对象。 +- Ivory:您可以通过编辑 `spec.instances.metadata.annotations` 将注解应用于 Ivory 实例集及其对象。 +- pgBackRest:您可以通过编辑 `spec.backups.pgbackrest.metadata.annotations` 将注解应用于 pgBackRest 及其对象。 + +=== Pod 优先级类 + +IVYO 允许您使用 https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/[pod 优先级类] 通过设置 Ivory 集群上的 `priorityClassName` 字段来指示 pod 的相对重要性。这可以通过以下方式完成: + +- 实例:优先级是按实例集定义的,并应用于该实例集中的所有 Pod,方法是编辑自定义资源的 `spec.instances.priorityClassName` 部分。 +- 专用仓库主机:在规范的 repoHost 部分下定义的优先级通过编辑自定义资源的 `spec.backups.pgbackrest.repoHost.priorityClassName` 部分应用于专用仓库主机。 +- 备份(手动和计划):优先级在 `spec.backups.pgbackrest.jobs.priorityClassName` 部分下定义,并将该优先级应用于所有 pgBackRest 备份作业(手动和计划)。 +- 还原(数据源或就地):通过编辑自定义资源的 `spec.dataSource.ivorycluster.priorityClassName` 部分为“数据源”还原或就地还原定义优先级。 +- 数据迁移:规范中第一个实例集(数组位置 0)定义的优先级用于 PGDATA 和 WAL 迁移作业。pgBackRest 仓库迁移作业将使用应用于 repoHost 的优先级类。 + +=== 独立的 WAL PVC + 
+IvorySQL 通过将更改存储在其https://www.postgresql.org/docs/current/wal-intro.html[预写日志 (WAL)] 中来提交事务。由于访问和使用 WAL 文件的方式通常与数据文件不同,并且在高性能情况下,可能需要将 WAL 文件放在单独的存储卷上。使用 IVYO,可以通过在您的 ivorycluster 规范中为您所需的实例添加 `walVolumeClaimSpec` 块来实现,无论是在创建集群时还是之后的任何时间: + +---- +spec: + instances: + - name: instance + walVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +稍后可以通过从实例中移除 `walVolumeClaimSpec` 部分来移除此卷。请注意,在更改 WAL 目录时,会小心避免丢失任何 WAL 文件。只有在先前配置的卷上不再有任何 WAL 文件时,IVYO 才会删除 PVC。 + +=== 数据库初始化 SQL + +IVYO 可以在集群创建和初始化过程中为您运行 SQL。IVYO 使用 psql 客户端运行 SQL,因此您可以使用元命令连接到不同的数据库、更改错误处理或设置和使用变量。其功能在 https://www.postgresql.org/docs/current/app-psql.html[psql 文档] 中有所描述。 + +==== 初始化 SQL ConfigMap + +Ivory 集群规范接受对包含您的 init SQL 文件的 ConfigMap 的引用。更新您的集群规范以包括 ConfigMap 名称 `spec.databaseInitSQL.name` 和您的 SQL 文件的数据键 `spec.databaseInitSQL.key`。例如,如果您使用以下命令创建 ConfigMap: + +---- +kubectl -n ivory-operator create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql +---- + +您可以将以下部分添加到您的 ivorycluster 规范中: + +---- +spec: + databaseInitSQL: + key: init.sql + name: hippo-init-sql +---- + +[NOTE] +==== +ConfigMap 必须与您的 Ivory 集群位于同一命名空间中。 +==== + +在您将 ConfigMap 引用添加到您的规范后,使用 `kubectl apply -k examples/kustomize/ivory` 应用更改。IVYO 将创建您的 `hippo` 集群,并在集群启动后运行您的初始化 SQL。您可以通过检查 Ivory 集群上的 `databaseInitSQL` 状态来验证您的 SQL 是否已运行。在状态设置期间,您的 init SQL 不会再次运行。您可以使用 `kubectl describe` 命令检查集群状态: + +---- +kubectl -n ivory-operator describe ivoryclusters.ivory-operator.ivorysql.org hippo +---- + +[WARNING] +==== +在某些情况下,由于 Kubernetes 处理 ivorycluster 状态的方式,IVYO 可能会多次运行您的 SQL 命令。请确保您在 init SQL 中定义的命令是幂等的。 +==== + +现在 `databaseInitSQL` 已在您的集群状态中定义,请验证数据库对象是否已按预期创建。验证后,我们建议从您的规范中移除 `spec.databaseInitSQL` 字段。从规范中移除该字段也将从集群状态中移除 `databaseInitSQL`。 + +==== PSQL 用法 +IVYO 使用 psql 交互式终端在您的数据库中执行 SQL 语句。语句使用标准输入和文件名标志传递(例如 `psql -f -`)。 + +SQL 语句以超级用户身份在默认维护数据库中执行。这意味着您可以完全控制创建数据库对象、扩展或运行您可能需要的任何 SQL 语句。 + +===== 与用户和数据库管理集成 + +如果您正在创建用户或数据库,请参阅 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/user-management.md[用户/数据库管理] 文档。通过规范的用户管理部分创建的数据库可以在您的初始化 sql 中引用。例如,如果定义了数据库 `zoo`: + +---- +spec: + users: + - name: hippo + databases: + - "zoo" +---- + +您可以通过将以下 `psql` 元命令添加到您的 SQL 来连接到 `zoo`: + +---- +\c zoo +create table t_zoo as select s, md5(random()::text) from generate_Series(1,5) s; +---- + +===== 事务支持 + +默认情况下,`psql` 会在每个 SQL 命令完成时提交它。要将多个命令组合成一个单独的 https://www.postgresql.org/docs/current/tutorial-transactions.html[事务],请使用 https://www.postgresql.org/docs/current/sql-begin.html[`BEGIN`] 和 https://www.postgresql.org/docs/current/sql-commit.html[`COMMIT`] 命令。 + +---- +BEGIN; +create table t_random as select s, md5(random()::text) from generate_Series(1,5) s; +COMMIT; +---- + +===== PSQL 退出代码和数据库 Init SQL 状态 + +`psql` 的退出代码将决定何时设置 `databaseInitSQL` 状态。当 `psql` 返回 `0` 时,状态将被设置,并且不会再次运行 SQL。当 `psql` 返回错误退出代码时,状态将不会被设置。IVYO 将继续尝试执行 SQL,作为其协调循环的一部分,直到 `psql` 正常返回。如果 `psql` 以失败退出,您将需要编辑 ConfigMap 中的文件,以确保您的 SQL 语句将导致成功的 `psql` 返回。对 ConfigMap 进行实时更改的最简单方法是使用以下 `kubectl edit` 命令: + +---- +kubectl -n edit configmap hippo-init-sql +---- + +请务必将所有更改传回您的本地文件。另一个选项是在本地文件中进行更改,并使用 `kubectl --dry-run` 创建模板,并将输出通过管道传输到 `kubectl apply`: + +---- +kubectl create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql --dry-run=client -o yaml | kubectl apply -f - +---- + +[TIP] +==== +如果您编辑了 ConfigMap 但更改没有显示出来,您可能正在等待 IVYO 协调您的集群。一段时间后,IVYO 将自动协调集群,或者您可以通过对集群应用任何更改来触发协调(例如,使用 `kubectl apply -k examples/kustomize/ivory`)。 +==== + +为了确保 `psql` 在您的 SQL 命令失败时返回失败退出代码,请在您的 SQL 文件中设置 `ON_ERROR_STOP` 
https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES[变量]: + +---- +\set ON_ERROR_STOP +\echo Any error will lead to exit code 3 +create table t_random as select s, md5(random()::text) from generate_Series(1,5) s; +---- + +== 用户/数据库管理 +IVYO 内置了一些即用型便利功能,用于管理 Ivory 集群中的用户和数据库。然而,您可能有需要创建额外用户、调整用户权限或向集群添加额外数据库的需求。 + +有关 IVYO 中用户和数据库管理工作原理的详细信息,请参阅架构指南中的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/user-management.md[用户管理] 部分。 + +=== 创建新用户 + +您可以通过在 `ivorycluster` 自定义资源中添加以下片段来创建新用户。让我们将其添加到我们的 `hippo` 数据库中: + +---- +spec: + users: + - name: rhino +---- + +现在您可以应用更改,并看到新用户已创建。请注意以下事项: + +- 该用户只能连接到默认的 `ivory` 数据库。 +- 用户不会将任何连接凭据填充到 `hippo-pguser-rhino` Secret 中。 +- 该用户是未特权的。 + +让我们创建一个名为 `zoo` 的新数据库,我们将允许 `rhino` 用户访问该数据库: + +---- +spec: + users: + - name: rhino + databases: + - zoo +---- + +检查 `hippo-pguser-rhino` Secret。您现在应该看到 `dbname` 和 `uri` 字段已填充! + +我们可以通过使用 Ivory 提供的标准 https://www.postgresql.org/docs/current/role-attributes.html[角色属性] 并将它们添加到 `spec.users.options` 来设置角色权限。假设我们希望 rhino 成为超级用户(在授予 Ivory 超级用户权限时要小心!)。您可以将以下内容添加到规范中: + +---- +spec: + users: + - name: rhino + databases: + - zoo + options: "SUPERUSER" +---- + +就这样:我们创建了一个名为 `rhino` 的 Ivory 用户,该用户具有超级用户权限,并且可以访问 `rhino` 数据库(尽管超级用户可以访问所有数据库!)。 + +=== 调整权限 + +假设您想从 `rhino` 中撤销超级用户权限。您可以通过以下方式执行此操作: + +---- +spec: + users: + - name: rhino + databases: + - zoo + options: "NOSUPERUSER" +---- + +如果您想添加多个权限,您可以在 `options` 中用空格分隔每个权限,例如: + +---- +spec: + users: + - name: rhino + databases: + - zoo + options: "CREATEDB CREATEROLE" +---- + +=== 管理 `ivory` 用户 + +默认情况下,IVYO 不允许您访问 `ivory` 用户。但是,您可以通过执行以下操作来访问此帐户: + +---- +spec: + users: + - name: ivory +---- + +这将创建一个模式为 `-pguser-ivory` 的 Secret,其中包含 `ivory` 帐户的凭据。对于我们的 `hippo` 集群,这将是 `hippo-pguser-ivory`。 + +=== 删除用户 + +IVYO 不会自动删除用户:将用户从规范中移除后,它将仍然存在于您的集群中。要删除用户及其所有对象,作为超级用户,您需要在用户拥有对象的每个数据库中运行 https://www.postgresql.org/docs/current/sql-drop-owned.html[`DROP OWNED`],并在您的 Ivory 集群中运行 https://www.postgresql.org/docs/current/sql-droprole.html[`DROP ROLE`]。 + +例如,对于上面的 `rhino` 用户,您将运行以下命令: + +---- +DROP OWNED BY rhino; +DROP ROLE rhino; +---- + +请注意,您可能需要根据对象所有权结构运行 `DROP OWNED BY rhino CASCADE;` —— 请非常小心此命令! 
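+
+作为参考,下面给出一个假设性的操作示例(假设集群名为 `hippo`、部署在 `ivory-operator` 命名空间,数据库容器名为 `database`,且容器内的 `psql` 默认以超级用户身份连接;这些名称仅为示意,请按您的实际环境调整),演示如何进入当前主实例 Pod 并执行上述删除命令:
+
+[source,shell]
+----
+# 通过角色标签找到当前的主实例 Pod(假设标签与本文其他示例一致)
+PRIMARY=$(kubectl -n ivory-operator get pods -o name \
+  --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/role=master)
+
+# 在用户拥有对象的每个数据库(例如 zoo)中执行 DROP OWNED
+kubectl -n ivory-operator exec -c database "${PRIMARY}" -- psql -d zoo -c 'DROP OWNED BY rhino;'
+
+# 最后删除角色本身
+kubectl -n ivory-operator exec -c database "${PRIMARY}" -- psql -c 'DROP ROLE rhino;'
+----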
+ +=== 删除数据库 + +IVYO 不会自动删除数据库:从规范中移除数据库的所有实例后,它将仍然存在于您的集群中。要完全删除数据库,您必须以 Ivory 超级用户身份运行 https://www.postgresql.org/docs/current/sql-dropdatabase.html[`DROP DATABASE`] 命令。 + +例如,要删除 `zoo` 数据库,您将执行以下命令: + +---- +DROP DATABASE zoo; +---- + +== 灾难恢复与克隆 +也许有人不小心删除了 `users` 表。也许你想把生产数据库克隆到降级环境。也许你想演练灾难恢复系统(这很重要!)。 + +无论哪种情况,了解如何使用 IVYO 执行“恢复”操作以便从特定时间点恢复数据,或出于其他目的克隆数据库都很重要。 + +我们来看看如何执行不同类型的恢复操作。首先,让我们了解自定义资源上的核心恢复属性。 + +=== 恢复属性 + +[NOTE] +==== +IVYO 提供了从现有 ivorycluster 或远程云数据源(如 S3、GCS 等)恢复的能力。有关更多信息,请参阅 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/disaster-recovery.md#cloud-based-data-source[从 S3 / GCS / Azure Blob 存储中存储的备份克隆] 部分。 + +请注意,您**不能**同时使用本地 ivorycluster 数据源和远程云数据源;如果同时填写了 `dataSource.ivorycluster` 和 `dataSource.pgbackrest` 字段,本地 ivorycluster 数据源将优先。 +==== + +自定义资源上有几个重要属性需要了解,这些都是恢复过程中的关键。所有这些属性都分组在自定义资源的 spec.dataSource.ivorycluster 部分中。 + +请查看下表,了解每个属性在设置恢复操作时的工作原理。 + +- `spec.dataSource.ivorycluster.clusterName`:您要从中恢复的集群的名称。这对应于另一个 `ivorycluster` 自定义资源的 `metadata.name` 属性。 +- `spec.dataSource.ivorycluster.clusterNamespace`:您要从中恢复的集群的命名空间。当集群存在于不同的命名空间时使用。 +- `spec.dataSource.ivorycluster.repoName`:用于恢复的 `spec.dataSource.ivorycluster.clusterName` 中的 pgBackRest 仓库的名称。可以是 `repo1`、`repo2`、`repo3` 或 `repo4` 之一。仓库必须存在于另一个集群中。 +- `spec.dataSource.ivorycluster.options`:IVYO 允许的任何额外 https://pgbackrest.org/command.html#command-restore[pgBackRest 恢复选项] 或常规选项。例如,您可能希望设置 `--process-max` 以帮助提高大型数据库的性能;但您将无法设置 `--target-action`,因为该选项目前被禁止。(如果存在 `--target`,IVYO 总是将其设置为 `promote`,否则将其留空。) +- `spec.dataSource.ivorycluster.resources`:设置恢复作业的 https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[资源限制和请求] 可以确保其高效运行。 +- `spec.dataSource.ivorycluster.affinity`:自定义 https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Kubernetes 亲和性] 规则约束恢复作业,使其仅在某些节点上运行。 +- `spec.dataSource.ivorycluster.tolerations`:自定义 https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Kubernetes 容忍度] 允许恢复作业在 https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[污点] 节点上运行。 + +让我们通过一些示例来了解如何克隆和恢复我们的数据库。 + +=== 克隆 Ivory 集群 + +让我们创建一个我们之前创建的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/create-cluster.md[`hippo`] 集群的克隆。我们知道我们的集群名为 `hippo`(基于其 `metadata.name`),并且我们只有一个名为 `repo1` 的备份仓库。 + +让我们称我们的新集群为 `elephant`。我们可以使用如下所示的清单创建 `hippo` 集群的克隆: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +请注意规范中的这一部分: + +---- +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 +---- + +这部分告诉 IVYO 将 `elephant` 集群创建为 `hippo` 集群的独立副本。 + +以上就是克隆 Ivory 集群所需的全部操作!IVYO 将在新的持久卷声明 (PVC) 上创建数据副本,并致力于将集群初始化到规范。很简单! 
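+
+下面是一个简单的验证示例(假设 `elephant` 集群同样部署在 `ivory-operator` 命名空间),用于确认克隆出的集群是否已经就绪:
+
+[source,shell]
+----
+# 查看克隆集群的整体状态,等待实例就绪
+kubectl -n ivory-operator describe ivorycluster elephant
+
+# 列出该集群的 Pod,确认数据副本已在新的 PVC 上启动
+kubectl -n ivory-operator get pods \
+  --selector=ivory-operator.ivorysql.org/cluster=elephant
+----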
+ +=== 执行时间点恢复 (PITR) + +有人删除了用户表吗?您可能希望执行时间点恢复 (PITR) 以将数据库恢复到更改发生之前的状态。幸运的是,IVYO 可以帮助您做到这一点。 + +您可以使用为 IVYO 的灾难恢复功能提供支持的备份管理工具 https://www.pgbackrest.org[pgBackRest] 的 https://pgbackrest.org/command.html#command-restore[restore] 命令来设置 PITR。您需要在 `spec.dataSource.ivorycluster.options` 上设置一些选项来执行 PITR。这些选项包括: + +- `--type=time`:这告诉 pgBackRest 执行 PITR。 +- `--target`:执行 PITR 的目标位置。恢复目标的一个示例是 `2021-06-09 14:15:11-04`。此处指定的时区为 -04,即东部夏令时。有关其他时区选项,请参阅 https://pgbackrest.org/user-guide.html#pitr[pgBackRest 文档]。 +- `--set`(可选):选择从哪个备份开始 PITR。 + +开始前的一些快速说明: + +- 要执行 PITR,您必须有一个在 PITR 时间之前完成的备份。换句话说,您不能对没有备份的时间执行 PITR! +- 所有相关的 WAL 文件必须成功推送,以便恢复正确完成。 +- 确保选择包含所需备份的正确仓库名称! + +考虑到这一点,让我们使用上面的 `elephant` 示例。假设我们要执行到 `2021-06-09 14:15:11-04` 的时间点恢复 (PITR),我们可以使用以下清单: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +需要注意的部分是: + +---- +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" +---- + +请注意我们如何放入选项以指定进行 PITR 的位置。 + +使用上述清单,IVYO 将继续创建一个恢复其数据直到 `2021-06-09 14:15:11-04` 的新 Ivory 集群。此时,集群被提升,您可以从该特定时间点开始访问您的数据库! + +=== 执行就地时间点恢复 (PITR) + +与上面描述的 PITR 恢复类似,您可能希望执行类似的回退到更改发生之前的状态,但不创建另一个 IvorySQL 集群。幸运的是,IVYO 也可以帮助您做到这一点。 + +您可以使用为 IVYO 的灾难恢复功能提供支持的备份管理工具 https://www.pgbackrest.org[pgBackRest] 的 https://pgbackrest.org/command.html#command-restore[restore] 命令来设置 PITR。您需要在 `spec.backups.pgbackrest.restore.options` 上设置一些选项来执行 PITR。这些选项包括: + +- `--type=time`:这告诉 pgBackRest 执行 PITR。 +- `--target`:执行 PITR 的目标位置。恢复目标的一个示例是 `2021-06-09 14:15:11-04`。 +- `--set`(可选):选择从哪个备份开始 PITR。 + +开始前的一些快速说明: + +- 要执行 PITR,您必须有一个在 PITR 时间之前完成的备份。换句话说,您不能对没有备份的时间执行 PITR! +- 所有相关的 WAL 文件必须成功推送,以便恢复正确完成。 +- 确保选择包含所需备份的正确仓库名称! + +要执行就地恢复,用户首先需要填写规范的恢复部分,如下所示: + +---- +spec: + backups: + pgbackrest: + restore: + enabled: true + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" +---- + +然后,要触发恢复,您需要使用以下命令注释 ivorycluster: + +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/pgbackrest-restore=id1 +---- + +恢复完成后,可以禁用就地恢复: + +---- +spec: + backups: + pgbackrest: + restore: + enabled: false +---- + +请注意我们如何放入选项以指定进行 PITR 的位置。 + +使用上述清单,IVYO 将继续重新创建您的 Ivory 集群,以恢复其数据直到 `2021-06-09 14:15:11-04`。此时,集群被提升,您可以从该特定时间点开始访问您的数据库! 
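+
+补充一个小示例(仍以 `ivory-operator` 命名空间中的 `hippo` 集群为例):如果之后需要再次执行就地恢复,可以在重新启用 `spec.backups.pgbackrest.restore` 后,用一个新的注解值(例如 `id2`)覆盖旧值来再次触发:
+
+[source,shell]
+----
+# 用新的注解值覆盖旧值,IVYO 检测到变化后会再次执行就地恢复
+kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \
+  ivory-operator.ivorysql.org/pgbackrest-restore=id2
+----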
+ +=== 恢复单个数据库 + +出于性能原因或为了将选定数据库移动到没有足够空间来恢复整个集群备份的计算机上,您可能需要从备份中恢复特定数据库。 + +[WARNING] +==== +pgBackRest 支持这种情况,但请务必确保这是您想要的。以这种方式恢复将从备份中恢复请求的数据库并使其可访问,但备份中的所有其他数据库在恢复后将**无法**访问。 + +例如,如果您的备份包含数据库 `test1`、`test2` 和 `test3`,并且您请求恢复 `test2`,则恢复完成后,`test1` 和 `test3` 数据库将**无法**访问。请查看 pgBackRest 文档中关于 https://pgbackrest.org/user-guide.html#restore/option-db-include[恢复单个数据库的限制]。 +==== + +您可以使用类似于以下规范的规范从备份中恢复单个数据库: + +[source,yaml] +---- +spec: + backups: + pgbackrest: + restore: + enabled: true + repoName: repo1 + options: + - --db-include=hippo +---- + +其中 `--db-include=hippo` 将仅恢复 `hippo` 数据库的内容。 + +=== 备用集群 + +高级高可用性和灾难恢复策略涉及将您的数据库集群分布在数据中心之间,以帮助最大化正常运行时间。IVYO 提供了使用外部存储系统或 IvorySQL 流复制部署可以跨越多个 Kubernetes 集群的 ivorycluster 的方法。https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/disaster-recovery.md[灾难恢复架构] 文档中提供了 IVYO 备用集群的高级概述。 + +==== 创建备用集群 + +本教程部分将描述如何创建三种不同类型的备用集群,一种使用外部存储系统,一种直接从主集群流式传输数据,一种利用外部存储和流式传输。这些示例集群可以在同一个 Kubernetes 集群中使用单个 IVYO 实例创建,也可以通过正确的存储和网络配置分布在不同的 Kubernetes 集群和 IVYO 实例中。 + +===== 基于仓库的备用集群 + +基于仓库的备用集群将从存储在外部存储中的 pgBackRest 仓库中恢复。主集群应使用基于云的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md[备份配置] 创建。以下清单定义了一个 ivorycluster,其中 `standby.enabled` 设置为 true,并且 `repoName` 配置为指向主集群中配置的 `s3` 仓库: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + standby: + enabled: true + repoName: repo1 +---- + +===== 流式备用集群 + +流式备用集群依赖于通过网络到主集群的经过身份验证的连接。主集群应可通过网络访问并允许 TLS 身份验证(默认启用 TLS)。在以下清单中,我们将 `standby.enabled` 设置为 `true`,并提供了指向主集群的 `host` 和 `port`。我们还定义了 `customTLSSecret` 和 `customReplicationTLSSecret` 以提供允许备用集群向主集群进行身份验证的证书。对于这种类型的备用集群,您必须使用 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[自定义 TLS]: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + name: replication-cert + standby: + enabled: true + host: "192.0.2.2" + port: 5432 +---- + +===== 具有外部仓库的流式备用集群 + +另一个选项是使用从主集群流式传输的外部 pgBackRest 仓库创建备用集群。通过此设置,如果流式复制落后,备用集群将继续从 pgBackRest 仓库恢复。在此清单中,我们启用了前两个示例中的设置: + +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + 
name: replication-cert + standby: + enabled: true + repoName: repo1 + host: "192.0.2.2" + port: 5432 +---- + +=== 提升备用集群 + +在某些时候,您会希望提升备用集群以开始接受读取和写入。这具有将 WAL(事务归档)推送到 pgBackRest 仓库的净效应,因此我们需要确保我们不会意外创建脑裂场景。如果两个主实例尝试写入同一个仓库,则可能会发生脑裂。如果主集群仍处于活动状态,请确保在尝试提升备用集群之前 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/administrative-tasks.md#shutdown[关闭] 主集群。 + +一旦主集群处于非活动状态,我们可以通过移除或禁用其 `spec.standby` 部分来提升备用集群: + +---- +spec: + standby: + enabled: false +---- + +此更改触发将备用领导者提升为 IvorySQL 主实例,并且集群开始接受写入。 + +=== 从 S3 / GCS / Azure Blob 存储中存储的备份克隆 {#cloud-based-data-source} + +您可以从存储在 AWS S3(或使用 S3 协议的存储系统)、GCS 或 Azure Blob 存储中的备份克隆 Ivory 集群,而无需活动的 Ivory 集群!方法与从现有 ivorycluster 克隆类似。如果您希望为人们提供数据集但将其压缩在更便宜的存储上,这很有用。 + +出于本示例的目的,假设您创建了一个名为 `hippo` 的 Ivory 集群,其备份存储在 S3 中,看起来类似于以下内容: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + manual: + repoName: repo1 + options: + - --type=full + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" +---- + +确保 `ivyo-s3-creds` 中的凭据与您的 S3 凭据匹配。有关 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md#using-s3[使用 S3 部署 Ivory 集群进行备份] 的更多详细信息,请参阅教程的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md#using-s3[备份] 部分。 + +为了从活跃集群创建新集群时获得最佳性能,请确保对前一个集群进行了最近的完整备份。上面的清单设置为进行完整备份。假设 `hippo` 是在 `ivory-operator` 命名空间中创建的,您可以使用以下命令触发完整备份: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/pgbackrest-backup="$( date '+%F_%H:%M:%S' )" +---- + +等待备份完成。完成后,您可以删除 Ivory 集群。 + +现在,让我们将 `hippo` 备份中的数据克隆到一个名为 `elephant` 的新集群中。您可以使用类似于以下的清单: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + dataSource: + pgbackrest: + stanza: db + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + repo: + name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/elephant/repo1 + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" +---- + +在此清单中需要注意以下几点。首先,请注意我们新的 ivorycluster 中的 `spec.dataSource.pgbackrest` 对象与旧的 ivorycluster 中的 `spec.backups.pgbackrest` 对象非常相似,但略有不同。主要区别是: + +1. 从基于云的数据源恢复时不需要镜像 +2. 从基于云的数据源恢复时,`stanza` 是必填字段 +3. `backups.pgbackrest` 有一个 `repos` 字段,这是一个数组 +4. `dataSource.pgbackrest` 有一个 `repo` 字段,这是一个单一对象 + +还要注意相似之处: + +1. 我们正在为两者重用密钥(因为新的恢复 pod 需要具有与原始备份 pod 相同的凭据) +2. `repo` 对象是相同的 +3. 
`global` 对象是相同的 + +这是因为 `elephant` ivorycluster 的新恢复 pod 将需要重用最初设置 `hippo` ivorycluster 时使用的配置和凭据。 + +在此示例中,我们正在创建一个新的集群,该集群也备份到同一个 S3 存储桶;只有 `spec.backups.pgbackrest.global` 字段已更改为指向不同的路径。这将确保新的 `elephant` 集群将预填充来自 `hippo` 备份的数据,但将备份到自己的文件夹,确保原始备份仓库得到适当保留。 + +部署此清单以创建 `elephant` Ivory 集群。观察它启动并运行: + +[source,shell] +---- +kubectl -n ivory-operator describe ivorycluster elephant +---- + +当它准备就绪时,您将看到预期实例的数量与就绪实例的数量相匹配,例如: + +---- +Instances: + Name: 00 + Ready Replicas: 1 + Replicas: 1 + Updated Replicas: 1 +---- + +前面的示例展示了如何使用现有的 S3 仓库预填充 ivorycluster,同时使用新的 S3 仓库进行备份。但是使用基于云的数据源的 ivorycluster 也可以使用本地仓库。 + +例如,假设一个名为 `rhino` 的 ivorycluster 旨在从原始的 `hippo` ivorycluster 预填充,清单将如下所示: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: rhino +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + dataSource: + pgbackrest: + stanza: db + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + repo: + name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +== 监控 +虽然拥有 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[高可用性] 和 https://github.com/Ivory-operator/blob/master/docs/content/tutorial/disaster-recovery.md[灾难恢复] 系统可以在您的 IvorySQL 集群出现问题时提供帮助,但监控可以帮助您预防问题的发生。此外,监控可以帮助您诊断和解决可能导致性能下降的问题,而不是停机。 + +让我们看看 IVYO 如何允许您在集群中启用监控。 + +=== 添加 Exporter Sidecar + +让我们看看如何使用 https://github.com/CrunchyData/postgres-operator-examples[Postgres Operator 示例] 仓库中的 `kustomize/ivory` 示例将 IvorySQL Exporter sidecar 添加到您的集群中。 + +监控工具是使用自定义资源的 `spec.monitoring` 部分添加的。目前,唯一支持的监控工具是使用 https://github.com/CrunchyData/pgmonitor[pgMonitor] 配置的 IvorySQL Exporter。 + +在 `kustomize/ivory/ivory.yaml` 文件中,将以下 YAML 添加到规范中: + +[source,yaml] +---- +monitoring: + pgmonitor: + exporter: + image: {{< param imagePostgresExporter >}} +---- + +保存您的更改并运行: + +[source,shell] +---- +kubectl apply -k kustomize/ivory +---- + +IVYO 将检测到更改并将 Exporter sidecar 添加到集群中存在的所有 Ivory Pod 中。IVYO 还将完成允许 Exporter 连接到数据库并使用 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/monitoring.md#ivyo-monitoring[IVYO 监控] 堆栈收集指标的工作。 + +==== 为 Exporter 配置 TLS 加密 + +IVYO 允许您配置 exporter sidecar 以使用 TLS 加密。如果您通过 exporter 规范提供自定义 TLS Secret: + +[source,yaml] +---- + monitoring: + pgmonitor: + exporter: + customTLSSecret: + name: hippo.tls +---- + +与 IVYO 可以配置的其他自定义 TLS Secret 一样,Secret 需要在与您的 PostgresCluster 相同的命名空间中创建。它还应该包含启用加密所需的 TLS 密钥 (`tls.key`) 和 TLS 证书 (`tls.crt`)。 + +[source,yaml] +---- +data: + tls.crt: + tls.key: +---- + +为 exporter 配置 TLS 后,您将需要更新您的 Prometheus 部署以使用 TLS,并且与 exporter 的连接将被加密。查看 https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config[Prometheus] 文档,了解有关为 https://prometheus.io/[Prometheus] 配置 TLS 的更多信息。 + +=== 访问指标 + +在您的集群中启用 IvorySQL Exporter 后,请按照 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/monitoring.md#ivyo-monitoring[IVYO 监控] 中概述的步骤安装监控堆栈。这将允许您在 Kubernetes 中部署 https://github.com/CrunchyData/pgmonitor[pgMonitor] 配置的 https://prometheus.io/[Prometheus]、https://grafana.com/[Grafana] 和 
https://prometheus.io/docs/alerting/latest/alertmanager/[Alertmanager] 监控工具。这些工具将默认设置为连接到您的 Ivory Pod 上的 Exporter 容器。 + +=== 配置监控 +虽然默认的 Kustomize 安装应该在大多数 Kubernetes 环境中工作,但可能需要根据您的特定需求进一步自定义项目。 + +例如,默认情况下,`fsGroup` 设置为 `26`,用于为组成 IVYO 监控堆栈的各种部署定义的 `securityContext`: + +[source,yaml] +---- +securityContext: + fsGroup: 26 +---- + +在大多数 Kubernetes 环境中,此设置是必需的,以确保容器内的进程具有写入组成 IVYO 监控堆栈的每个 Pod 挂载的任何卷所需的权限。但是,在 OpenShift 环境中安装时(更具体地说,当使用 `restricted` 安全上下文约束时),应删除 `fsGroup` 设置,因为 OpenShift 将自动处理在 Pod 的 `securityContext` 中设置适当的 `fsGroup`。 + +此外,在同一部分中,可能还需要根据您的特定存储配置修改 `supplmentalGroups` 设置: + +[source,yaml] +---- +securityContext: + supplementalGroups : 65534 +---- + +因此,应修改和/或修补(例如,使用额外的覆盖)`kustomize/monitoring` 下的以下文件,以确保 `securityContext` 为您的 Kubernetes 环境正确定义: + +- `deploy-alertmanager.yaml` +- `deploy-grafana.yaml` +- `deploy-prometheus.yaml` + +为了修改 IVYO 监控安装程序创建的各种存储资源(即 PersistentVolumeClaims)的配置,还可以修改 `kustomize/monitoring/pvcs.yaml` 文件。 + +此外,还可以通过修改以下配置资源来进一步自定义组成 IVYO 监控堆栈的各种组件(Grafana、Prometheus 和/或 AlertManager)的配置: + +- `alertmanager-config.yaml` +- `alertmanager-rules-config.yaml` +- `grafana-datasources.yaml` +- `prometheus-config.yaml` + +最后,请注意,可以通过修改 `kustomize/monitoring/grafana-secret.yaml` 文件中的 Grafana Secret 来更新 Grafana 的默认用户名和密码。 + +=== 安装 + +一旦 Kustomize 项目根据您的特定需求进行了修改,就可以使用 `kubectl` 和 Kustomize 安装 IVYO 监控: + +[source,shell] +---- +kubectl apply -k kustomize/monitoring +---- + +=== 卸载 + +同样,一旦安装了 IVYO 监控,就可以使用 `kubectl` 和 Kustomize 卸载它: + +[source,shell] +---- +kubectl delete -k kustomize/monitoring +---- + +== 连接池 +连接池有助于扩展和维护应用程序与数据库之间的整体可用性。IVYO 通过支持 https://www.pgbouncer.org/[PgBouncer] 连接池和状态管理器来促进这一点。 + +让我们看看我们如何添加连接池并将其连接到我们的应用程序! + +=== 添加连接池 +让我们看看如何使用 https://github.com/IvorySQL/ivory-operator[Ivory Operator] 仓库示例文件夹中的 `kustomize/keycloak` 示例添加连接池。 + +连接池是使用自定义资源的 `spec.proxy` 部分添加的。目前,唯一支持的连接池是 https://www.pgbouncer.org/[PgBouncer]。 + +添加 PgBouncer 连接池的唯一必需属性是设置 `spec.proxy.pgBouncer.image` 属性。在 `kustomize/keycloak/ivory.yaml` 文件中,将以下 YAML 添加到规范中: + +[source,yaml] +---- +proxy: + pgBouncer: + image: {{< param imageIvoryPGBouncer >}} +---- + +(您也可以在 `kustomize/examples/high-availability` 示例中找到此示例)。 + +保存您的更改并运行: + +[source,shell] +---- +kubectl apply -k kustomize/keycloak +---- + +IVYO 将检测到更改并创建一个新的 PgBouncer Deployment! 
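+
+下面是一个简单的核对示例(假设集群部署在 `ivory-operator` 命名空间,且 PgBouncer 的工作负载带有 `ivory-operator.ivorysql.org/role: pgbouncer` 标签;标签名称以您的实际环境为准),用于确认 PgBouncer 的 Deployment 和 Pod 已经启动:
+
+[source,shell]
+----
+# 查看 PgBouncer 的 Deployment 与 Pod
+kubectl -n ivory-operator get deploy,pods \
+  --selector=ivory-operator.ivorysql.org/role=pgbouncer
+----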
+ +设置起来相当容易,所以现在让我们看看如何将我们的应用程序连接到连接池。 + +=== 连接到连接池 +当连接池部署到集群时,IVYO 会将附加信息添加到用户 Secret 中,以允许应用程序直接连接到连接池。回想一下,在此示例中,我们的用户 Secret 称为 `keycloakdb-pguser-keycloakdb`。描述用户 Secret: + +[source,shell] +---- +kubectl -n ivory-operator describe secrets keycloakdb-pguser-keycloakdb +---- + +您应该看到此 Secret 中包含几个新属性,允许您通过连接池连接到您的 Ivory 实例: + +- `pgbouncer-host`:PgBouncer 连接池的主机名。这引用了 PgBouncer 连接池的 https://kubernetes.io/docs/concepts/services-networking/service/[Service]。 +- `pgbouncer-port`:PgBouncer 连接池正在侦听的端口。 +- `pgbouncer-uri`:一个 https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING[PostgreSQL 连接 URI],提供通过 PgBouncer 连接池登录到 Ivory 数据库的所有信息。 +- `pgbouncer-jdbc-uri`:一个 https://jdbc.postgresql.org/documentation/use/[PostgreSQL JDBC 连接 URI],提供通过使用 JDBC 驱动程序的 PgBouncer 连接池登录到 Ivory 数据库的所有信息。请注意,默认情况下,连接字符串禁用 JDBC 管理预处理事务以实现 https://www.pgbouncer.org/faq.html#how-to-use-prepared-statements-with-transaction-pooling[与 PgBouncer 一起使用的最佳方式]。 + +打开 `kustomize/keycloak/keycloak.yaml` 中的文件。更新 `DB_ADDR` 和 `DB_PORT` 值如下: + +[source,yaml] +---- +- name: DB_ADDR + valueFrom: { secretKeyRef: { name: keycloakdb-pguser-keycloakdb, key: pgbouncer-host } } +- name: DB_PORT + valueFrom: { secretKeyRef: { name: keycloakdb-pguser-keycloakdb, key: pgbouncer-port } } +---- + +这会更改 Keycloak 的配置,使其现在通过连接池连接。 + +应用更改: + +[source,shell] +---- +kubectl apply -k kustomize/keycloak +---- + +Kubernetes 将检测到更改并开始部署新的 Keycloak Pod。完成后,Keycloak 现在将通过 PgBouncer 连接池连接到 Ivory! + +=== TLS +IVYO 通过 TLS 部署每个集群和组件。这包括 PgBouncer 连接池。如果您使用自己的 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[自定义 TLS 设置],则需要在 `spec.proxy.pgBouncer.customTLSSecret` 中为 PgBouncer 提供 Secret 引用。 + +PgBouncer 的 TLS 证书应具有与 PgBouncer Service 名称匹配的通用名称 (CN)。这是集群的名称,后缀为 `-pgbouncer`。例如,对于我们的 `hippo` 集群,这将是 `hippo-pgbouncer`。对于 `keycloakdb` 示例,它将是 `keycloakdb-pgBouncer`。 + +要自定义 PgBouncer 的 TLS,您需要在您的 Ivory 集群的命名空间中创建一个 Secret,其中包含要使用的 TLS 密钥 (`tls.key`)、TLS 证书 (`tls.crt`) 和 CA 证书 (`ca.crt`)。Secret 应包含以下值: + +[source,yaml] +---- +data: + ca.crt: + tls.crt: + tls.key: +---- + +例如,如果您本地计算机上存储有名为 `ca.crt`、`keycloakdb-pgBouncer.key` 和 `keycloakdb-pgBouncer.crt` 的文件,则可以运行以下命令: + +[source,shell] +---- +kubectl create secret generic -n ivory-operator keycloakdb-pgBouncer.tls \ + --from-file=ca.crt=ca.crt \ + --from-file=tls.key=keycloakdb-pgBouncer.key \ + --from-file=tls.crt=keycloakdb-pgBouncer.crt +---- + +您可以在您的 `ivorycluster.ivory-operator.ivorysql.org` 自定义资源中的 `spec.proxy.pgBouncer.customTLSSecret.name` 字段中指定自定义 TLS Secret,例如: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + customTLSSecret: + name: keycloakdb-pgBouncer.tls +---- + +=== 自定义 +PgBouncer 连接池是高度可定制的,从配置和 Kubernetes 部署角度来看都是如此。让我们探索一些您可以进行的自定义! 
+ +==== 配置 +可以通过 `spec.proxy.pgBouncer.config` 自定义 https://www.pgbouncer.org/config.html[PgBouncer 配置]。进行配置更改后,IVYO 会将它们推出到任何 PgBouncer 实例,并自动发出“重新加载”。 + +您可以通过以下几种方式自定义配置: + +- `spec.proxy.pgBouncer.config.global`:接受键值对,这些更改全局应用于 PgBouncer。 +- `spec.proxy.pgBouncer.config.databases`:接受键值对,这些键值对代表 PgBouncer https://www.pgbouncer.org/config.html#section-databases[数据库定义]。 +- `spec.proxy.pgBouncer.config.users`:接受键值对,这些键值对代表 https://www.pgbouncer.org/config.html#section-users[应用于特定用户的连接设置]。 +- `spec.proxy.pgBouncer.config.files`:接受文件列表,这些文件挂载在 `/etc/pgbouncer` 目录中,并在使用 PgBouncer 的 https://www.pgbouncer.org/config.html#include-directive[include 指令] 考虑任何其他选项之前加载。 + +例如,要将连接池模式设置为 `transaction`,您需要设置以下配置: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + config: + global: + pool_mode: transaction +---- + +有关 https://www.pgbouncer.org/config.html[PgBouncer 配置] 的参考,请参阅: + +https://www.pgbouncer.org/config.html + +==== 副本 +默认情况下,IVYO 部署一个 PgBouncer 实例。您可能希望运行多个 PgBouncer 实例以具有一定的冗余级别,尽管您仍然希望注意有多少连接将连接到您的 Ivory 数据库! + +您可以通过 `spec.proxy.pgBouncer.replicas` 属性管理部署的 PgBouncer 实例的数量。 + +==== 资源 +您可以通过 `spec.proxy.pgBouncer.resources` 属性管理分配给 PgBouncer 实例的 CPU 和内存资源。`spec.proxy.pgBouncer.resources` 的布局应该很熟悉:它遵循设置 https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[容器资源] 的标准 Kubernetes 结构。 + +例如,假设我们想为 PgBouncer 实例设置一些 CPU 和内存限制。我们可以添加以下配置: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + resources: + limits: + cpu: 200m + memory: 128Mi +---- + +由于 IVYO 使用 https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[Deployment] 部署 PgBouncer 实例,因此这些更改会使用滚动更新推出,以最大程度地减少应用程序与 Ivory 实例之间的中断! + +==== 注释 / 标签 +您可以通过 `spec.proxy.pgBouncer.metadata.annotations` 和 `spec.proxy.pgBouncer.metadata.labels` 属性分别为您的 PgBouncer 实例应用自定义注释和标签。请注意,对这两个属性中的任何一个的任何更改都将优先于您添加的任何其他自定义标签。 + +==== Pod 反亲和性 / Pod 亲和性 / 节点亲和性 +您可以通过 `spec.proxy.pgBouncer.affinity` 属性控制 https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity[pod 反亲和性、pod 亲和性和节点亲和性],特别是: + +- `spec.proxy.pgBouncer.affinity.nodeAffinity`:控制 PgBouncer 实例的节点亲和性。 +- `spec.proxy.pgBouncer.affinity.podAffinity`:控制 PgBouncer 实例的 Pod 亲和性。 +- `spec.proxy.pgBouncer.affinity.podAntiAffinity`:控制 PgBouncer 实例的 Pod 反亲和性。 + +以上每个都遵循 https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity[设置亲和性的标准 Kubernetes 规范]。 + +例如,要为 `kustomize/keycloak` 示例设置首选 Pod 反亲和性规则,您需要向配置中添加以下内容: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: keycloakdb + ivory-operator.ivorysql.org/role: pgbouncer + topologyKey: kubernetes.io/hostname +---- + +==== 容忍度 +您可以通过设置 https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[容忍度] 通过 `spec.proxy.pgBouncer.tolerations` 将 PgBouncer 实例部署到 https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[具有污点的节点]。此属性遵循 Kubernetes 标准容忍度布局。 + +例如,如果有一组具有 `role=connection-poolers:NoSchedule` 污点的节点,您希望将 PgBouncer 实例调度到这些节点,您可以应用以下配置: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + tolerations: + - effect: NoSchedule + key: role + operator: Equal + value: connection-poolers +---- + +请注意,设置容忍度并不一定意味着 PgBouncer 实例将被分配给具有这些污点的节点。容忍度充当**密钥**:它们允许您访问节点。如果您希望确保您的 PgBouncer 实例部署到特定节点,您需要将设置容忍度与节点亲和性相结合。 + +==== Pod 分布约束 +除了使用亲和性、反亲和性和容忍度之外,您还可以通过 `spec.proxy.pgBouncer.topologySpreadConstraints` 设置 
https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/[拓扑分布约束]。此属性遵循 Kubernetes 标准拓扑分布约束布局。 + +例如,由于我们的每个 pgBouncer Pod 都将设置标准的 `ivory-operator.ivorysql.org/role: pgbouncer` 标签,我们可以在确定 `maxSkew` 时使用此标签。在下面的示例中,由于我们有 3 个节点,`maxSkew` 为 1,并且我们将 `whenUnsatisfiable` 设置为 `ScheduleAnyway`,我们理想情况下应该在每个节点上看到 1 个 Pod,但如果其他约束阻止这种情况发生,我们的 Pod 可以分布得不那么均匀。 + +[source,yaml] +---- + proxy: + pgBouncer: + replicas: 3 + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/role: pgbouncer +---- + +如果您希望确保您的 PgBouncer 实例分布得更均匀(或根本不部署),您需要将 `whenUnsatisfiable` 更新为 `DoNotSchedule`。 + +== 管理任务 + +=== 手动重启 IvorySQL + +有时您可能需要手动重启 IvorySQL。这可以通过向集群的 `spec.metadata.annotations` 部分添加或更新自定义注释来完成。IVYO 将检测到更改并执行 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/high-availability.md#rolling-update[滚动重启]。 + +例如,如果您在命名空间 `ivory-operator` 中有一个名为 `hippo` 的集群,您只需要使用以下命令修补 hippo ivorycluster: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"metadata":{"annotations":{"restarted":"'"$(date)"'"}}}}' +---- + +观察您的 hippo 集群:您将看到滚动更新已触发,重启已开始。 + +=== 关闭 + +您可以通过将 `spec.shutdown` 属性设置为 `true` 来关闭 Ivory 集群。您可以通过编辑清单来执行此操作,或者在 `hippo` 集群的情况下,执行如下命令: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"shutdown": true}}' +---- + +这样做的结果是,此集群的所有 Kubernetes 工作负载都缩放为 0。您可以使用以下命令验证这一点: + +[source,shell] +---- +kubectl get deploy,sts,cronjob --selector=ivory-operator.ivorysql.org/cluster=hippo -n ivory-operator + +NAME READY AGE +statefulset.apps/hippo-00-lwgx 0/0 1h + +NAME SCHEDULE SUSPEND ACTIVE +cronjob.batch/hippo-repo1-full @daily True 0 +---- + +要将已关闭的 Ivory 集群重新打开,您可以将 `spec.shutdown` 设置为 `false`。 + +=== 暂停协调和部署 + +您可以通过将 `spec.paused` 属性设置为 `true` 来暂停 Ivory 集群协调过程。您可以通过编辑清单来执行此操作,或者在 `hippo` 集群的情况下,执行如下命令: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"paused": true}}' +---- + +暂停集群将暂停对集群当前状态的任何更改,直到协调恢复。这允许您完全控制何时将 ivorycluster spec 的更改部署到 Ivory 集群。在暂停期间,除了“Progressing”条件外,不会更新任何状态。 + +要恢复 Ivory 集群的协调,您可以将 `spec.paused` 设置为 `false` 或从清单中删除该设置。 + +=== 轮换 TLS 证书 + +应尽可能频繁地使凭据失效并替换(轮换)它们,以最大限度地降低其被滥用的风险。与密码不同,每个 TLS 证书都有一个过期日期,因此替换它们是不可避免的。 + +实际上,IVYO 会在证书过期日期 *之前* 自动轮换其管理的客户端证书。将在其工作持续时间的 2/3 之后生成新的客户端证书;因此,例如,IVYO 创建的证书在 12 个月后过期,将在大约第 8 个月时被 IVYO 替换。这样做是为了让您不必担心遇到过期证书的问题或服务中断。 + +==== 触发证书轮换 + +如果您想轮换单个客户端证书,您可以通过从其证书 Secret 中删除 `tls.key` 字段来重新生成现有集群的证书。 + +是时候轮换您的 IVYO 根证书了吗?您只需要删除 `ivyo-root-cacert` secret。IVYO 将无缝地重新生成并推出它,确保您的应用程序继续与 Ivory 集群通信,而无需更新任何配置或处理任何停机时间。 + +[source,bash] +---- +kubectl delete secret ivyo-root-cacert +---- + +[NOTE] +==== +IVYO 仅更新包含生成的根证书的 secret。它不会触及自定义证书。 +==== + +==== 轮换自定义 TLS 证书 + +当您使用自己的 TLS 证书与 IVYO 时,您有责任适当地替换它们。方法如下。 + +IVYO 会自动检测并加载对 IvorySQL 服务器和复制 Secret 内容的更改,而不会停机。您或您的证书管理器只需要替换 `spec.customTLSSecret` 引用的 Secret 中的值。 + +如果您将 `spec.customTLSSecret` 更改为引用新的 Secret 或新的字段,IVYO 将执行 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/high-availability.md#rolling-update[滚动重启]。 + +[IMPORTANT] +==== +更改 IvorySQL 证书颁发机构时,请确保同时更新 https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[`customReplicationTLSSecret`]。 +==== + +=== 更改主节点 + +有时您可能希望更改 HA 集群中的主节点。这可以通过使用 ivorycluster spec 的 `patroni.switchover` 部分来完成。它允许您在 ivoryclusters 中启用切换,将特定实例作为新的主节点,并在您的 
ivorycluster 进入不良状态时运行故障转移。 + +让我们完成执行切换的过程! + +首先,您需要更新您的 spec 以准备您的集群以更改主节点。编辑您的 spec 以具有以下字段: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true +---- + +应用此更改后,IVYO 将寻找触发器以在您的集群中执行切换。您将通过将 `ivory-operator.ivorysql.org/trigger-switchover` 注释添加到您的自定义资源来触发切换。设置此注释的最佳方法是使用时间戳,这样您就知道何时启动了更改。 + +例如,对于我们的 `hippo` 集群,我们可以运行以下命令来触发切换: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo \ + ivory-operator.ivorysql.org/trigger-switchover="$(date)" +---- + +[TIP] +==== +如果您想执行另一次切换,您可以重新运行注释命令并添加 `--overwrite` 标志: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/trigger-switchover="$(date)" +---- +==== + +IVYO 将检测到此注释并使用 Patroni API 请求更改当前主节点! + +随着 Patroni 的工作,您的数据库实例 Pod 上的角色将开始更改。新的主节点将具有 `master` 角色标签,旧的主节点将更新为 `replica`。 + +切换的状态将使用 `status.patroni.switchover` 字段进行跟踪。这将设置为您在触发器注释中定义的值。如果您使用时间戳作为注释,这是确定何时请求切换的另一种方法。 + +在实例 Pod 标签已更新且 `status.patroni.switchover` 已设置后,主节点已在您的集群上更改! + +[IMPORTANT] +==== +更改主节点后,我们建议您通过将 `spec.patroni.switchover.enabled` 设置为 false 或完全从您的 spec 中删除该字段来禁用切换。如果该字段被删除,相应的状态也将从 ivorycluster 中删除。 +==== + +==== 定位实例 + +切换主节点时,您可以选择的另一个选项是提供目标实例作为新的主节点。在执行切换时,此目标实例将用作候选节点。`spec.patroni.switchover.targetInstance` 字段接受您要切换到的实例的名称。 + +此名称可以在几个不同的地方找到;一个是 StatefulSet 的名称,另一个是数据库 Pod 上的 `ivory-operator.ivorysql.org/instance` 标签。以下命令可以帮助您确定谁是当前主节点以及用作 `targetInstance` 的名称: + +[source,shell-session] +---- +$ kubectl get pods -l ivory-operator.ivorysql.org/cluster=hippo \ + -L ivory-operator.ivorysql.org/instance \ + -L ivory-operator.ivorysql.org/role -n ivory-operator + +NAME READY STATUS RESTARTS AGE INSTANCE ROLE +hippo-instance1-jdb5-0 3/3 Running 0 2m47s hippo-instance1-jdb5 master +hippo-instance1-wm5p-0 3/3 Running 0 2m47s hippo-instance1-wm5p replica +---- + +在我们的示例集群中,`hippo-instance1-jdb5` 当前是主节点,这意味着我们希望在切换中定位 `hippo-instance1-wm5p`。现在您知道哪个实例当前是主节点以及如何找到您的 `targetInstance`,让我们更新您的集群 spec: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true + targetInstance: hippo-instance1-wm5p +---- + +应用此更改后,您将再次需要通过注释 ivorycluster 来触发切换(请参见上面的命令)。您可以通过检查 Pod 角色标签和 `status.patroni.switchover` 来验证切换是否已完成。 + +==== 故障转移 + +最后,当您的集群进入不健康状态时,我们可以选择进行故障转移。完成此操作所需的唯一 spec 更改是将 `spec.patroni.switchover.type` 字段更新为 `Failover` 类型。需要注意的是,执行故障转移时需要 `targetInstance`。基于上面的示例集群,假设 `hippo-instance1-wm5p` 仍然是副本,我们可以更新 spec: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true + targetInstance: hippo-instance1-wm5p + type: Failover +---- + +应用此 spec 更改后,您的 ivorycluster 将准备好执行故障转移。同样,您需要通过注释 ivorycluster 来触发切换(请参见上面的命令),并验证 Pod 角色标签和 `status.patroni.switchover` 是否已相应更新。 + +[WARNING] +==== +切换过程中遇到的错误可能会使您的集群处于不良状态。如果您遇到问题,请在 operator 日志中找到问题,您可以更新 spec 以修复问题并应用更改。应用更改后,IVYO 将尝试再次执行切换。 +==== + +== 删除Ivory集群 + +总有一个时刻,您需要删除自己的集群。如果您一直在跟着示例操作,只需运行下面一条命令即可删除 Ivory 集群: + +[source,shell] +---- +kubectl delete -k examples/kustomize/ivory +---- + +IVYO 会清理与该集群相关的所有对象。 + +关于数据保留:这取决于您 PVC 的 https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming[回收策略]。如需了解 Kubernetes 如何管理数据保留,请参阅 https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming[Kubernetes 官方文档关于卷回收的说明]。 diff --git a/CN/modules/ROOT/pages/master/4.6.3.adoc b/CN/modules/ROOT/pages/master/4.6.3.adoc new file mode 100644 index 0000000..d1b5706 --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.6.3.adoc @@ -0,0 +1,194 @@ + +:sectnums: +:sectnumlevels: 5 + += Docker Swarm & Docker Compose 部署IvorySQL高可用集群 + +准备三个网络互通的服务器,并搭建swarm集群。 +测试集群名称及对应ip地址如下: + +manager-node1: 
192.168.21.205 + +manager-node2: 192.168.21.164 + +manager-node3: 192.168.21.51 + +``` +[root@manager-node1 docker-swarm]# docker node ls +ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION +y9d9wd9t2ncy4t9bvw6bg9sjs * manager-node1 Ready Active Reachable 26.1.4 +iv17o6m9t9e06vd9iu1o6damd manager-node2 Ready Active Leader 25.0.4 +vjnax76qj812mlvut6cv4qotl manager-node3 Ready Active Reachable 24.0.6 +``` + +== docker swarm搭建IvorySQL HA Cluster +下载源码 +``` +[root@manager-node1 ~]# git clone https://github.com/IvorySQL/docker_library.git +[root@manager-node1 ~]# cd docker_library/docker-cluster/docker-swarm +``` + +部署一个三节点的etcd +``` +[root@manager-node1 docker-swarm]# docker stack deploy -c docker-swarm-etcd.yml ivoryhac-etcd +Creating network ivoryhac-etcd_etcd-net +Creating service ivoryhac-etcd_etcd3 +Creating service ivoryhac-etcd_etcd1 +Creating service ivoryhac-etcd_etcd2 +[root@manager-node1 docker-swarm]# docker service ls +ID NAME MODE REPLICAS IMAGE PORTS +1jst0mva8o5n ivoryhac-etcd_etcd1 replicated 1/1 quay.io/coreos/etcd:v3.5.8 *:2379-2380->2379-2380/tcp +sosag5017cis ivoryhac-etcd_etcd2 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +8twpgkzo2mnx ivoryhac-etcd_etcd3 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +``` +可自定义数据库外挂目录,修改docker-swarm-ivypatroni.yml中的volumes,修改完成后修改目录权限及属主;示例如下 +``` +mkdir -p /home/ivorysql/{data,patroni} +chown -R 1000:1000 /home/ivorysql/{data,patroni} +chmod 700 /home/ivorysql/{data,patroni} +``` + +部署IvorySQL高可用集群 +``` +[root@manager-node1 docker-swarm]# docker stack deploy -c docker-swarm-ivypatroni.yml ivoryhac-patroni +Since --detach=false was not specified, tasks will be created in the background. +In a future release, --detach=false will become the default. +Creating service ivoryhac-patroni_ivypatroni1 +Creating service ivoryhac-patroni_ivypatroni2 +[root@manager-node1 docker-swarm]# docker service ls +ID NAME MODE REPLICAS IMAGE PORTS +1jst0mva8o5n ivoryhac-etcd_etcd1 replicated 1/1 quay.io/coreos/etcd:v3.5.8 *:2379-2380->2379-2380/tcp +sosag5017cis ivoryhac-etcd_etcd2 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +8twpgkzo2mnx ivoryhac-etcd_etcd3 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +uzdvjq5j2gwt ivorysql-hac_ivypatroni1 replicated 1/1 ivorysql/docker-swarm-ha-cluster:5.0-4.0.6-ubi8 *:1521->1521/tcp, *:5866->5866/tcp +fr0m9chu3ce8 ivorysql-hac_ivypatroni2 replicated 1/1 ivorysql/docker-swarm-ha-cluster:5.0-4.0.6-ubi8 *:1522->1521/tcp, *:5867->5866/tcp +``` + +psql连接数据库的Oracle端口及PG端口 +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p1521 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# exit +``` +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p5432 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 
row) +``` + +卸载IvorySQL集群 +``` +[root@manager-node1 ~] docker stack rm ivoryhac-patroni +[root@manager-node1 ~] docker stack rm ivoryhac-etcd +``` + +== docker compose搭建IvorySQL HA Cluster + +下载源码 +``` +[root@manager-node1 ~]# git clone https://github.com/IvorySQL/docker_library.git +[root@manager-node1 ~]# cd docker_library/docker-cluster/docker-compose +``` +拷贝文件至其他服务器 + +将etcd和ivypatroni的docker-compose文件,分别拷贝到其他服务器上。 + +如测试服务器: + +192.168.21.205 存放etcd1+ivorypatroni1, + +192.168.21.164 存放etcd2+ivorypatroni2, + +192.168.21.51 存放etcd3+ivorypatroni3 + +部署一个三节点的etcd,以node1为例 +``` +[root@manager-node1 docker-compose]# docker-compose -f ./docker-compose-etcd1.yml up -d +[+] Running 1/1 + ✔ Container etcd Started 0.1s + +``` + +部署IvorySQL高可用集群 + +在每个节点上,部署ivyhac服务 +以node1为例 +``` +[root@manager-node1 docker-compose]# docker-compose -f ./docker-compose-ivypatroni_1.yml up -d +[+] Running 1/1 + ✔ Container ivyhac1 Started 0.1s +[root@manager-node1 docker-compose]# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +736c0d188bdd ivorysql/docker-compose-ha-cluster:5.0-4.0.6-ubi8 "/bin/sh /docker-ent…" 18 seconds ago Up 17 seconds ivyhac1 +9d8e04e4f819 quay.io/coreos/etcd:v3.5.8 "/usr/local/bin/etcd" 24 minutes ago Up 24 minutes etcd + +``` + +此时,一主两备集群搭建完成 +psql连接数据库的Oracle端口及PG端口 +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p1521 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# exit +``` +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p5432 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +``` + +卸载IvorySQL集群 +以node1为例 +``` +[root@manager-node1 ~] docker-compose -f ./docker-compose-ivypatroni_1.yml down +[root@manager-node1 ~] docker-compose -f ./docker-compose-etcd1.yml down +``` \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/4.6.4.adoc b/CN/modules/ROOT/pages/master/4.6.4.adoc new file mode 100644 index 0000000..2cbbd0b --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.6.4.adoc @@ -0,0 +1,71 @@ + +:sectnums: +:sectnumlevels: 5 + += Docker & Podman 部署IvorySQL + +== docker方式运行 + +** 从Docker Hub上获取IvorySQL镜像 +``` +$ docker pull ivorysql/ivorysql:5.0-ubi8 +``` + +** 运行IvorySQL +``` +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 +``` + +** 查看IvorySQL容器运行是否成功 +``` +$ docker ps | grep ivorysql +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +6faa2d0ed705 ivorysql:5.0-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql +``` + +== podman方式运行 + +** 从Docker Hub上获取IvorySQL镜像 +``` +[highgo@manager-node1 ~]$ podman pull ivorysql/ivorysql:5.0-ubi8 +✔ docker.io/ivorysql/ivorysql:5.0-ubi8 +Trying to pull 
docker.io/ivorysql/ivorysql:5.0-ubi8... +Getting image source signatures +Copying blob 5885448c5c88 done | +Copying blob 6c502b378234 done | +Copying blob 8b4f2b90d6b6 done | +Copying blob 9b000f2935f6 done | +Copying blob 806f782da874 done | +Copying blob e4c51845a9eb done | +Copying blob dcb1e9a04275 done | +Copying blob 285a279173f8 done | +Copying blob 1f6f247b9ae0 done | +Copying blob 3cc81bed8614 done | +Copying blob 863c87bf25eb done | +Copying blob 4f4fb700ef54 done | +Copying config 88e1bbeda8 done | +Writing manifest to image destination +88e1bbeda81c51d88e12cbd2b19730498f1343d1c64bb3dddc8ffcb08a1f965f +``` + +** 运行IvorySQL +``` +$ podman run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=123456 -d ivorysql/ivorysql:5.0-ubi8 +``` + +** 查看IvorySQL容器运行是否成功 +``` +[highgo@manager-node1 ~]$ podman ps | grep ivorysql +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +368dee58d5ef docker.io/ivorysql/ivorysql:5.0-ubi8 postgres 20 seconds ago Up 20 seconds 0.0.0.0:5434->5432/tcp, 1521/tcp, 5866/tcp ivorysql + +[highgo@manager-node1 ~]$ podman exec -it ivorysql /bin/bash +[root@8cc631eb413d /]# +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# +``` \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/4.7.1.adoc b/CN/modules/ROOT/pages/master/4.7.1.adoc new file mode 100644 index 0000000..3c8e2b5 --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.7.1.adoc @@ -0,0 +1,384 @@ + + +:sectnums: +:sectnumlevels: 5 +:imagesdir: ./_images + += 安装说明 + +IvorySQL Cloud平台是一个综合性的解决方案,它集成了IvorySQL数据库以及周边生态,以提供全面的数据库和资源管理功能。安装前需要上github编译安装好 + +前端:https://github.com/IvorySQL/ivory-cloud-web + +后端:https://github.com/IvorySQL/ivory-cloud + +搭建好K8S集群,并在集群master节点上安装ivory-operator + +https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE + +== IvorySQL Cloud平台安装 + +IvorySQL Cloud平台,目前支持Linux系统下的安装,以下为各部分对应的安装包: + +[width="99%",cols="<28%,<72%",options="header"] +|=== +|组件|安装包 +|前端|dist +|后端|cloudnative-1.0-SNAPSHOT.jar +|K8S集群 a| +[arabic] +. docker.io/ivorysql/ivory-operator:v5.0 +. docker.io/ivorysql/pgadmin:ubi8-9.9-5.0-1 +. docker.io/ivorysql/pgbackrest:ubi8-2.56.0-5.0-1 +. docker.io/ivorysql/postgres-exporter:ubi8-0.17.0-5.0-1 +. docker.io/ivorysql/ivorysql:ubi8-5.0-5.0-1 +|=== + +另外,云服务平台还需要用户安装以下组件: + +* *后端数据库*:负责存储和管理所有与云资源、用户信息、权限控制、计费信息等相关的数据。需要使用PG系列数据库,如PostgreSQL、瀚高数据库、IvorySQL等。 +* *NGINX*:支持云服务平台的web服务。 + +== 安装前准备 + +安装前,所有服务器都必须完成以下准备工作。并且将IvorySQL Cloud平台部署在K8S(1.23)的服务器上,且该K8S集群需要有默认的storage class. 
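+
+可以先在K8S集群上确认是否已经配置了默认的storage class(以下仅为检查示例,storage class 的名称因环境而异):
+
+[literal]
+----
+# 名称后带有 "(default)" 标记的即为默认 storage class
+kubectl get storageclass
+----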
+ +=== 关闭防火墙 + +所有服务器关闭防火墙,以保证他们之间的网络互通。 + +[literal] +---- +systemctl stop firewalld.service + +systemctl disable firewalld.service +---- + +=== 后端部署 + +==== 后端数据库 + +IvorySQL Cloud平台的后端数据库需用户自行安装,请参考IvorySQL官网。 + +==== 后端服务程序 + +===== 编译后端服务程序 + +[literal] +---- +# 克隆代码 + +git clone https://github.com/IvorySQL/ivory-cloud.git + +# 进入到项目根目录 + +cd ivory-cloud +---- + +请确保ivory-cloud\cloudnative\src\main\resources\monitor文件夹,及其所有的子文件夹下的以.sh结尾的文件是unix格式的,如果不是,请执行dos2unix命令转换成unix格式。 + +[literal] +---- +dos2unix cloudnative\src\main\resources\monitor\* + +# 编译 + +mvn clean + +mvn package -D maven.test.skip=true + +打包完成后,可以在 ivory-cloud/cloudnative/target下找到cloudnative-1.0-SNAPSHOT.jar文件 +---- + +===== 部署程序 + +[literal] +---- +在K8S服务器上执行如下操作: + +# 创建目录 + +mkdir -p /home/ivory + +# 将上一步编译好的文件ivory-cloud/cloudnative/target/cloudnative-1.0-SNAPSHOT.jar上传至上述目录 + +# 配置文件 + +## 创建配置目录 + +mkdir -p /home/ivory/config + +## 上传配置文件 + +将源代码ivory-cloud/cloudnative/src/main/resources目录里的如下三个配置文件上传至 /home/ivory/config + +application.yaml + +application-native.yaml + +spring_pro_logback.xml + +## 修改配置文件,请将url、username、password修改为<>安装的数据库的信息。 + +## /home/ivory/config/application-native.yaml + +datasource: + +druid: + +db-type: com.alibaba.druid.pool.DruidDataSource + +driver-class-name: org.postgresql.Driver + +url: jdbc:postgresql://127.0.0.1:5432/ivorysql + +username: ivorysql + +password: "ivory@123" +---- + +==== 启动后端服务程序 + +[literal] +---- +# 安装jdk1.8 + +yum install -y java-1.8.0-openjdk.x86_64 + +[root@cloud ivory]# pwd + +/home/ivory/ + +[root@cloud ivory]# nohup java -jar cloudnative-1.0-SNAPSHOT.jar > log_native 2>&1 & + +[root@cloud ivory]# ps -ef |grep java + +root 77494 1 0 10月09 ? 00:03:07 java -jar cloudnative-1.0-SNAPSHOT.jar +---- + +=== 前端部署 + +==== 编译前端服务程序 + +[literal] +---- +## 获取代码 + +git clone https://github.com/IvorySQL/ivory-cloud-web.git + +## 进入项目根目录 + +cd ivorysql-cloud-web + +## 安装依赖 + +npm install + +## 编译打包 + +npm run build:prod +---- + +==== 修改目录和文件权限 + +[literal] +---- +# 创建目录 + +[root@cloud opt]# mkdir -p /opt/cloud/web + +# 将前端构建后的dist文件夹置于/opt/cloud/web + +# 授权 + +[root@cloud web]# chmod 755 /opt/cloud/web/dist + +[root@cloud web]# chmod -R 777 /opt/cloud/web/dist +---- + +==== 修改config.js + +修改文件 + +[literal] +---- +[root@cloud dist]# pwd + +/home/cloud/web/dist + +[root@cloud dist]# vi config.js + +var PLATFROM_CONFIG = {}; + +// ip请更换为当前服务器地址 + +PLATFROM_CONFIG.baseUrl = "http://192.168.31.43:8081/cloudapi/api/v1" + +//true: need to show "注册" on login page + +//false: don't show "注册" on login page + +globalShowRegister = true + +//是否隐藏云原生数据库, true: 隐藏云原生数据库;false:显示云原生数据库 + +disableNative = false + +// 数据库类型 + +dbtype = "IvorySQL" + +dbversion = "5.0" +---- + +=== 安装部署nginx + +IvorySQL Cloud平台服务器需要安装nginx,以支持云服务平台的web服务。nginx需要用户自行安装,这里提供一种安装方法作为参考。 + +==== 下载nginx安装包 + +[literal] +---- +[root@cloud web]# wget https://nginx.org/download/nginx-1.20.1.tar.gz + +[root@cloud web]# ls -lrt + +总用量 3924 + +-rwxrwxr-x. 1 root root 1061461 5月 25 2021 nginx-1.20.1.tar.gz + +-rwxrwxr-x. 1 root root 2943732 10月 9 16:43 dist.tar.gz + +drwxrwxrwx. 
4 root root 103 10月 21 13:20 dist +---- + +==== 安装相关依赖 + +[literal] +---- +[root@host30 cloud]# yum -y install pcre-devel + +[root@host30 cloud]# yum -y install openssl openssl-devel +---- + +==== 编译安装nginx + +nginx会被安装在configure时由--prefix指定的目录下,例如这里的/opt/cloud/nginx: + +[literal] +---- +## 解压缩nginx-1.20.1.tar.gz安装包 + +[root@cloud web]# tar -zxvf nginx-1.20.1.tar.gz + +## 解压后生成nginx-1.20.1文件夹 + +[root@cloud web]# ls -lrt + +总用量 3924 + +-rwxrwxr-x. 1 root root 1061461 5月 25 2021 nginx-1.20.1.tar.gz + +-rwxrwxr-x. 1 root root 2943732 10月 9 16:43 dist.tar.gz + +drwxrwxr-x. 9 1001 1001 186 10月 9 16:53 nginx-1.20.1 + +drwxrwxrwx. 4 root root 103 10月 21 13:20 dist + +## 配置导向 + +[root@cloud web]# cd nginx-1.20.1 + +[root@cloud nginx-1.20.1]# ./configure --prefix=/opt/cloud/nginx --with-http_ssl_module + +## 编译安装 + +[root@cloud nginx-1.20.1]# make + +[root@cloud nginx-1.20.1]# make install +---- + +==== 修改配置文件nginx.conf + +配置文件在/opt/cloud/nginx路径下,可以按照github上readme对nginx.conf进行对比修改。ip请配置为当前服务器的ip。 + +[literal] +---- +server { + +listen 9104; + +server_name 192.168.31.43; + +location / { + +root /opt/cloud/web/dist; + +index index.html index.htm; + +} + +error_page 500 502 503 504 /50x.html; + +location = /50x.html { + +root html; + +} + +} +---- + +==== 启动nginx + +[literal] +---- +[root@cloud sbin]# pwd + +/opt/cloud/nginx/sbin + +[root@cloud sbin]# ./nginx -c /opt/cloud/nginx/conf/nginx.conf + +[root@cloud sbin]# ps -ef | grep nginx + +root 2179 131037 0 09:46 pts/1 00:00:00 grep --color=auto nginx + +root 55047 1 0 10月21 ? 00:00:00 nginx: master process ./nginx -c /opt/cloud/nginx/conf/nginx.conf + +nobody 55048 55047 0 10月21 ? 00:00:00 nginx: worker process +---- + +=== operator部署 + +请自行搭建K8S,此处描述为在K8S集群上安装ivory-operator和load镜像。 + +==== 安装ivory-operator + +参见 + +https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE[https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE] + +网站上的readme + +==== load镜像 + +如果服务器可以直接访问到docker hub,可以跳过该章节。否则需要在所有的K8S集群节点提前load 如下docker镜像 + +[literal] +---- +docker.io/ivorysql/pgadmin:ubi8-9.9-5.0-1 + +docker.io/ivorysql/pgbackrest:ubi8-2.56.0-5.0-1 + +docker.io/ivorysql/pgbouncer:ubi8-1.23.0-5.0-1 + +docker.io/ivorysql/postgres-exporter:ubi8-0.17.0-5.0-1 + +docker.io/ivorysql/ivorysql:ubi8-5.0-5.0-1 + +docker.io/prom/prometheus:v2.33.5 + +docker.io/prom/alertmanager:v0.22.2 + +docker.io/grafana/grafana:8.5.10 +---- \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/4.7.2.adoc b/CN/modules/ROOT/pages/master/4.7.2.adoc new file mode 100644 index 0000000..6642c3f --- /dev/null +++ b/CN/modules/ROOT/pages/master/4.7.2.adoc @@ -0,0 +1,244 @@ + +:sectnums: +:sectnumlevels: 5 +:imagesdir: ./_images + += 使用说明 + +IvorySQL Cloud是一个基于Web的服务平台,用户可以在任意电脑上通过浏览器进行访问,安装云服务平台的服务器IP为192.168.31.43,然后在浏览器输入http://192.168.31.43:9104/(9104为前端nginx.conf.default中设置的端口)即可进入登录页面页面: + +image::media/image3.png[image3,width=274,height=355] + +== 登录和退出 + +=== 用户登录 + +进入登录页面,根据提示输入信息即可访问IvorySQL云服务平台: + +image::media/image4.png[image4,width=552,height=272] + +=== 退出登录 + +点击页面右上角的头像,会显示当前登录用户名和“Log Out”。点击“Log Out”退出当前登录页面。点击用户名则保留当前登录页面: + +image::media/image5.png[image5,width=552,height=62] + +== 管理员功能 + +=== 添加集群 + +[arabic] +. 平台登录admin用户后,点击左侧菜单栏的“K8S集群管理”按钮,进入集群管理页。 + +image::media/image6.png[image6,width=601,height=91] + +[arabic, start=2] +. 
点击页面左上角“增加Kubernetes集群”按钮,输入集群信息后提交。 + +image::media/image7.png[image7,width=333,height=291] + +=== 管理集群 + +在集群管理页面,可以查看已添加集群的详细信息,同时对集群信息进行编辑和删除操作。 + +image::media/image8.png[image8,width=553,height=82] + +== demo用户功能 + +=== 数据库订阅 + +[arabic] +. 使用demo用户登录平台。 + +. 点击左侧菜单栏"数据库订阅"按钮,填写需要申请的数据库的各个参数,然后点击"下一步:确认信息"。 + +image::media/image9.png[image9,width=552,height=272] + +[arabic, start=3] +. 检查信息后,点击"确定" + +image::media/image10.png[image10,width=552,height=272] + +[arabic, start=4] +. "确定"后,会自动跳转到"数据库管理"页面,查看订阅任务 + +image::media/image11.png[image11,width=552,height=77] + +image::media/image12.png[image12,width=552,height=79] + +=== 数据库管理 + +显示云服务平台管理的数据库信息。 + +image::media/image12.png[image12,width=552,height=79] + +=== 数据库重启 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"重启"选项。 + +image::media/image13.png[image13,width=79,height=286] + +[arabic, start=3] +. 检查信息后,点"确认" + +image::media/image14.png[image14,width=553,height=210] + +=== 修改密码 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库管理",选中数据库,点击"实例ID"列。 + +image::media/image15.png[image15,width=553,height=48] + +[arabic, start=3] +. 进入数据库详情页面,点击"修改密码"列的图标。 + +image::media/image17.png[image17,width=553,height=173] + +[arabic, start=4] +. 输入新的数据库密码,点击"确定"按钮。 + +image::media/image18.png[image18,width=553,height=352] + +=== 删除实例 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"删除实例"选项。 + +image::media/image19.png[image19,width=55,height=201] + +[arabic, start=3] +. 检查信息后,点击"确定"按钮。 + +image::media/image20.png[image20,width=552,height=207] + +=== 磁盘扩容 + +[arabic] +. 此功能需用户自行配置相关插件,如topolvm +. demo用户登录平台。 +. 点击左侧导航栏"磁盘扩容",选中数据库,点击"操作"列的"修改"按钮;或者点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"磁盘扩容"选项。 + +image::media/image21.png[image21,width=552,height=197] + +image::media/image22.png[image22,width=63,height=200] + +[arabic, start=4] +. 检查并输入磁盘扩容的大小后,点击"确定"按钮。 + +image::media/image23.png[image23,width=553,height=242] + +=== 规格变更 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"规格变更",选中数据库,点击"操作"列的"修改"按钮;或者点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"规格变更"选项。 + +image::media/image24.png[image24,width=552,height=196] + +image::media/image25.png[image25,width=59,height=205] + +[arabic, start=3] +. 检查信息并输入规格变更的大小后,点击"确定"按钮。 + +image::media/image26.png[image26,width=552,height=240] + +=== 数据库备份 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库备份",选中数据库,点击"操作"列的"备份";或者点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"备份"选项。 + +image::media/image27.png[image27,width=552,height=197] + +image::media/image28.png[image28,width=64,height=199] + +[arabic, start=3] +. 检查信息并输入备份名称后,点击"确定"按钮。 + +image::media/image29.png[image29,width=552,height=285] + +=== 数据库恢复 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库恢复",选中数据库,点击"操作"列的"查看"按钮;或者点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"恢复"选项。 + +image::media/image30.png[image30,width=552,height=196] + +image::media/image31.png[image31,width=58,height=201] + +[arabic, start=3] +. 选择需要备份的文件后,点"操作"列的"恢复" + +image::media/image32.png[image32,width=552,height=304] + +image::media/image33.png[image33,width=552,height=305] + +[arabic, start=4] +. 输入需要恢复的数据库信息,密码为备份前的数据库密码 + +image::media/image34.png[image34,width=552,height=246] + +[arabic, start=5] +. 后续参考"4.1数据库订阅"的步骤。 + +=== 数据库监控 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"监控工具",选择数据库所在的集群;或者点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,在下拉选项中选择"监控"选项。 + +image::media/image35.png[image35,width=552,height=261] + +image::media/image36.jpeg[image36,width=65,height=215] + +[arabic, start=3] +. 
等待监控创建完毕后,再次重复(2)中操作,跳转新页面,输入用户名:admin,密码admin,点击login登录 + +____ +image::media/image37.png[image37,width=552,height=272] +____ + +[arabic, start=4] +. 选择放大镜图标,点击查看监控指标。 + +image::media/image39.png[image39,width=553,height=264] + +image::media/image40.png[image40,width=552,height=261] + +image::media/image41.png[image41,width=553,height=261] + +image::media/image42.png[image42,width=552,height=72] + +image::media/image43.png[image43,width=552,height=259] + +image::media/image44.png[image44,width=552,height=280] + +=== 可视化登录工具 + +[arabic] +. demo用户登录平台。 + +. 点击左侧导航栏"数据库管理",选中数据库,点击"操作"列的"更多"按钮,选择"登录"选项。 + +image::media/image45.jpeg[image45,width=65,height=205] + +[arabic, start=3] +. 跳转新页面后,输入数据库用户名 sysdba@ivyo.com 及数据库密码,点击"Login"按钮。 + +image::media/image46.png[image46,width=383,height=263] + +[arabic, start=4] +. 连接数据库,即可进行操作。 \ No newline at end of file diff --git a/CN/modules/ROOT/pages/master/4.7.adoc b/CN/modules/ROOT/pages/master/4.7.adoc new file mode 100644 index 0000000..e69de29 diff --git a/CN/modules/ROOT/pages/master/5.0.adoc b/CN/modules/ROOT/pages/master/5.0.adoc index e1b1273..4fe9f8b 100644 --- a/CN/modules/ROOT/pages/master/5.0.adoc +++ b/CN/modules/ROOT/pages/master/5.0.adoc @@ -7,21 +7,20 @@ IvorySQL 作为一款兼容 Oracle 且基于 PostgreSQL 的高级开源数据库,具备强大的扩展能力,支持丰富的生态系统插件。这些插件可以帮助用户在不同场景下增强数据库功能,包括地理信息处理、向量检索、全文搜索、数据定义提取和路径规划等。以下是当前 IvorySQL 官方兼容和支持的主要插件列表: -+ -[cols="2,1,3,3"] +[cols="1,2,1,3,3"] |==== -|*插件名称*|*版本*|*功能描述*|*适用场景* -| xref:master/5.1.adoc[postgis] | 3.5.4 | 为 IvorySQL 提供地理空间数据支持,包括空间索引、空间函数和地理对象存储 | 地理信息系统(GIS)、地图服务、位置数据分析 -| xref:master/5.2.adoc[pgvector] | 0.8.1 | 支持向量相似性搜索,可用于存储和检索高维向量数据| AI 应用、图像检索、推荐系统、语义搜索 -| xref:master/5.3.adoc[pgddl (DDL Extractor)] | 0.31 | 提取数据库中的 DDL(数据定义语言)语句,便于版本管理和迁移 | 数据库版本控制、CI/CD 集成、结构比对与同步 -| xref:master/5.4.adoc[pg_cron]​ | 1.6.0 | 提供数据库内部的定时任务调度功能,支持定期执行SQL语句 | 数据清理、定期统计、自动化维护任务 -| xref:master/5.5.adoc[pgsql-http]​ | 1.7.0 | 允许在SQL中发起HTTP请求,与外部Web服务进行交互 | 数据采集、API集成、微服务调用 -| xref:master/5.6.adoc[plpgsql_check] | 2.8 | 提供PL/pgSQL代码的静态分析功能,可在开发阶段发现潜在错误 | 存储过程开发、代码质量检查、调试优化 -| xref:master/5.7.adoc[pgroonga] | 4.0.4 | 提供​非英语语言全文搜索功能,满足高性能应用的需求 | 中日韩等语言的全文搜索功能 -| xref:master/5.8.adoc[pgaudit] | 18.0 | 提供细粒度的审计功能,记录数据库操作日志,便于安全审计和合规性检查 | 数据库安全审计、合规性检查、审计报告生成 -| xref:master/5.9.adoc[pgrouting] | 3.8.0 | 提供地理空间数据的路由计算功能,支持多种算法和数据格式 | 地理空间分析、路径规划、物流优化 -| xref:master/5.10.adoc[system_stats] | 3.2 | 提供用于访问系统级统计信息的函数 | 系统监控 +|*序号*|*插件名称*|*版本*|*功能描述*|*适用场景* +| 1 | xref:master/5.1.adoc[postgis] | 3.5.4 | 为 IvorySQL 提供地理空间数据支持,包括空间索引、空间函数和地理对象存储 | 地理信息系统(GIS)、地图服务、位置数据分析 +| 2 | xref:master/5.2.adoc[pgvector] | 0.8.1 | 支持向量相似性搜索,可用于存储和检索高维向量数据| AI 应用、图像检索、推荐系统、语义搜索 +| 3 | xref:master/5.3.adoc[pgddl (DDL Extractor)] | 0.31 | 提取数据库中的 DDL(数据定义语言)语句,便于版本管理和迁移 | 数据库版本控制、CI/CD 集成、结构比对与同步 +| 4 | xref:master/5.4.adoc[pg_cron]​ | 1.6.0 | 提供数据库内部的定时任务调度功能,支持定期执行SQL语句 | 数据清理、定期统计、自动化维护任务 +| 5 | xref:master/5.5.adoc[pgsql-http]​ | 1.7.0 | 允许在SQL中发起HTTP请求,与外部Web服务进行交互 | 数据采集、API集成、微服务调用 +| 6 | xref:master/5.6.adoc[plpgsql_check] | 2.8 | 提供PL/pgSQL代码的静态分析功能,可在开发阶段发现潜在错误 | 存储过程开发、代码质量检查、调试优化 +| 7 | xref:master/5.7.adoc[pgroonga] | 4.0.4 | 提供​非英语语言全文搜索功能,满足高性能应用的需求 | 中日韩等语言的全文搜索功能 +| 8 | xref:master/5.8.adoc[pgaudit] | 18.0 | 提供细粒度的审计功能,记录数据库操作日志,便于安全审计和合规性检查 | 数据库安全审计、合规性检查、审计报告生成 +| 9 | xref:master/5.9.adoc[pgrouting] | 3.8.0 | 提供地理空间数据的路由计算功能,支持多种算法和数据格式 | 地理空间分析、路径规划、物流优化 +| 10 | xref:master/5.10.adoc[system_stats] | 3.2 | 提供用于访问系统级统计信息的函数 | 系统监控 |==== 这些插件均经过 IvorySQL 团队的测试和适配,确保在 IvorySQL 环境下稳定运行。用户可以根据业务需求选择合适的插件,进一步提升数据库系统的能力和灵活性。 diff --git 
a/CN/modules/ROOT/pages/master/5.1.adoc b/CN/modules/ROOT/pages/master/5.1.adoc index a8a4ab1..0329399 100644 --- a/CN/modules/ROOT/pages/master/5.1.adoc +++ b/CN/modules/ROOT/pages/master/5.1.adoc @@ -8,7 +8,7 @@ IvorySQL原生100%兼容PostgreSQL,因此,PostGIS可以完美适配IvorySQL。 == 安装 -根据开发环境,用户可从 https://postgis.net/documentation/getting_started/#installing-postgis[PostGIS安装] 页面选择适合自己的方式进行安装PostGIS安装。 +根据开发环境,用户可从 https://postgis.net/documentation/getting_started/#installing-postgis[PostGIS安装] 页面选择适合自己的方式进行安装PostGIS。 === 源码安装 除PostGIS社区提供的安装方式以外,IvorySQL社区也提供了源码安装方式,源码安装环境为 Ubuntu 24.04(x86_64)。 @@ -41,7 +41,7 @@ sudo apt install \ $ wget https://download.osgeo.org/postgis/source/postgis-3.5.4.tar.gz $ tar xvf postgis-3.5.4.tar.gz $ cd postgis-3.5.4 -$ ./configure --with-pgconfig=/path/to/pg_config eg: /opt/IvorySQL-5/bin/pg_config,如果ivorysql安装目录在/opt/IvorySQL-5. +$ ./configure --with-pgconfig=/path/to/pg_config eg: /usr/ivory-5/bin/pg_config,如果ivorysql安装目录在/usr/ivory-5. $ make $ sudo make install ---- diff --git a/CN/modules/ROOT/pages/master/5.2.adoc b/CN/modules/ROOT/pages/master/5.2.adoc index fa7e298..c8bb7c8 100644 --- a/CN/modules/ROOT/pages/master/5.2.adoc +++ b/CN/modules/ROOT/pages/master/5.2.adoc @@ -117,7 +117,7 @@ NOTICE: [4,5,6] CALL ---- -==== 函数(FUNCTION) +=== 函数(FUNCTION) [literal] ---- ivorysql=# CREATE OR REPLACE FUNCTION AddVector(a vector(3), b vector(3)) diff --git a/CN/modules/ROOT/pages/master/6.1.1.adoc b/CN/modules/ROOT/pages/master/6.1.1.adoc index f9934dd..a14bf6c 100644 --- a/CN/modules/ROOT/pages/master/6.1.1.adoc +++ b/CN/modules/ROOT/pages/master/6.1.1.adoc @@ -12,7 +12,7 @@ 基本做法是新增一套兼容Oracle风格的语法和词法,在开启Oracle兼容的情况下,走Oracle风格的语法分析,生成相应的语法树。 具体方法: -在src/backend/下面,新建一个oracle_parser目录,将src/backend/parser/下的scan.l和gram.y复制到该目录下,改名成ora_gram.y和ora_scan.l,添加 Oracle风格的语法和此法分析代码,同时复制keywords.c到该目录下,用来存放自己的关键字。该oracle_parser目录编译成一个动态库 libparser_oracle.so。当开启Oracle兼容的时候,配置文件ivorysql.conf被嵌入到postgresql.conf文件的末尾。配置文件ivorysql.conf中的shared_preload_libraries参数中添加“liboracle_parser”,这样当数据库启动时能够自动载入liboracle_parser动态库。 +在src/backend/下面,新建一个oracle_parser目录,将src/backend/parser/下的scan.l和gram.y复制到该目录下,改名成ora_gram.y和ora_scan.l,添加 Oracle风格的语法和词法分析代码,同时复制keywords.c到该目录下,用来存放自己的关键字。该oracle_parser目录编译成一个动态库 libparser_oracle.so。当开启Oracle兼容的时候,配置文件ivorysql.conf被嵌入到postgresql.conf文件的末尾。配置文件ivorysql.conf中的shared_preload_libraries参数中添加“liboracle_parser”,这样当数据库启动时能够自动载入liboracle_parser动态库。 新增ora_raw_parser 函数指针,当libparser_oracle.so动态库被加载时,该动态库中的 _PG_init() 函数将 oracle_raw_parser() 函数的地址赋值给 ora_raw_parser,_PG_fini()则在兼容模式切换时负责重置 ora_raw_parser 为空。 diff --git a/CN/modules/ROOT/pages/master/6.2.1.adoc b/CN/modules/ROOT/pages/master/6.2.1.adoc index f84a790..db533e3 100644 --- a/CN/modules/ROOT/pages/master/6.2.1.adoc +++ b/CN/modules/ROOT/pages/master/6.2.1.adoc @@ -69,7 +69,7 @@ Oracle 模式下额外创建 ivorysql.conf 配置文件。 执行 bootstrap_template1() 加载对应模式的 BKI 文件初始化template1模板数据库, -IvorySQL会额外设置template1模板数据库的数据库模式(oracle/pg)和大小写转换模式以。 +IvorySQL会额外设置template1模板数据库的数据库模式(oracle/pg)和大小写转换模式。 load_plisql():安装兼容 Oracle PL/SQL 的 PL/iSQL 过程语言 diff --git a/CN/modules/ROOT/pages/master/6.3.1.adoc b/CN/modules/ROOT/pages/master/6.3.1.adoc index 4df8a91..04d0189 100644 --- a/CN/modules/ROOT/pages/master/6.3.1.adoc +++ b/CN/modules/ROOT/pages/master/6.3.1.adoc @@ -9,7 +9,7 @@ Oracle和IvorySQL中的 LIKE 语法是相同的,他们的区别在于表达式 == 实现原理 -PostgreSQL的字符串基本类型是text,所以 LIKE 是以text为基础,其他PostgreSQL类型隐式转换成text,不用创建opeartor就能自动转换;IvorySQL中oracle兼容的字符串类型是varchar2,因此创建一个varchar2的 LIKE 操作符,其他oracle的类型也通过隐式转换成varchar2实现不用创建操作符,也能使用 LIKE 操作符。 
+PostgreSQL的字符串基本类型是text,所以 LIKE 是以text为基础,其他PostgreSQL类型隐式转换成text,不用创建operator就能自动转换;IvorySQL中oracle兼容的字符串类型是varchar2,因此创建一个varchar2的 LIKE 操作符,其他oracle的类型也通过隐式转换成varchar2,不用创建操作符也能使用 LIKE 操作符。 在之前实现oracle兼容数据类型时,IvorySQL做了integer,float8,float4 等一些数据类型到varchar2的隐式转换,没有直接到text的。因此实现这些兼容类型的 LIKE 操作符兼容,有两种方式。一种需要针对每个类型添加一个 LIKE 操作符,另一种是做一个基本的varchar2的 LIKE 操作符。在第二种实现方式中,IvorySQL针对float8,integer,number等已经做了向varchar2类型的隐式转换,这些数据类型可以和varchar2用同一个操作符,这样在创建操作符的时候只需要创建varchar2类型的 LIKE 操作符就可以。 diff --git a/CN/modules/ROOT/pages/master/6.3.12.adoc b/CN/modules/ROOT/pages/master/6.3.12.adoc index 6b9480b..96d405b 100644 --- a/CN/modules/ROOT/pages/master/6.3.12.adoc +++ b/CN/modules/ROOT/pages/master/6.3.12.adoc @@ -129,7 +129,7 @@ typedef enum IvyStmtType { IVY_STMT_UNKNOW, IVY_STMT_DO, - IVY_STMT_DOFROMCALL, /* new statementt ype */ + IVY_STMT_DOFROMCALL, /* new statement type */ IVY_STMT_DOHANDLED, IVY_STMT_OTHERS } IvyStmtType; diff --git a/CN/modules/ROOT/pages/master/6.3.3.adoc b/CN/modules/ROOT/pages/master/6.3.3.adoc index a4b9952..5cb4141 100644 --- a/CN/modules/ROOT/pages/master/6.3.3.adoc +++ b/CN/modules/ROOT/pages/master/6.3.3.adoc @@ -12,7 +12,7 @@ IvorySQL提供了兼容Oracle RowID的功能。RowID是一种伪列,在创建 RowID 应当具备以下特性: |==== -| 1. 逻辑的标识每一行,且值唯一 +| 1. 逻辑地标识每一行,且值唯一 | 2. 可以通过ROWID快速查询和修改表的其他列,自身不能被插入和修改 | 3. 用户可以控制是否开启此功能 |==== @@ -21,7 +21,7 @@ RowID 应当具备以下特性: 在IvorySQL中系统列 ctid 字段代表了数据行在表中的物理位置,也就是行标识(tuple identifier),由一对数值组成(块编号和行索引),可以通过ctid快速的查找表中的数据行,这样和Oracle的RowID行为很相似,但是ctid值有可能会改变(例如当update/ vacuum full时),因此ctid不适合作为一个长期的行标识。 -我们选择了表的oid加一个序列值组成的复合类型来做为RowID值,其中的序列是系统列。如果RowID功能被开启,则在建表的同时创建一个名为table-id_rowid_seq 的序列。同时在heap_form_tuple构造函数中,为 HeapTupleHeaderData 的长度增加8个字节,并标识td->t_infomask = HEAP_HASROWID 位来表示rowid的存在。 +我们选择了表的oid加一个序列值组成的复合类型来作为RowID值,其中的序列是系统列。如果RowID功能被开启,则在建表的同时创建一个名为table-id_rowid_seq 的序列。同时在heap_form_tuple构造函数中,为 HeapTupleHeaderData 的长度增加8个字节,并标识td->t_infomask = HEAP_HASROWID 位来表示rowid的存在。 在开启了ROWID的GUC参数或建表时带上 WITH ROWID 选项,或对普通表执行 ALTER TABLE … SET WITH ROWID 时会通过增加序列创建命令来创建一个序列。 ``` @@ -44,7 +44,7 @@ RowID 应当具备以下特性: 同时为了快速通过RowID伪列查询到一行数据,默认会在表的RowID列上创建一个UNIQUE索引,以提供快速查询功能。 -RowID列做为系统属性列其实现是通过在 heap.c 中新增一个系统列来实现的。 +RowID列作为系统属性列,是通过在 heap.c 中新增一个系统列来实现的。 ``` /* * Compatible Oracle ROWID pseudo column. 
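上文 6.3.3 描述了 WITH ROWID 选项与 ROWID 伪列的行为,下面给出一个假设性的使用示意(表名 t 及列定义均为虚构,具体语法以实际版本的文档为准):

```
-- 建表时启用 ROWID(示意)
CREATE TABLE t (id int, name varchar2(20)) WITH ROWID;

-- 对已有普通表开启 ROWID(示意)
ALTER TABLE t SET WITH ROWID;

-- 通过 ROWID 伪列快速定位行,并基于它更新其他列(ROWID 本身不能被插入或修改)
SELECT rowid, id, name FROM t;
UPDATE t SET name = 'demo' WHERE rowid = (SELECT rowid FROM t WHERE id = 1);
```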
diff --git a/CN/modules/ROOT/pages/master/6.3.4.adoc b/CN/modules/ROOT/pages/master/6.3.4.adoc index c4f75dc..c134d97 100644 --- a/CN/modules/ROOT/pages/master/6.3.4.adoc +++ b/CN/modules/ROOT/pages/master/6.3.4.adoc @@ -87,7 +87,7 @@ values_clause_no_parens: === UPDATE语句增强 -在UPDATE语句做transform时候,也就是transformUpdateStmt的时候,如果是Oralce兼容模式,调用新添加的 transformIvyUpdateTargetList 函数。 +在UPDATE语句做transform的时候,也就是transformUpdateStmt的时候,如果是Oracle兼容模式,调用新添加的 transformIvyUpdateTargetList 函数。 在这个新函数中,对于参数origTlist(即targetList)中没有名字为row的情况,按原来UPDATE的transform流程执行 transformUpdateTargetList 函数。 对参数origTlist中有名字为row的情况:因为PostgreSQL中row可以作为列名,而Oracle 中row是保留关键字,不可以作为列名,所以需要区分row是否是表中的列,如果row不是要更新的表中的列,则调用新函数 transformUpdateRowTargetList 把语句 diff --git a/CN/modules/ROOT/pages/master/6.3.5.adoc b/CN/modules/ROOT/pages/master/6.3.5.adoc index 9c3bd2a..0f4fe93 100644 --- a/CN/modules/ROOT/pages/master/6.3.5.adoc +++ b/CN/modules/ROOT/pages/master/6.3.5.adoc @@ -16,7 +16,7 @@ IvorySQL提供了兼容Oracle的NLS参数功能,包含如下参数。 |nls_timestamp_format | 兼容Oracle的同名参数,控制带时间的日期格式。 |nls_timestamp_tz_format | 兼容Oracle的同名参数,控制带时区的日期时间格式。 |nls_territory | 兼容Oracle的同名参数,指定数据库的默认区域。 -|nls_iso_currency | 兼容Oracle的同名参数,指定的国家和区域制定唯一的货币符号。 +|nls_iso_currency | 兼容Oracle的同名参数,指定国家和区域对应的唯一货币符号。 |nls_currency | 兼容Oracle的同名参数,指定显示本地货币的符号,对应数字字符串格式中占位符L。 |==== @@ -131,7 +131,7 @@ oratimestamptz_in() === GUC参数 `nls_date_format`/`nls_timestamp_format`/`nls_timestamp_tz_format` -这三个GUC参数,在函数 `ora_do_to_timestamp()` 中做为格式字符串,对输入的字符串进行格式检查与模式识别。 +这三个GUC参数,在函数 `ora_do_to_timestamp()` 中作为格式字符串,对输入的字符串进行格式检查与模式识别。 下面是其默认值,可以通过设置其值为"pg"使其失效。"pg"表示禁用NLS特定行为,恢复为PostgreSQL的默认行为。 ```c diff --git a/CN/modules/ROOT/pages/master/6.3.6.adoc b/CN/modules/ROOT/pages/master/6.3.6.adoc index e9be67a..9c03da8 100644 --- a/CN/modules/ROOT/pages/master/6.3.6.adoc +++ b/CN/modules/ROOT/pages/master/6.3.6.adoc @@ -68,7 +68,7 @@ Oracle的sqlplus工具使用斜线(/)来结束函数和存储过程,IvorySQL ECHO; ``` -Psql工具需要检测斜线/的含义,避免将注释等部分的斜线判定为结束符,为此在oracle_fe_utils/ora_psqlscan.l文件中增加一个单独的接口is_oracle_slash来检测: +为避免将注释等场景中的斜线误判为结束符,在 oracle_fe_utils/ora_psqlscan.l 文件中新增了 is_oracle_slash 函数接口用于检测: ``` bool is_oracle_slash(PsqlScanState state, const char *line) diff --git a/CN/modules/ROOT/pages/master/6.3.9.adoc b/CN/modules/ROOT/pages/master/6.3.9.adoc index 89a8a8b..b1f8adf 100644 --- a/CN/modules/ROOT/pages/master/6.3.9.adoc +++ b/CN/modules/ROOT/pages/master/6.3.9.adoc @@ -11,7 +11,7 @@ == 实现说明 -如果在数据库初始化时附加了参数 `-C`,值可以为 `normal/interchange/lowercase`,则代码中 `Intidb.c->main()` 函数会处理该参数,根据参数值设置全局变量 `caseswitchmode`。然后 `initdb` 命令会以 `-boot` 模式启动一个 `psotgres` 进程用于设置 `template1` 模板数据库,同时赋予参数 `-C ivorysql.identifier_case_switch=caseswitchmode` 给新进程。 +如果在数据库初始化时附加了参数 `-C`,值可以为 `normal/interchange/lowercase`,则代码中 `Initdb.c->main()` 函数会处理该参数,根据参数值设置全局变量 `caseswitchmode`。然后 `initdb` 命令会以 `-boot` 模式启动一个 `postgres` 进程用于设置 `template1` 模板数据库,同时赋予参数 `-C ivorysql.identifier_case_switch=caseswitchmode` 给新进程。 这个新启动的后端进程会通过下面的代码将 `identifier_case_switch` 信息写入 `pg_control` 文件: diff --git a/CN/modules/ROOT/pages/master/7.10.adoc b/CN/modules/ROOT/pages/master/7.10.adoc index 87b8cce..30574fb 100644 --- a/CN/modules/ROOT/pages/master/7.10.adoc +++ b/CN/modules/ROOT/pages/master/7.10.adoc @@ -7,7 +7,7 @@ 在Oracle中,常会出现带有XML函数的SQL代码,IvorySQL在PostgreSQL的基础上,实现与Oracle XML函数的高度兼容,确保了从Oracle迁移到IvorySQL后的数据格式和结构的一致性。这种兼容性意味着用户无需对现有的XML处理逻辑进行大规模修改,从而保证了数据的完整性和准确性。此外,IvorySQL的这种跨平台的兼容特性,也降低了因格式差异带来的额外用户维护和升级成本,使得数据处理和管理更加高效、可靠和灵活。 [TIP] ==== -XML(eXtended Markup Language可扩展标记语言)是一种基于文本的,用于结构化任何可标记文档的格式语言。它是一种轻便的,可扩展的,标准的且简学易懂的保存数据的语言。 
+XML(eXtensible Markup Language,可扩展标记语言)是一种基于文本的,用于结构化任何可标记文档的格式语言。它是一种轻便的、可扩展的、标准的且简单易懂的保存数据的语言。 ==== == 实现原理 diff --git a/CN/modules/ROOT/pages/master/7.14.adoc b/CN/modules/ROOT/pages/master/7.14.adoc index 4e483b7..cfe7bfc 100644 --- a/CN/modules/ROOT/pages/master/7.14.adoc +++ b/CN/modules/ROOT/pages/master/7.14.adoc @@ -12,7 +12,7 @@ IvorySQL提供了兼容Oracle RowID的功能。RowID是一种伪列,在创建 RowID 应当具备以下特性: |==== -| 1. 逻辑的标识每一行,且值唯一 +| 1. 逻辑地标识每一行,且值唯一 | 2. 可以通过ROWID快速查询和修改表的其他列,自身不能被插入和修改 | 3. 用户可以控制是否开启此功能 |==== diff --git a/CN/modules/ROOT/pages/master/7.15.adoc b/CN/modules/ROOT/pages/master/7.15.adoc index 0de50f9..be7e0e5 100644 --- a/CN/modules/ROOT/pages/master/7.15.adoc +++ b/CN/modules/ROOT/pages/master/7.15.adoc @@ -48,11 +48,11 @@ CREATE [ OR REPLACE ] FUNCTION 支持冒号占位符形式的绑定变量,例如: 1,:name。 -新增新的DO+USING语法: DO [ LANGUAGE lang_name ] code [USING IN | OUT | IN OUT, ...] +新增DO+USING语法: DO [ LANGUAGE lang_name ] code [USING IN | OUT | IN OUT, ...] 支持在libpq中按位置和按参数名字绑定变量,提供系统函数get_parameter_descr,该函数根据SQL语句,返回变量名字与位置的关系。 -=== ibpq中调用含out参数的函数 +=== libpq中调用含out参数的函数 libpq接口端提供准备、绑定、执行函数,这些函数与OCI相应函数类似。 diff --git a/CN/modules/ROOT/pages/master/7.17.adoc b/CN/modules/ROOT/pages/master/7.17.adoc index 09e3e29..84e9193 100644 --- a/CN/modules/ROOT/pages/master/7.17.adoc +++ b/CN/modules/ROOT/pages/master/7.17.adoc @@ -24,7 +24,7 @@ IvorySQL提供兼容Oracle的NLS参数功能。 |nls_timestamp_format | 兼容Oracle的同名参数,控制带时间的日期格式。 |nls_timestamp_tz_format | 兼容Oracle的同名参数,控制带时区的日期时间格式。 |nls_territory | 兼容Oracle的同名参数,指定数据库的默认区域。 -|nls_iso_currency | 兼容Oracle的同名参数,指定的国家和区域制定唯一的货币符号。 +|nls_iso_currency | 兼容Oracle的同名参数,指定国家和区域对应的唯一货币符号。 |nls_currency | 兼容Oracle的同名参数,指定显示本地货币的符号,对应数字字符串格式中占位符L。 |==== diff --git a/CN/modules/ROOT/pages/master/7.2.adoc b/CN/modules/ROOT/pages/master/7.2.adoc index 11aee8f..a6f3d0c 100644 --- a/CN/modules/ROOT/pages/master/7.2.adoc +++ b/CN/modules/ROOT/pages/master/7.2.adoc @@ -9,7 +9,7 @@ 为了兼容Oracle,需要在原有的GUC变量基础之上增加一些用于控制数据库执行结果的变量,以达到和Oracle行为一致的目的。 -为了以后更好的添加兼容的guc参数,以及为了更少的改动pg内核源码,我们需要设计一个框架将guc添加到一个统一的地方。 +为了更好地添加兼容的guc参数,以及为了更少地改动pg内核源码,我们需要设计一个框架将guc添加到一个统一的地方。 === 实现 diff --git a/CN/modules/ROOT/pages/master/7.4.adoc b/CN/modules/ROOT/pages/master/7.4.adoc index 69b7fce..29b59b7 100644 --- a/CN/modules/ROOT/pages/master/7.4.adoc +++ b/CN/modules/ROOT/pages/master/7.4.adoc @@ -17,7 +17,7 @@ == 功能 -- Initdb -m 初始化,需要判断不同的模式,其中Oracle模式下,需要执行postgres_oracle.bki的SQL语句; +- initdb -m 初始化,需要判断不同的模式,其中Oracle模式下,需要执行postgres_oracle.bki的SQL语句; - 启动时会根据初始化模式,判断是否为oracle兼容模式。 ``` diff --git a/CN/modules/ROOT/pages/master/7.5.adoc b/CN/modules/ROOT/pages/master/7.5.adoc index 890b3f1..60ab8b5 100644 --- a/CN/modules/ROOT/pages/master/7.5.adoc +++ b/CN/modules/ROOT/pages/master/7.5.adoc @@ -13,7 +13,7 @@ |==== |数据库名称|like模糊查询 |oracle|oracle的字符串类型是varchar2,支持对数字、日期、字符串字段类型的列用Like关键字配合通配符来实现模糊查询 -|IvorySQL|IvorySQL的字符串基本类型是text,所以like是以text为基础上,其他IvorySQL的类型能隐式转换成text,这样不用创建opeartor就能自动转换 +|IvorySQL|IvorySQL的字符串基本类型是text,所以like是以text为基础,其他IvorySQL的类型能隐式转换成text,这样不用创建operator就能自动转换 |==== == 测试用例 diff --git a/CN/modules/ROOT/pages/master/7.8.adoc b/CN/modules/ROOT/pages/master/7.8.adoc index 824357f..9c3d170 100644 --- a/CN/modules/ROOT/pages/master/7.8.adoc +++ b/CN/modules/ROOT/pages/master/7.8.adoc @@ -604,7 +604,7 @@ select regexp_replace('01234abcd56789','012','xxx')from dual; ``` === `regexp_substr` 函数 -功能:拾取合符正则表达式描述的字符子串,支持参数:text, text,integer /text, text, integer, integer/ text, text, integer, integer, text /varchar2 ,varchar2,测试用例如下: +功能:拾取符合正则表达式描述的字符子串,支持参数:text, text,integer /text, text, 
integer, integer/ text, text, integer, integer, text /varchar2 ,varchar2,测试用例如下: 查询'012ab34'中从第一个数开始的012字串: ``` @@ -1014,7 +1014,7 @@ select sessiontimezone() from dual; (1 row) ``` -修改timezone后,查看时区相信信息: +修改timezone后,查看时区详细信息: ``` set timezone = 'Asia/Hong_Kong'; @@ -1052,7 +1052,7 @@ select uid() from dual; === `USERENV` 函数 功能:返回当前用户环境的信息,测试用例如下: -查看当前用户是否是dba,如果是返回ture: +查看当前用户是否是dba,如果是返回true: ``` select userenv('isdba')from dual; diff --git a/CN/modules/ROOT/pages/master/8.2.adoc b/CN/modules/ROOT/pages/master/8.2.adoc index 1d23048..fd9add9 100644 --- a/CN/modules/ROOT/pages/master/8.2.adoc +++ b/CN/modules/ROOT/pages/master/8.2.adoc @@ -52,7 +52,7 @@ IvorySQL文档是用“asciidoc”编写的。为确保格式的质量和一致 ​21、链接文本两边禁止出现多余的空格。如不能出现 https://www.example.com/[某链接]。 -​22、链接必须有链接路径。如不能出现空连接等情况。 +​22、链接必须有链接路径。如不能出现空链接等情况。 == 示例 @@ -166,7 +166,7 @@ Some more text = Another top-level heading ``` -正确释放 +正确示范 ``` = Title @@ -225,7 +225,7 @@ Some text here Some more text here ``` -正确释放: +正确示范: ``` Some text here @@ -431,7 +431,7 @@ some text -20、链接必须有链接路径。如不能出现空连接等情况。 +20、链接必须有链接路径。如不能出现空链接等情况。 错误示范 diff --git a/CN/modules/ROOT/pages/master/9.adoc b/CN/modules/ROOT/pages/master/9.adoc index fb06eff..2432ee4 100644 --- a/CN/modules/ROOT/pages/master/9.adoc +++ b/CN/modules/ROOT/pages/master/9.adoc @@ -21,7 +21,7 @@ | | dropuser | dropuser移除一个已有的IvorySQL用户。只有超级用户以及具有 `CREATEROLE` 特权的用户能够移除IvorySQL用户(要移除一个超级用户,你必须自己是一个超级用户)。dropuser是SQL命令 http://www.postgresql.org/docs/17/sql-droprole.html[DROP ROLE] 的一个包装器。在通过这个工具和其他方法访问服务器来删除用户之间没有实质性的区别。 | | ecpg | `ecpg` 是用于 C 程序的嵌入式 SQL 预处理器。它通过将 SQL 调用替换为特殊函数调用把带有嵌入式 SQL 语句的 C 程序转换为普通 C 代码。输出文件可以被任何 C 编译器工具链处理。`ecpg` 将把命令行中给出的每一个输入文件转换为相应的 C 输出文件。 如果输入文件名没有任何扩展名,则假定为 `.pgc`。文件扩展名将由 `.c` 替换以构造输出文件名。 但是输出文件名可以使用 `-o` 选项覆盖。如果输入文件名只是 `-`,`ecpg` 从标准输入 读取程序(并写入标准输出,除非用 `-o` 重写)。 | | pg_amcheck | pg_amcheck支持对一个或多个数据库运行 http://www.postgresql.org/docs/17/amcheck.html[amcheck] 的损坏检查函数,并提供选项来选择要检查的模式、表和索引、要执行的检查类型以及是否并行执行检查,如果是,按并行数建立连接并使用。当前仅支持表关系和btree索引。其他关系类型将自动跳过。如果指定了 `dbname`,则它应该是要检查的单个数据库的名称,并且不应该存在其他数据库选择选项。否则,如果存在任何数据库选择选项,将检查所有匹配的数据库。如果不存在此类选项,将选中默认数据库。数据库选择选项包括 `--all`,`--database` 和 `--exclude-database`。它们还包括 `--relation`,`--exclude-relation`, `--table`,`--exclude-table`,`--index`,和 `--exclude-index`,但仅当这些选项与三段式模式一起使用时(例如,`mydb*.myschema*.myrel*`)。最后,它们包括 `--schema` 和 `--exclude-schema` 当这些选项与两段式模式一起使用时(例如 `mydb*.myschema*` )。 -| | pg_basebackup | pg_basebackup被用于获得一个正在运行的IvorySQL数据库集簇的基础备份。获得这些备份不会影响数据库的其他客户端,并且可以被用于时间点恢复,以及用作一个日志传送或流复制后备服务器的开始点。pg_basebackup对数据库群集的文件进行精确复制,同时确保服务器自动进入和退出备份模式。备份总是从整个数据库集簇获得,不可能备份单个数据库或数据库对象。关于选择性备份,必须使用一个像 http://www.postgresql.org/docs/17/app-pgdump.html[pg_dump] 的工具。备份通过一个使用复制协议常规IvorySQL连接制作。该连接必须由一个具有 `REPLICATION` 权限或者具有超级用户权限的用户ID建立,并且 http://www.postgresql.org/docs/17/auth-pg-hba-conf.html[`pg_hba.conf`]必须允许该复制连接。该服务器还必须被配置,使 http://www.postgresql.org/docs/17/runtime-config-replication.html#GUC-MAX-WAL-SENDERS[max_wal_senders] 设置得足够高以提供至少一个walsender用于备份以及一个用于WAL流(如果使用流)。在同一时间可以有多个 `pg_basebackup` 运行,但是从性能的角度来说,只进行一次备份并且复制结果通常更好。pg_basebackup不仅能从主控机也能从后备机创建一个基础备份。要从后备机获得一个备份,设置后备机让它能接受复制连接(也就是,设置 `max_wal_senders` 和 http://www.postgresql.org/docs/17/runtime-config-replication.html#GUC-HOT-STANDBY[hot_standby],并且适当配置其 `pg_hba.conf` )。你将也需要在主控机上启用 http://www.postgresql.org/docs/17/runtime-config-wal.html#GUC-FULL-PAGE-WRITES[full_page_writes]。注意在来自后备机的备份中有一些限制:不会在被备份的数据库集簇中创建备份历史文件。 pg_basebackup不能强制备用服务器在备份结束时切换到新的WAL文件。 当正在使用 `-X none` 时,如果服务器上的写活动比较低,pg_basebackup可能需要等待很长时间,以便切换和归档备份所需要的最后的WAL文件。 在这种情况下,在主服务器上运行 `pg_switch_wal` 
以触发立即的WAL文件切换可能是有用的。 如果在备份期间后备机被提升为主控机,备份会失败。 备份所需的所有 WAL 记录必须包含足够的全页写,这要求你在主控机上启用 `full_page_writes` 并且不使用一个类似pg_compresslog的工具以 `archive_command` 从 WAL 文件中移除全页写。每当pg_basebackup进行基本备份时,服务器的 `pg_stat_progress_basebackup` 视图将报告备份的进度。 +| | pg_basebackup | pg_basebackup被用于获得一个正在运行的IvorySQL数据库集簇的基础备份。获得这些备份不会影响数据库的其他客户端,并且可以被用于时间点恢复,以及用作一个日志传送或流复制后备服务器的开始点。pg_basebackup对数据库群集的文件进行精确复制,同时确保服务器自动进入和退出备份模式。备份总是从整个数据库集簇获得,不可能备份单个数据库或数据库对象。关于选择性备份,必须使用一个像 http://www.postgresql.org/docs/17/app-pgdump.html[pg_dump] 的工具。备份通过一个使用复制协议常规IvorySQL连接制作。该连接必须由一个具有 `REPLICATION` 权限或者具有超级用户权限的用户ID建立,并且 http://www.postgresql.org/docs/17/auth-pg-hba-conf.html[`pg_hba.conf`]必须允许该复制连接。该服务器还必须被配置,使 http://www.postgresql.org/docs/17/runtime-config-replication.html#GUC-MAX-WAL-SENDERS[max_wal_senders] 设置得足够高以提供至少一个walsender用于备份以及一个用于WAL流(如果使用流)。在同一时间可以有多个 `pg_basebackup` 运行,但是从性能的角度来说,只进行一次备份并且复制结果通常更好。pg_basebackup不仅能从主控机也能从后备机创建一个基础备份。要从后备机获得一个备份,设置后备机让它能接受复制连接(也就是,设置 `max_wal_senders` 和 http://www.postgresql.org/docs/17/runtime-config-replication.html#GUC-HOT-STANDBY[hot_standby],并且适当配置其 `pg_hba.conf` )。你将也需要在主控机上启用 http://www.postgresql.org/docs/17/runtime-config-wal.html#GUC-FULL-PAGE-WRITES[full_page_writes]。注意在来自后备机的备份中有一些限制:不会在被备份的数据库集簇中创建备份历史文件。 pg_basebackup不能强制后备服务器在备份结束时切换到新的WAL文件。 当正在使用 `-X none` 时,如果服务器上的写活动比较低,pg_basebackup可能需要等待很长时间,以便切换和归档备份所需要的最后的WAL文件。 在这种情况下,在主服务器上运行 `pg_switch_wal` 以触发立即的WAL文件切换可能是有用的。 如果在备份期间后备机被提升为主控机,备份会失败。 备份所需的所有 WAL 记录必须包含足够的全页写,这要求你在主控机上启用 `full_page_writes` 并且不使用一个类似pg_compresslog的工具以 `archive_command` 从 WAL 文件中移除全页写。每当pg_basebackup进行基本备份时,服务器的 `pg_stat_progress_basebackup` 视图将报告备份的进度。 | | pgbench | pgbench是一种在IvorySQL上运行基准测试的简单程序。它可能在并发的数据库会话中一遍一遍地运行相同序列的 SQL 命令,并且计算平均事务率(每秒的事务数)。默认情况下,pgbench会测试一种基于 TPC-B 但是要更宽松的场景,其中在每个事务中涉及五个 `SELECT`、 `UPDATE` 以及 `INSERT` 命令。但是,通过编写自己的事务脚本文件很容易用来测试其他情况。 | | pg_config | pg_config工具打印当前安装版本的IvorySQL的配置参数。它的设计目的之一是便于想与IvorySQL交互的软件包能够找到所需的头文件和库。 | | pg_dump | pg_dumppg_dump是用于备份一种IvorySQL数据库的工具。即使数据库正在被并发使用,它也能创建一致的备份。pg_dump不阻塞其他用户访问数据库(读取或写入)。pg_dump只转储单个数据库。要备份一个集簇或者集簇中 对于所有数据库公共的全局对象(例如角色和表空间),应使用 http://www.postgresql.org/docs/17/app-pg-dumpall.html[pg_dumpall]。转储可以被输出到脚本或归档文件格式。脚本转储是包含 SQL 命令的纯文本文件,它们可以用来重构数据库到它被转储时的状态。要从这样一个脚本恢复,将它输入到 http://www.postgresql.org/docs/17/app-psql.html[psql]。脚本文件甚至可以被用来在其他机器和其他架构上重构数据库。在经过一些修改后,甚至可以在其他 SQL 数据库产品上重构数据库。另一种可选的归档文件格式必须与 http://www.postgresql.org/docs/17/app-pgrestore.html[pg_restore] 配合使用来重建数据库。它们允许pg_restore能选择恢复什么,或者甚至在恢复之前对条目重排序。归档文件格式被设计为在架构之间可移植。当使用归档文件格式之一并与pg_restore组合时,pg_dump提供了一种灵活的归档和传输机制。pg_dump可以被用来备份整个数据库,然后pg_restore可以被用来检查归档并/或选择数据库的哪些部分要被恢复。最灵活的输出文件格式是“自定义”格式( `-Fc` )和“目录”格式( `-Fd` )。它们允许选择和重排序所有已归档项、支持并行恢复并且默认是压缩的。“目录”格式是唯一一种支持并行转储的格式。当运行pg_dump时,我们应该检查输出中有没有任何警告(打印在标准错误上) @@ -899,7 +899,7 @@ pg_basebackup — 获得一个IvorySQL集簇的一个基础备份 - `-R` `--write-recovery-conf` -创建一个 http://www.postgresql.org/docs/17/warm-standby.html#FILE-STANDBY-SIGNAL[`standby.signal`] 文件,并将连接设置附加到目标目录(或使用tar格式的基本存档文件中)的 `postgresql.auto.conf` 文件中。 这样可以简化使用备份结果设置备用服务器的过程。 `postgresql.auto.conf` 文件将记录连接设置(如果有)以及pg_basebackup所使用的复制槽,这样流复制后面就会使用相同的设置。 +创建一个 http://www.postgresql.org/docs/17/warm-standby.html#FILE-STANDBY-SIGNAL[`standby.signal`] 文件,并将连接设置附加到目标目录(或使用tar格式的基本存档文件中)的 `postgresql.auto.conf` 文件中。 这样可以简化使用备份结果设置后备服务器的过程。 `postgresql.auto.conf` 文件将记录连接设置(如果有)以及pg_basebackup所使用的复制槽,这样流复制后面就会使用相同的设置。 - `-T *olddir*=*newdir*` `--tablespace-mapping=*olddir*=*newdir*` @@ -3778,7 +3778,7 @@ pg_checksums — 在IvorySQL数据库集簇中启用、禁用或检查数据校 在大型集簇中启用校验和的时间可能很长。在此操作期间,写到数据目录的集簇或其它程序必须是未启动的,否则可能出现数据丢失。 
-当复制设置与执行关系文件块的直接拷贝的工具(例如 http://www.postgresql.org/docs/17/app-pgrewind.html[pg_rewind])一起使用时,启用和禁用校验和会导致以不正确校验和形式出现的页面损坏,如果未在所有节点上执行一致的操作的话。故在复制设置中启用或禁用校验和时,推荐一致地切换所有集簇之前停止所有集簇。此外销毁所有备用数据库,在主数据库上执行操作,最后从头开始重建备用服务器,也是安全的。 +当复制设置与执行关系文件块的直接拷贝的工具(例如 http://www.postgresql.org/docs/17/app-pgrewind.html[pg_rewind])一起使用时,启用和禁用校验和会导致以不正确校验和形式出现的页面损坏,如果未在所有节点上执行一致的操作的话。故在复制设置中启用或禁用校验和时,推荐一致地切换所有集簇之前停止所有集簇。此外销毁所有备用数据库,在主数据库上执行操作,最后从头开始重建后备服务器,也是安全的。 如果在启用或禁用校验和时异常终止或杀掉pg_checksums,那么集簇的数据校验和配置保持不变,pg_checksums可以重新运行以执行相同操作。 @@ -4079,7 +4079,7 @@ pg_rewind — 把一个IvorySQL数据目录与另一个从该目录中复制出 如果在处理时pg_rewind失败,则目标的数据目录很可能不在可恢复的状态。在这种情况下,推荐创建一个新的备份。 -由于 pg_rewind 完全从源复制配置文件,因此可能需要在重新启动目标服务器之前更正用于恢复的配置,特别是当目标服务器作为源的备用服务器重新引入时。 如果在倒带操作完成后重新启动服务器但未配置恢复,则目标可能会再次与主服务器分离。 +由于 pg_rewind 完全从源复制配置文件,因此可能需要在重新启动目标服务器之前更正用于恢复的配置,特别是当目标服务器作为源的后备服务器重新引入时。 如果在倒带操作完成后重新启动服务器但未配置恢复,则目标可能会再次与主服务器分离。 如果pg_rewind发现它无法直接写入的文件,它将立刻失败。例如当源服务器和目标服务器为只读的SSL密钥及证书使用相同的文件映射,就会发生这种情况。如果在目标服务器上存在这样的文件,推荐在运行pg_rewind之前移除它们。在做了rewind之后,一些那样的文件可能已经被从源服务器拷贝,这样就有必要移除已经拷贝的数据并且恢复到rewind之前使用的链接集合。 diff --git a/CN/modules/ROOT/pages/master/cpu_arch_adp.adoc b/CN/modules/ROOT/pages/master/cpu_arch_adp.adoc index 2807bcf..e929a54 100644 --- a/CN/modules/ROOT/pages/master/cpu_arch_adp.adoc +++ b/CN/modules/ROOT/pages/master/cpu_arch_adp.adoc @@ -4,14 +4,12 @@ = **芯片架构适配** -IvorySQL适配如下CPU架构: +IvorySQL适配认证如下CPU架构: [cols="8h,~,~,~"] |==== -| 序号 | 架构名称 | 厂商名称 | 全平台介质包下载 +| 序号 | 架构名称 | 适配品牌 | 全平台介质包下载 | 1 | x86_64 | Intel、AMD、兆芯、海光 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.amd64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.x86_64.rpm[rpm] | 2 | aarch64 | 飞腾、鲲鹏 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.arm64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.aarch64.rpm[rpm] -| 3 | mips64el| 龙芯 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.rpm[rpm] -| 4 | loongarch64 | 龙芯 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.rpm[rpm] -| 5 | ppc64le | IBM | N/A -| 6 | sw_64 | 申威 | N/A +| 3 | mips64el| 龙芯3000,龙芯4000 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.rpm[rpm] +| 4 | loongarch64 | 龙芯5000 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.rpm[rpm] |==== diff --git a/CN/modules/ROOT/pages/master/os_arch_adp.adoc b/CN/modules/ROOT/pages/master/os_arch_adp.adoc index 721269e..854fed8 100644 --- a/CN/modules/ROOT/pages/master/os_arch_adp.adoc +++ b/CN/modules/ROOT/pages/master/os_arch_adp.adoc @@ -6,13 +6,13 @@ = **操作系统适配** -IvorySQL适配如下操作系统: +IvorySQL适配认证如下操作系统: [cols="8h,16h,~,~"] |==== | 序号 | 操作系统名称 | 操作系统简介 | 证书查看 -| 1 | 银河麒麟高级服务器操作系统 V11 | 
银河麒麟高级服务器操作系统V11是麒麟软件依托多年技术研发积淀与丰富应用实践经验,为企业级关键业务量身打造的新一代服务器操作系统。产品以高可靠、高可用、高安全、高性能、高扩展为核心优势,在深度融合A 技术的基础上,更以自主创新突破构建起高速网络协议 MPTCP、热补丁管理、智能故障诊断、场景化优化及安全容器镜像仓库等关键技术体系,为党政机关信息化建设、重点行业数字化转型及国家重大工程实施,筑牢安全可信的支撑底座。 | image:kylin-v11.jpg[width=80%,link={imagesdir}/kylin-v11.jpg] +| 1 | 银河麒麟高级服务器操作系统 V11 | 银河麒麟高级服务器操作系统V11是麒麟软件依托多年技术研发积淀与丰富应用实践经验,为企业级关键业务量身打造的新一代服务器操作系统。产品以高可靠、高可用、高安全、高性能、高扩展为核心优势,在深度融合AI技术的基础上,更以自主创新突破构建起高速网络协议 MPTCP、热补丁管理、智能故障诊断、场景化优化及安全容器镜像仓库等关键技术体系,为党政机关信息化建设、重点行业数字化转型及国家重大工程实施,筑牢安全可信的支撑底座。 | image:kylin-v11.jpg[width=80%,link={imagesdir}/kylin-v11.jpg] | 2 | openKylin 2.0 SP1 | OpenAtom openKylin (简称“openKylin”) 是由开放原子开源基金会孵化及运营的开源项目,由基础软硬件企业、非营利性组织、社团组织、高等院校、科研机构和个人开发者共同创立,旨在以“为世界提供与人工智能技术深度融合的开源操作系统”为社区愿景,在开源、自愿、平等、协作的基础上,共同打造全球领先的智能桌面开源操作系统根社区,推动Linux开源技术及其软硬件生态繁荣发展。 | image:openKylin-2.0.png[width=80%,link={imagesdir}/openKylin-2.0.png] | 3 | OpenAnolis OS (龙蜥操作系统) 23 | 龙蜥操作系统 Anolis OS 23 是龙蜥社区(OpenAnolis)基于开源生态发展合作倡议,面向上游原生社区独立选型,持续演进并保障兼容性和稳定性的操作系统。Anolis OS 23 基于 Linux Kernel 6.6 LTS 的企业级操作系统,依托 ANCK 6.6 内核深度优化,全面支持海光、飞腾、龙芯(LoongArch)、兆芯等国产芯片及通用 x86_64/ARM64 架构。其针对虚拟化、安全特性及性能优化进行专项增强,并通过分层架构设计与智能调优工具,实现软硬协同性能最大化。同时原生支持 AI 生态组件,提供安全的 AI 容器镜像,加速模型开发与推理部署。在开发工具链方面,集成 GCC 12.3+/LLVM 17、Python 3.11、Rust 等,支持多平台高效开发。桌面生态方面兼容 GNOME、DDE 桌面环境,并通过集成玲珑包管理器满足了多样化场景需求实现生态扩展。Anolis OS 23 支持各类常见应用和国产化应用,助力企业实现高效、安全、可靠的数字化转型。 | image:OpenAnolis-23.jpg[width=80%,link={imagesdir}/OpenAnolis-23.jpg] -| 3 | deppin (深度操作系统) 20 | 深度操作系统是一个致力于为全球用户提供美观易用、安全可靠的Linux发行版。深度操作系统 20正式版(1002)采取统一的设计风格,从桌面环境和应用进行重新设计,带来焕然一新的视觉感受。底层仓库升级到Debian 10.5,系统安装采用双内核机制(Kernel 5.4、Kernel 5.7),全面提升系统稳定性和兼容性。全新设计的启动器菜单、指纹识别、系统安全增强等,系统部分预装应用升级到最新版本,只为给你更好体验。 | image:deepin-20.png[width=80%,link={imagesdir}/deepin-20.png] +| 4 | deppin (深度操作系统) 20 | 深度操作系统是一个致力于为全球用户提供美观易用、安全可靠的Linux发行版。深度操作系统 20正式版(1002)采取统一的设计风格,从桌面环境和应用进行重新设计,带来焕然一新的视觉感受。底层仓库升级到Debian 10.5,系统安装采用双内核机制(Kernel 5.4、Kernel 5.7),全面提升系统稳定性和兼容性。全新设计的启动器菜单、指纹识别、系统安全增强等,系统部分预装应用升级到最新版本,只为给你更好体验。 | image:deepin-20.png[width=80%,link={imagesdir}/deepin-20.png] |==== diff --git a/CN/modules/ROOT/pages/master/welcome.adoc b/CN/modules/ROOT/pages/master/welcome.adoc index 28affd8..efc8443 100644 --- a/CN/modules/ROOT/pages/master/welcome.adoc +++ b/CN/modules/ROOT/pages/master/welcome.adoc @@ -15,4 +15,4 @@ IvorySQL 项目是瀚高软件提出的一个开源项目,旨在将 Oracle 兼 IvorySQL 开源并且可以免费使用,如果您有任何建议请联系 support@ivorysql.org == 文档下载 -https://docs.ivorysql.org/cn/ivorysql-doc/v4.5/ivorysql.pdf[IvorySQL v4.5 pdf 文档] \ No newline at end of file +https://docs.ivorysql.org/cn/ivorysql-doc/v5.0/ivorysql.pdf[IvorySQL v5.0 pdf 文档] \ No newline at end of file diff --git a/EN/modules/ROOT/images/media/image10.png b/EN/modules/ROOT/images/media/image10.png new file mode 100644 index 0000000..7c3334d Binary files /dev/null and b/EN/modules/ROOT/images/media/image10.png differ diff --git a/EN/modules/ROOT/images/media/image11.png b/EN/modules/ROOT/images/media/image11.png new file mode 100644 index 0000000..56ffaab Binary files /dev/null and b/EN/modules/ROOT/images/media/image11.png differ diff --git a/EN/modules/ROOT/images/media/image12.png b/EN/modules/ROOT/images/media/image12.png new file mode 100644 index 0000000..4c6c784 Binary files /dev/null and b/EN/modules/ROOT/images/media/image12.png differ diff --git a/EN/modules/ROOT/images/media/image13.png b/EN/modules/ROOT/images/media/image13.png new file mode 100644 index 0000000..8ff015f Binary files /dev/null and b/EN/modules/ROOT/images/media/image13.png differ diff --git a/EN/modules/ROOT/images/media/image14.png 
b/EN/modules/ROOT/images/media/image14.png new file mode 100644 index 0000000..0f8ea22 Binary files /dev/null and b/EN/modules/ROOT/images/media/image14.png differ diff --git a/EN/modules/ROOT/images/media/image15.png b/EN/modules/ROOT/images/media/image15.png new file mode 100644 index 0000000..11ae1ce Binary files /dev/null and b/EN/modules/ROOT/images/media/image15.png differ diff --git a/EN/modules/ROOT/images/media/image16.png b/EN/modules/ROOT/images/media/image16.png new file mode 100644 index 0000000..a78afba Binary files /dev/null and b/EN/modules/ROOT/images/media/image16.png differ diff --git a/EN/modules/ROOT/images/media/image17.png b/EN/modules/ROOT/images/media/image17.png new file mode 100644 index 0000000..41f4178 Binary files /dev/null and b/EN/modules/ROOT/images/media/image17.png differ diff --git a/EN/modules/ROOT/images/media/image18.png b/EN/modules/ROOT/images/media/image18.png new file mode 100644 index 0000000..5eab6cd Binary files /dev/null and b/EN/modules/ROOT/images/media/image18.png differ diff --git a/EN/modules/ROOT/images/media/image19.png b/EN/modules/ROOT/images/media/image19.png new file mode 100644 index 0000000..4c05fa0 Binary files /dev/null and b/EN/modules/ROOT/images/media/image19.png differ diff --git a/EN/modules/ROOT/images/media/image20.png b/EN/modules/ROOT/images/media/image20.png new file mode 100644 index 0000000..8818128 Binary files /dev/null and b/EN/modules/ROOT/images/media/image20.png differ diff --git a/EN/modules/ROOT/images/media/image21.png b/EN/modules/ROOT/images/media/image21.png new file mode 100644 index 0000000..51d8c60 Binary files /dev/null and b/EN/modules/ROOT/images/media/image21.png differ diff --git a/EN/modules/ROOT/images/media/image22.png b/EN/modules/ROOT/images/media/image22.png new file mode 100644 index 0000000..41a6f74 Binary files /dev/null and b/EN/modules/ROOT/images/media/image22.png differ diff --git a/EN/modules/ROOT/images/media/image23.png b/EN/modules/ROOT/images/media/image23.png new file mode 100644 index 0000000..68444c1 Binary files /dev/null and b/EN/modules/ROOT/images/media/image23.png differ diff --git a/EN/modules/ROOT/images/media/image24.png b/EN/modules/ROOT/images/media/image24.png new file mode 100644 index 0000000..c63ef9b Binary files /dev/null and b/EN/modules/ROOT/images/media/image24.png differ diff --git a/EN/modules/ROOT/images/media/image25.png b/EN/modules/ROOT/images/media/image25.png new file mode 100644 index 0000000..7427fc7 Binary files /dev/null and b/EN/modules/ROOT/images/media/image25.png differ diff --git a/EN/modules/ROOT/images/media/image26.png b/EN/modules/ROOT/images/media/image26.png new file mode 100644 index 0000000..61e1007 Binary files /dev/null and b/EN/modules/ROOT/images/media/image26.png differ diff --git a/EN/modules/ROOT/images/media/image27.png b/EN/modules/ROOT/images/media/image27.png new file mode 100644 index 0000000..5dfa6fa Binary files /dev/null and b/EN/modules/ROOT/images/media/image27.png differ diff --git a/EN/modules/ROOT/images/media/image28.png b/EN/modules/ROOT/images/media/image28.png new file mode 100644 index 0000000..aa5fd09 Binary files /dev/null and b/EN/modules/ROOT/images/media/image28.png differ diff --git a/EN/modules/ROOT/images/media/image29.png b/EN/modules/ROOT/images/media/image29.png new file mode 100644 index 0000000..4e329ef Binary files /dev/null and b/EN/modules/ROOT/images/media/image29.png differ diff --git a/EN/modules/ROOT/images/media/image3.png b/EN/modules/ROOT/images/media/image3.png new file mode 100644 
index 0000000..62902e6 Binary files /dev/null and b/EN/modules/ROOT/images/media/image3.png differ diff --git a/EN/modules/ROOT/images/media/image30.png b/EN/modules/ROOT/images/media/image30.png new file mode 100644 index 0000000..c164111 Binary files /dev/null and b/EN/modules/ROOT/images/media/image30.png differ diff --git a/EN/modules/ROOT/images/media/image31.png b/EN/modules/ROOT/images/media/image31.png new file mode 100644 index 0000000..bd660a8 Binary files /dev/null and b/EN/modules/ROOT/images/media/image31.png differ diff --git a/EN/modules/ROOT/images/media/image32.png b/EN/modules/ROOT/images/media/image32.png new file mode 100644 index 0000000..510d7dc Binary files /dev/null and b/EN/modules/ROOT/images/media/image32.png differ diff --git a/EN/modules/ROOT/images/media/image33.png b/EN/modules/ROOT/images/media/image33.png new file mode 100644 index 0000000..37352d1 Binary files /dev/null and b/EN/modules/ROOT/images/media/image33.png differ diff --git a/EN/modules/ROOT/images/media/image34.png b/EN/modules/ROOT/images/media/image34.png new file mode 100644 index 0000000..f8dabee Binary files /dev/null and b/EN/modules/ROOT/images/media/image34.png differ diff --git a/EN/modules/ROOT/images/media/image35.png b/EN/modules/ROOT/images/media/image35.png new file mode 100644 index 0000000..b2c7f4a Binary files /dev/null and b/EN/modules/ROOT/images/media/image35.png differ diff --git a/EN/modules/ROOT/images/media/image36.jpeg b/EN/modules/ROOT/images/media/image36.jpeg new file mode 100644 index 0000000..4b2e013 Binary files /dev/null and b/EN/modules/ROOT/images/media/image36.jpeg differ diff --git a/EN/modules/ROOT/images/media/image37.png b/EN/modules/ROOT/images/media/image37.png new file mode 100644 index 0000000..1c7a8a0 Binary files /dev/null and b/EN/modules/ROOT/images/media/image37.png differ diff --git a/EN/modules/ROOT/images/media/image38.jpeg b/EN/modules/ROOT/images/media/image38.jpeg new file mode 100644 index 0000000..a376a08 Binary files /dev/null and b/EN/modules/ROOT/images/media/image38.jpeg differ diff --git a/EN/modules/ROOT/images/media/image39.png b/EN/modules/ROOT/images/media/image39.png new file mode 100644 index 0000000..6c209b2 Binary files /dev/null and b/EN/modules/ROOT/images/media/image39.png differ diff --git a/EN/modules/ROOT/images/media/image4.png b/EN/modules/ROOT/images/media/image4.png new file mode 100644 index 0000000..545b07c Binary files /dev/null and b/EN/modules/ROOT/images/media/image4.png differ diff --git a/EN/modules/ROOT/images/media/image40.png b/EN/modules/ROOT/images/media/image40.png new file mode 100644 index 0000000..ca5858c Binary files /dev/null and b/EN/modules/ROOT/images/media/image40.png differ diff --git a/EN/modules/ROOT/images/media/image41.png b/EN/modules/ROOT/images/media/image41.png new file mode 100644 index 0000000..ebabdbb Binary files /dev/null and b/EN/modules/ROOT/images/media/image41.png differ diff --git a/EN/modules/ROOT/images/media/image42.png b/EN/modules/ROOT/images/media/image42.png new file mode 100644 index 0000000..4e1a44f Binary files /dev/null and b/EN/modules/ROOT/images/media/image42.png differ diff --git a/EN/modules/ROOT/images/media/image43.png b/EN/modules/ROOT/images/media/image43.png new file mode 100644 index 0000000..aebc64d Binary files /dev/null and b/EN/modules/ROOT/images/media/image43.png differ diff --git a/EN/modules/ROOT/images/media/image44.png b/EN/modules/ROOT/images/media/image44.png new file mode 100644 index 0000000..80d2d14 Binary files /dev/null and 
b/EN/modules/ROOT/images/media/image44.png differ diff --git a/EN/modules/ROOT/images/media/image45.jpeg b/EN/modules/ROOT/images/media/image45.jpeg new file mode 100644 index 0000000..53bd76d Binary files /dev/null and b/EN/modules/ROOT/images/media/image45.jpeg differ diff --git a/EN/modules/ROOT/images/media/image46.png b/EN/modules/ROOT/images/media/image46.png new file mode 100644 index 0000000..8c990ed Binary files /dev/null and b/EN/modules/ROOT/images/media/image46.png differ diff --git a/EN/modules/ROOT/images/media/image47.png b/EN/modules/ROOT/images/media/image47.png new file mode 100644 index 0000000..4b3354c Binary files /dev/null and b/EN/modules/ROOT/images/media/image47.png differ diff --git a/EN/modules/ROOT/images/media/image5.png b/EN/modules/ROOT/images/media/image5.png new file mode 100644 index 0000000..6aaf302 Binary files /dev/null and b/EN/modules/ROOT/images/media/image5.png differ diff --git a/EN/modules/ROOT/images/media/image6.png b/EN/modules/ROOT/images/media/image6.png new file mode 100644 index 0000000..7f11ad1 Binary files /dev/null and b/EN/modules/ROOT/images/media/image6.png differ diff --git a/EN/modules/ROOT/images/media/image7.png b/EN/modules/ROOT/images/media/image7.png new file mode 100644 index 0000000..10641ea Binary files /dev/null and b/EN/modules/ROOT/images/media/image7.png differ diff --git a/EN/modules/ROOT/images/media/image8.png b/EN/modules/ROOT/images/media/image8.png new file mode 100644 index 0000000..0038af2 Binary files /dev/null and b/EN/modules/ROOT/images/media/image8.png differ diff --git a/EN/modules/ROOT/images/media/image9.png b/EN/modules/ROOT/images/media/image9.png new file mode 100644 index 0000000..b779b1e Binary files /dev/null and b/EN/modules/ROOT/images/media/image9.png differ diff --git a/EN/modules/ROOT/nav.adoc b/EN/modules/ROOT/nav.adoc index 10273cf..743a06a 100644 --- a/EN/modules/ROOT/nav.adoc +++ b/EN/modules/ROOT/nav.adoc @@ -7,10 +7,18 @@ ** xref:master/3.3.adoc[Maintenance] * IvorySQL Advanced Feature ** xref:master/4.1.adoc[Installation] -** xref:master/4.2.adoc[Building Cluster] +** xref:master/4.2.adoc[Cluster] +** xref:master/4.5.adoc[Migration] ** xref:master/4.3.adoc[Developer] +** Containerization +*** xref:master/4.6.1.adoc[K8S deployment] +*** xref:master/4.6.2.adoc[Operator deployment] +*** xref:master/4.6.4.adoc[Docker & Podman deployment] +*** xref:master/4.6.3.adoc[Docker Swarm & Docker Compose deployment] ** xref:master/4.4.adoc[Operation Management] -** xref:master/4.5.adoc[Migration] +** Cloud Service Platform +*** xref:master/4.7.1.adoc[IvorySQL Cloud Installation] +*** xref:master/4.7.2.adoc[IvorySQL Cloud Usage] * IvorySQL Ecosystem ** xref:master/cpu_arch_adp.adoc[CPU Architecture Adaption] ** xref:master/os_arch_adp.adoc[Operating System Adaption] @@ -30,6 +38,9 @@ ** Query Processing *** xref:master/6.1.1.adoc[Dual Parser] ** Compatibility Framework +*** xref:master/7.1.adoc[Ivorysql frame design] +*** xref:master/7.2.adoc[GUC Framework] +*** xref:master/7.4.adoc[Dual-mode design] *** xref:master/6.2.1.adoc[initdb Process] ** Compatibility Features *** xref:master/6.3.1.adoc[like] @@ -49,28 +60,25 @@ *** xref:master/6.4.2.adoc[userenv] ** xref:master/6.5.adoc[GB18030 Character Set] * List of Oracle compatible features -** xref:master/7.1.adoc[1、Ivorysql frame design] -** xref:master/7.2.adoc[2、GUC Framework] -** xref:master/7.3.adoc[3、Case conversion] -** xref:master/7.4.adoc[4、Dual-mode design] -** xref:master/7.5.adoc[5、Compatible with Oracle like] -** 
xref:master/7.6.adoc[6、Compatible with Oracle anonymous block] -** xref:master/7.7.adoc[7、Compatible with Oracle functions and stored procedures] -** xref:master/7.8.adoc[8、Built-in data types and built-in functions] -** xref:master/7.9.adoc[9、Added Oracle compatibility mode ports and IP] -** xref:master/7.10.adoc[10、XML Function] -** xref:master/7.11.adoc[11、Compatible with Oracle sequence] -** xref:master/7.12.adoc[12、Package] -** xref:master/7.13.adoc[13、Invisible Columns] -** xref:master/7.14.adoc[14、RowID Column] -** xref:master/7.15.adoc[15、OUT Parameter] -** xref:master/7.16.adoc[16、%Type & %Rowtype] -** xref:master/7.17.adoc[17、NLS Parameters] -** xref:master/7.18.adoc[18、Force View] -** xref:master/7.19.adoc[19、Nested Subfunctions] -** xref:master/7.20.adoc[20、sys_guid Function] -** xref:master/7.21.adoc[21、Empty String to NULL] -** xref:master/7.22.adoc[22、CALL INTO] +** xref:master/7.3.adoc[1、Case conversion] +** xref:master/7.5.adoc[2、LIKE operator] +** xref:master/7.6.adoc[3、anonymous block] +** xref:master/7.7.adoc[4、functions and stored procedures] +** xref:master/7.8.adoc[5、Built-in data types and built-in functions] +** xref:master/7.9.adoc[6、ports and IP] +** xref:master/7.10.adoc[7、XML Function] +** xref:master/7.11.adoc[8、sequence] +** xref:master/7.12.adoc[9、Package] +** xref:master/7.13.adoc[10、Invisible Columns] +** xref:master/7.14.adoc[11、RowID Column] +** xref:master/7.15.adoc[12、OUT Parameter] +** xref:master/7.16.adoc[13、%Type & %Rowtype] +** xref:master/7.17.adoc[14、NLS Parameters] +** xref:master/7.18.adoc[15、Force View] +** xref:master/7.19.adoc[16、Nested Subfunctions] +** xref:master/7.20.adoc[17、sys_guid Function] +** xref:master/7.21.adoc[18、Empty String to NULL] +** xref:master/7.22.adoc[19、CALL INTO] * xref:master/8.adoc[Community contribution] * xref:master/9.adoc[Tool Reference] * xref:master/10.adoc[FAQ] diff --git a/EN/modules/ROOT/pages/master/1.adoc b/EN/modules/ROOT/pages/master/1.adoc index 02cdd83..695b03f 100644 --- a/EN/modules/ROOT/pages/master/1.adoc +++ b/EN/modules/ROOT/pages/master/1.adoc @@ -5,72 +5,201 @@ == Version Overview -[**Release date: June 04, 2025**] +[*Release Date: Nov 25, 2025*] -IvorySQL 4.5, based on PostgreSQL 17.5 and includes a variety of bug fixes. For a comprehensive list of updates, please visit our https://docs.ivorysql.org/[documentation site]. +IvorySQL 5.0, based on PostgreSQL 18.0, introduces significant Oracle compatibility improvements, PL/iSQL enhancements, and new globalization capabilities while refreshing packaging, automation, and tooling. For a comprehensive list of updates, please visit our https://docs.ivorysql.org/[documentation site]. -== Enhancements & Fixed Issue +== Enhancements -- PostgreSQL 17.5 Enhancements +- PostgreSQL 18.0 -1. Avoid one-byte buffer overread when examining invalidly-encoded strings that are claimed to be in GB18030 encoding. -2. Handle self-referential foreign keys on partitioned tables correctly. -3. Avoid data loss when merging compressed BRIN summaries in brin_bloom_union(). -4. Correctly process references to outer CTE names that appear within a WITH clause attached to an INSERT/UPDATE/DELETE/MERGE command that's inside WITH. -5. Fix ALTER TABLE ADD COLUMN to correctly handle the case of a domain type that has a default. +1. An asynchronous I/O (AIO) subsystem that can improve performance of sequential scans, bitmap heap scans, vacuums, and other operations. +2. pg_upgrade now retains optimizer statistics. +3. 
Support for "skip scan" lookups that allow using multicolumn B-tree indexes in more cases. +4. uuidv7() function for generating timestamp-ordered UUIDs. +5. Virtual generated columns that compute their values during read operations. This is now the default for generated columns. +6. OAuth authentication support. +7. OLD and NEW support for RETURNING clauses in INSERT, UPDATE, DELETE, and MERGE commands. +8. Temporal constraints, or constraints over ranges, for PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints. -+ +For further details, visit https://www.postgresql.org/docs/release/18.0/[PostgreSQL’s release notes]. -For further details, visit https://www.postgresql.org/docs/release/17.5/[PostgreSQL’s release notes]. +== New Features +=== 21 New Oracle Compatibility Features -- IvorySQL 4.5 +- Oracle-compatible ROWID support: Feature https://github.com/IvorySQL/IvorySQL/issues/126[#126] + + Ensures IvorySQL row identifiers align with Oracle semantics for seamless cross-database tooling. -1. MIPS Packaging for All Platforms: Feature https://github.com/IvorySQL/IvorySQL/issues/736[#736] -+ -Provides multi-platform media packages for MIPS architecture, supporting both domestic and international mainstream operating systems, including Red Hat, Debian, Kylin, UOS, and NSAR OS, etc. +- PL/iSQL CALL invocation syntax: Feature https://github.com/IvorySQL/IvorySQL/issues/764[#764] + + Adds the Oracle-style `CALL` entry point so stored procedures can be invoked consistently. -2. IvorySQL Online trail: Feature https://github.com/IvorySQL/ivorysql-wasm/issues/1[#1] -+ -Provide users with a web-based platform to experience IvorySQL V4.5 in an online environment, enabling database interaction directly through a browser interface. +- PL/iSQL `%ROWTYPE` support: Feature https://github.com/IvorySQL/IvorySQL/issues/765[#765] + + Allows variables to mirror entire table or cursor rows for concise PL/iSQL coding. -3. Add code of conduct: Feature https://github.com/IvorySQL/IvorySQL/issues/808[#808] +- PL/iSQL `%TYPE` support: Feature https://github.com/IvorySQL/IvorySQL/issues/766[#766] + + Enables variables to adopt the data type of existing columns or variables to reduce drift. -4. Update the community contribution guide: Feature https://github.com/IvorySQL/ivorysql_docs/pull/121[#121] +- Case-sensitive compatibility switch: Feature https://github.com/IvorySQL/IvorySQL/issues/767[#767] + + Preserves identifier case to match Oracle behavior when required. -5. Automate Documentation Build and Website Update via Pull Requests: Feature https://github.com/IvorySQL/ivorysql_docs/issues/115[#115] +- NLS parameter compatibility: Feature https://github.com/IvorySQL/IvorySQL/issues/768[#768] + + Honors Oracle-style NLS settings such as `NLS_DATE_FORMAT` and `NLS_TIMESTAMP_FORMAT`. -6. Enhanced Contributor Workflow: Self-Assign Issues by using the '/assign' command: Feature https://github.com/IvorySQL/ivorysql_docs/issues/109[#109] +- Empty string to NULL translation: Feature https://github.com/IvorySQL/IvorySQL/issues/769[#769] + + Converts zero-length strings to NULL to match Oracle compatibility rules. -7. IvorySQL Operator V4 has been adapted to support IvorySQL 4.5, with upgrades to system component versions and database extension versions : Feature https://github.com/IvorySQL/ivory-operator/pull/79[#79] +- Parser switching capability: Feature https://github.com/IvorySQL/IvorySQL/issues/770[#770] + + Adds toggles between Oracle and PostgreSQL parsers for per-session flexibility. 
+ +- GB18030 database encoding: Feature https://github.com/IvorySQL/IvorySQL/issues/771[#771] + + Provides GB18030 initialization and create-database options for full Chinese market coverage. + +- Oracle-compatible `SYS_GUID`: Feature https://github.com/IvorySQL/IvorySQL/issues/773[#773] + + Implements the Oracle `SYS_GUID` function to generate RAW-based GUIDs. + +- Oracle-compatible `SYS_CONTEXT`: Feature https://github.com/IvorySQL/IvorySQL/issues/774[#774] + + Delivers the Oracle `SYS_CONTEXT` API for querying session and environment metadata. + +- Oracle-compatible `USERENV`: Feature https://github.com/IvorySQL/IvorySQL/issues/775[#775] + + Adds the `USERENV` function so sessions can inspect Oracle-style contextual details. + +- Oracle-compatible function syntax: Feature https://github.com/IvorySQL/IvorySQL/issues/776[#776] + + Supports Oracle constructs such as EDITIONABLE/NONEDITIONABLE, `RETURN`, `IS`, and `OUT ... NOCOPY` options. + +- Oracle-compatible procedure syntax: Feature https://github.com/IvorySQL/IvorySQL/issues/777[#777] + + Enables procedure DDL with Oracle options, EXEC invocation, and ALTER PROCEDURE support. + +- libpq OUT parameter plumbing: Feature https://github.com/IvorySQL/IvorySQL/issues/778[#778] + + Extends client protocol handling so OUT parameters can be consumed like OCI. + +- Procedure OUT parameter support: Feature https://github.com/IvorySQL/IvorySQL/issues/779[#779] + + Allows stored procedures to declare IN, OUT, and IN OUT modes per Oracle conventions. + +- Function OUT parameter support: Feature https://github.com/IvorySQL/IvorySQL/issues/780[#780] + + Permits Oracle-style OUT parameters within functions, including IN OUT combinations. + +- Nested subprograms: Feature https://github.com/IvorySQL/IvorySQL/issues/781[#781] + + Introduces support for declaring functions or procedures within other subprograms, including overloading. + +- Oracle-compatible `INSTR`: Feature https://github.com/IvorySQL/IvorySQL/issues/782[#782] + + Matches Oracle `INSTR` behavior for substring searches and position checks. + +- Oracle-compatible FORCE VIEW: Feature https://github.com/IvorySQL/IvorySQL/issues/783[#783] + + Lets views be created even when referenced objects do not yet exist, mirroring Oracle's FORCE option. + +- Oracle-compatible LIKE operator: Feature https://github.com/IvorySQL/IvorySQL/issues/784[#784] + + Aligns pattern-matching semantics with Oracle for predictable wildcard behavior. + +=== Online Trial and Multi-Platform Distribution Packages + +- Online Experience: IvorySQL v5.0: Feature https://github.com/IvorySQL/IvorySQL/issues/887[#887] + + An interactive, browser-based environment will be launched to allow users to explore and evaluate IvorySQL v5.0 in real time, with no installation required. + +- Packaging for All Platforms: Feature https://github.com/IvorySQL/IvorySQL/issues/949[#949] + + Provides multi-platform media packages for the x86, ARM, MIPS, and LoongArch architectures. + +=== Cloud-Native & Containerized Deployment + +- Containerized Deployment Support (Docker Compose & Docker Swarm): + Supports deployment of standalone IvorySQL databases and high-availability clusters in Docker Swarm and Docker Compose environments. + +- Containerized Deployment Support (Kubernetes): + Supports deployment of standalone IvorySQL databases and high-availability clusters on Kubernetes (K8S) using Helm. 
+ +- IvorySQL Operator v5 released (Kubernetes): + The IvorySQL Operator v5 has been adapted to support IvorySQL v5.0, with upgrades to system component versions and database extension versions. + +- IvorySQL Cloud v5 released (Unified Lifecycle & Visual Control Plane): + Offers a fully managed, graphical control plane that handles IvorySQL v5 database subscriptions, orchestrates end-to-end lifecycle operations, and integrates surrounding ecosystem services. + +=== Support for 10 Additional PostgreSQL Extensions + +- pg_cron: Feature https://github.com/IvorySQL/IvorySQL/issues/882[#882] + + Scheduled job execution within the database layer will be available through pg_cron integration. + +- pgAudit: Feature https://github.com/IvorySQL/IvorySQL/issues/929[#929] + + Provides detailed session and/or object audit logging via the standard PostgreSQL logging facility. + +- PostGIS: Feature https://github.com/IvorySQL/IvorySQL/issues/880[#880] + + Spatial data processing and geospatial analytics will be enabled through PostGIS compatibility. + +- pgRouting: Feature https://github.com/IvorySQL/IvorySQL/issues/881[#881] + + Network and routing analysis features will be introduced with pgRouting support. + +- PGroonga: Feature https://github.com/IvorySQL/IvorySQL/issues/879[#879] + + Full-text search capabilities will be enhanced via planned PGroonga support. + +- ddlx: Feature https://github.com/IvorySQL/IvorySQL/issues/877[#877] + + Support for ddlx to enable advanced schema introspection and automated DDL generation. + +- pgsql-http: Feature https://github.com/IvorySQL/IvorySQL/issues/883[#883] + + Allow the database to initiate HTTP/HTTPS requests internally, enabling seamless communication between the database and external web services. + +- system_stats: Feature https://github.com/IvorySQL/IvorySQL/issues/946[#946] + + System level statistics will be provided by system_stats support. + +- plpgsql_check: Feature https://github.com/IvorySQL/IvorySQL/issues/915[#915] + + Static code analysis on PL/pgSQL functions to identify errors, warnings, and potential issues before runtime execution + +- pgvector: Feature https://github.com/IvorySQL/IvorySQL/issues/878[#878] + + Integration with pgvector to empower AI/ML workloads through native vector similarity search. 
+ +== Fixed Issues + +- Repaired `unused_oids` and `duplicate_oids` catalog tooling so header scans correctly detect conflicts without false positives: Issue https://github.com/IvorySQL/IvorySQL/issues/841[#841] +- Added `.gitignore` coverage for `libpq/ivytest` artifacts to prevent generated binaries and logs from polluting developer trees: Issue https://github.com/IvorySQL/IvorySQL/issues/843[#843] +- Extended GitHub workflow regression runs to cover builds configured with `--with-libnuma`, preventing future breakages on NUMA-enabled hosts: Issue https://github.com/IvorySQL/IvorySQL/issues/869[#869] +- Enabled `psql` users to access CREATE PACKAGE syntax help via `\h create package`, closing the CLI documentation gap for PL/iSQL packages: Issue https://github.com/IvorySQL/IvorySQL/issues/936[#936] +- Eliminated the MainLoop dangling-pointer scenario that triggered intermittent segmentation faults under concurrency stress: Issue https://github.com/IvorySQL/IvorySQL/issues/898[#898] +- Re-enabled `oracle_test/modules/*/sql` cases by fixing harness assumptions so Oracle-compatibility suites execute end-to-end again: Issue https://github.com/IvorySQL/IvorySQL/issues/897[#897] +- Updated `README.md` and `README_CN.md` to reflect IvorySQL v5 feature set, packaging, and onboarding instructions: Issue https://github.com/IvorySQL/IvorySQL/issues/896[#896] +- Corrected globally unique index enforcement so related regression tests now pass reliably across supported platforms: Issue https://github.com/IvorySQL/IvorySQL/issues/894[#894] == Source Code -IvorySQL's development is maintained across two main repositories: +IvorySQL's development is maintained across four main repositories: -* IvorySQL database source code: https://github.com/IvorySQL/IvorySQL -* IvorySQL official website: https://github.com/IvorySQL/Ivory-www +- IvorySQL database source code: https://github.com/IvorySQL/IvorySQL +- IvorySQL official website: https://github.com/IvorySQL/Ivory-www +- IvorySQL documentation: https://github.com/IvorySQL/IvorySQL-docs +- IvorySQL Docker: https://github.com/IvorySQL/docker_library == Contributors The following individuals (in alphabetical order) have contributed to this release as patch authors, committers, reviewers, testers, or reporters of issues. -* Cary Huang -* Denis Lussier +* Carlos Chong +* ccwxl +* Cédric Villemain +* elodiefb * Fawei Zhao -* Flyingbeecd * Ge Sui * Grant Zhou -* Hulin Ji -* Hope Gao -* Lily Wang -* Renli Zou +* Imran Zaheer +* jerome-peng +* Jiaoshun Tian +* luss +* Martin Gerhardy +* msdnchina +* omstack +* otegami +* rophy +* Shaolin Chu * Shawn Yan * Shihua Yang * Shiji Niu -* Shoubo Wang -* Shuntian Jiao +* Shuisen Tong +* shlei6067 +* sjw1933 * Xiangyu Liang +* Xiaohui Liu * Xinjie Lv +* xuexiaoganghs +* Xueyu Gao +* yangchunwanwusheng +* Yanliang Lei +* Yasir Hussain Shah +* Yuan Li * Zheng Tao * Zhenhao Pan * Zhuoyan Shi \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/2.adoc b/EN/modules/ROOT/pages/master/2.adoc index 4a45da3..bca8d38 100644 --- a/EN/modules/ROOT/pages/master/2.adoc +++ b/EN/modules/ROOT/pages/master/2.adoc @@ -63,16 +63,22 @@ IvorySQL is a powerful open source object-relational database management system == Compatibility with Oracle -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/14[1. Ivorysql frame design] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/15[2. GUC Framework] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/16[3. 
Case conversion] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/17[4. Dual-mode design] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/18[5. Compatible with Oracle like] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/19[6. Compatible with Oracle anonymous block] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/20[7. Compatible with Oracle functions and stored procedures] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/21[8. Built-in data types and built-in functions] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/22[9. Added Oracle compatibility mode ports and IP] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/26[10. XML Function] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/27[11. Compatible with Oracle sequence] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/28[12. Package] -* https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/29[13. Invisible Columns] \ No newline at end of file +* Case conversion +* LIKE operator +* anonymous block +* functions and stored procedures +* Built-in data types and built-in functions +* ports and IP +* XML Function +* sequence +* Package +* Invisible Columns +* RowID Column +* OUT Parameter +* %Type & %Rowtype +* NLS Parameters +* Force View +* Nested Subfunctions +* sys_guid Function +* Empty String to NULL +* CALL INTO \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/3.1.adoc b/EN/modules/ROOT/pages/master/3.1.adoc index 12d90f2..8a1c16e 100644 --- a/EN/modules/ROOT/pages/master/3.1.adoc +++ b/EN/modules/ROOT/pages/master/3.1.adoc @@ -43,15 +43,15 @@ https://www.ionos.com/help/server-cloud-infrastructure/server-administration/cre Create or edit IvorySQL yum repository configuration /etc/yum.repos.d/ivorysql.repo ``` vim /etc/yum.repos.d/ivorysql.repo -[ivorysql4] -name=IvorySQL Server 4 $releasever - $basearch -baseurl=https://yum.highgo.com/dists/ivorysql-rpms/4/redhat/rhel-$releasever-$basearch +[ivorysql5] +name=IvorySQL Server 5 $releasever - $basearch +baseurl=https://yum.highgo.com/dists/ivorysql-rpms/5/redhat/rhel-$releasever-$basearch enabled=1 gpgcheck=0 ``` After saving and exiting, you can install IvorySQL 4 with the following steps ``` -$ sudo dnf install -y IvorySQL-4.5 +$ sudo dnf install -y ivorysql5-5.0 ``` [[setting-environment-variables]] @@ -61,9 +61,9 @@ $ sudo dnf install -y IvorySQL-4.5 Add below contents in ~/.bash_profile file and source to make it effective: ``` -PATH=/opt/IvorySQL-4.5/bin:$PATH +PATH=/usr/ivory-5/bin:$PATH export PATH -PGDATA=/opt/IvorySQL-4.5/data +PGDATA=/usr/ivory-5/data export PGDATA ``` ``` @@ -73,7 +73,7 @@ $ source ~/.bash_profile ** Initializing database ``` -$ initdb -D /opt/IvorySQL-4.5/data +$ initdb -D /usr/ivory-5/data ``` .... The -D option specifies the directory where the database cluster should be stored. This is the only information required by initdb, but you can avoid writing it by setting the PGDATA environment variable, which can be convenient since the database server can find the database directory later by the same variable. @@ -84,7 +84,7 @@ $ initdb -D /opt/IvorySQL-4.5/data ** Starting IvorySQL service ``` -$ pg_ctl -D /opt/IvorySQL-4.5/data -l ivory.log start +$ pg_ctl -D /usr/ivory-5/data -l ivory.log start ``` The -D option specifies the file system location of the database configuration files. If this option is omitted, the environment variable PGDATA in <> is used. -l option appends the server log output to filename. If the file does not exist, it is created. 
@@ -95,7 +95,7 @@ $ pg_ctl -D /opt/IvorySQL-4.5/data -l ivory.log start Confirm it’s successfully started: ``` $ ps -ef | grep postgres -ivorysql 3214 1 0 20:35 ? 00:00:00 /opt/IvorySQL-4.5/bin/postgres -D /opt/IvorySQL-4.5/data +ivorysql 3214 1 0 20:35 ? 00:00:00 /usr/ivory-5/bin/postgres -D /usr/ivory-5/data ivorysql 3215 3214 0 20:35 ? 00:00:00 postgres: checkpointer ivorysql 3216 3214 0 20:35 ? 00:00:00 postgres: background writer ivorysql 3218 3214 0 20:35 ? 00:00:00 postgres: walwriter @@ -108,19 +108,19 @@ ivorysql 3238 1551 0 20:35 pts/0 00:00:00 grep --color=auto postgres ** Get IvorySQL image from Docker Hub ``` -$ docker pull ivorysql/ivorysql:4.5-ubi8 +$ docker pull ivorysql/ivorysql:5.0-ubi8 ``` ** Running IvorySQL ``` -$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:4.5-ubi8 +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 ``` ** Check if the IvorySQL container is running successfully ``` $ docker ps | grep ivorysql CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -6faa2d0ed705 ivorysql:4.5-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql +6faa2d0ed705 ivorysql:5.0-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql ``` == Connecting to IvorySQL @@ -128,7 +128,7 @@ CONTAINER ID IMAGE COMMAND CREATED ST Connect to IovrySQL via psql: ``` $ psql -d -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# @@ -143,4 +143,4 @@ TIP: When running IvorySQL in Docker, additional parameters need to be added, li Now you can start your journey of IvorySQL! Enjoy! -To explore additional installation methods, please refer to the xref:v4.5/6.adoc[Installation]. \ No newline at end of file +To explore additional installation methods, please refer to the xref:v5.0/6.adoc[Installation]. \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/3.2.adoc b/EN/modules/ROOT/pages/master/3.2.adoc index cabb53b..789859c 100644 --- a/EN/modules/ROOT/pages/master/3.2.adoc +++ b/EN/modules/ROOT/pages/master/3.2.adoc @@ -150,29 +150,29 @@ The `pg_stat_activity` view will have one row per server process, showing inform .**`pg_stat_activity` View** |==== -|Column TypeDescription -| `datid` `oid`OID of the database this backend is connected to -| `datname` `name`Name of the database this backend is connected to -| `pid` `integer`Process ID of this backend -| `leader_pid` `integer`Process ID of the parallel group leader, if this process is a parallel query worker. `NULL` if this process is a parallel group leader or does not participate in parallel query. -| `usesysid` `oid`OID of the user logged into this backend -| `usename` `name`Name of the user logged into this backend -| `application_name` `text`Name of the application that is connected to this backend -| `client_addr` `inet`IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum. -| `client_hostname` `text`Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-HOSTNAME[log_hostname] is enabled. 
-| `client_port` `integer`TCP port number that the client is using for communication with this backend, or `-1` if a Unix socket is used. If this field is null, it indicates that this is an internal server process. -| `backend_start` `timestamp with time zone`Time when this process was started. For client backends, this is the time the client connected to the server. -| `xact_start` `timestamp with time zone`Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the `query_start` column. -| `query_start` `timestamp with time zone`Time when the currently active query was started, or if `state` is not `active`, when the last query was started -| `state_change` `timestamp with time zone`Time when the `state` was last changed -| `wait_event_type` `text`The type of event for which the backend is waiting, if any; otherwise NULL. -| `wait_event` `text`Wait event name if backend is currently waiting, otherwise NULL. -| `state` `text`Current overall state of this backend. Possible values are:`active`: The backend is executing a query.`idle`: The backend is waiting for a new client command.`idle in transaction`: The backend is in a transaction, but is not currently executing a query.`idle in transaction (aborted)`: This state is similar to `idle in transaction`, except one of the statements in the transaction caused an error.`fastpath function call`: The backend is executing a fast-path function.`disabled`: This state is reported if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES[track_activities] is disabled in this backend. -| `backend_xid` `xid`Top-level transaction identifier of this backend, if any. -| `backend_xmin` `xid`The current backend's `xmin` horizon. -| `query_id` `bigint`Identifier of this backend's most recent query. If `state` is `active` this field shows the identifier of the currently executing query. In all other states, it shows the identifier of last query that was executed. Query identifiers are not computed by default so this field will be null unless https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-COMPUTE-QUERY-ID[compute_query_id] parameter is enabled or a third-party module that computes query identifiers is configured. -| `query` `text`Text of this backend's most recent query. If `state` is `active` this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE[track_activity_query_size]. -| `backend_type` `text`Type of current backend. Possible types are `autovacuum launcher`, `autovacuum worker`, `logical replication launcher`, `logical replication worker`, `parallel worker`, `background writer`, `client backend`, `checkpointer`, `archiver`, `startup`, `walreceiver`, `walsender` and `walwriter`. In addition, background workers registered by extensions may have additional types. +| Column | Type | Description | +| `datid` | `oid` | OID of the database this backend is connected to | +| `datname` | `name` | Name of the database this backend is connected to | +| `pid` | `integer` | Process ID of this backend | +| `leader_pid` | `integer` | Process ID of the parallel group leader, if this process is a parallel query worker. 
`NULL` if this process is a parallel group leader or does not participate in parallel query. | +| `usesysid` | `oid` | OID of the user logged into this backend | +| `usename` | `name` | Name of the user logged into this backend | +| `application_name` | `text` | Name of the application that is connected to this backend | +| `client_addr` | `inet` | IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum. | +| `client_hostname` | `text` | Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when log_hostname is enabled. | +| `client_port` | `integer` | TCP port number that the client is using for communication with this backend, or `-1` if a Unix socket is used. If this field is null, it indicates that this is an internal server process. | +| `backend_start` | `timestamp with time zone` | Time when this process was started. For client backends, this is the time the client connected to the server. | +| `xact_start` | `timestamp with time zone` | Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the `query_start` column. | +| `query_start` | `timestamp with time zone` | Time when the currently active query was started, or if `state` is not `active`, when the last query was started | +| `state_change` | `timestamp with time zone` | Time when the `state` was last changed | +| `wait_event_type` | `text` | The type of event for which the backend is waiting, if any; otherwise NULL. | +| `wait_event` | `text` | Wait event name if backend is currently waiting, otherwise NULL. | +| `state` | `text` | Current overall state of this backend. Possible values are: `active`: The backend is executing a query. `idle`: The backend is waiting for a new client command. `idle in transaction`: The backend is in a transaction, but is not currently executing a query. `idle in transaction (aborted)`: This state is similar to `idle in transaction`, except one of the statements in the transaction caused an error. `fastpath function call`: The backend is executing a fast-path function. `disabled`: This state is reported if track_activities is disabled in this backend. | +| `backend_xid` | `xid` | Top-level transaction identifier of this backend, if any. | +| `backend_xmin` | `xid` | The current backend's `xmin` horizon. | +| `query_id` | `bigint` | Identifier of this backend's most recent query. If `state` is `active` this field shows the identifier of the currently executing query. In all other states, it shows the identifier of last query that was executed. Query identifiers are not computed by default so this field will be null unless compute_query_id parameter is enabled or a third-party module that computes query identifiers is configured. | +| `query` | `text` | Text of this backend's most recent query. If `state` is `active` this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter track_activity_query_size. | +| `backend_type` | `text` | Type of current backend. 
Possible types are `autovacuum launcher`, `autovacuum worker`, `logical replication launcher`, `logical replication worker`, `parallel worker`, `background writer`, `client backend`, `checkpointer`, `archiver`, `startup`, `walreceiver`, `walsender` and `walwriter`. In addition, background workers registered by extensions may have additional types. | |==== .Note @@ -496,27 +496,27 @@ The `pg_stat_replication` view will contain one row per WAL sender process, show .**`pg_stat_replication` View** |==== -|Column TypeDescription -| `pid` `integer`Process ID of a WAL sender process -| `usesysid` `oid`OID of the user logged into this WAL sender process -| `usename` `name`Name of the user logged into this WAL sender process -| `application_name` `text`Name of the application that is connected to this WAL sender -| `client_addr` `inet`IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine. -| `client_hostname` `text`Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-HOSTNAME[log_hostname] is enabled. -| `client_port` `integer`TCP port number that the client is using for communication with this WAL sender, or `-1` if a Unix socket is used -| `backend_start` `timestamp with time zone`Time when this process was started, i.e., when the client connected to this WAL sender -| `backend_xmin` `xid`This standby's `xmin` horizon reported by https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK[hot_standby_feedback]. -| `state` `text`Current WAL sender state. Possible values are:`startup`: This WAL sender is starting up.`catchup`: This WAL sender's connected standby is catching up with the primary.`streaming`: This WAL sender is streaming changes after its connected standby server has caught up with the primary.`backup`: This WAL sender is sending a backup.`stopping`: This WAL sender is stopping. -| `sent_lsn` `pg_lsn`Last write-ahead log location sent on this connection -| `write_lsn` `pg_lsn`Last write-ahead log location written to disk by this standby server -| `flush_lsn` `pg_lsn`Last write-ahead log location flushed to disk by this standby server -| `replay_lsn` `pg_lsn`Last write-ahead log location replayed into the database on this standby server -| `write_lag` `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that `synchronous_commit` level `remote_write` incurred while committing if this server was configured as a synchronous standby. -| `flush_lag` `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that `synchronous_commit` level `on` incurred while committing if this server was configured as a synchronous standby. -| `replay_lag` `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that `synchronous_commit` level `remote_apply` incurred while committing if this server was configured as a synchronous standby. 
-| `sync_priority` `integer`Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication. -| `sync_state` `text`Synchronous state of this standby server. Possible values are:`async`: This standby server is asynchronous.`potential`: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails.`sync`: This standby server is synchronous.`quorum`: This standby server is considered as a candidate for quorum standbys. -| `reply_time` `timestamp with time zone`Send time of last reply message received from standby server +| Column | Type | Description +| `pid` | `integer` | Process ID of a WAL sender process +| `usesysid` | `oid` | OID of the user logged into this WAL sender process +| `usename` | `name` | Name of the user logged into this WAL sender process +| `application_name` | `text` | Name of the application that is connected to this WAL sender +| `client_addr` | `inet` | IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine. +| `client_hostname` | `text` | Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when log_hostname is enabled. +| `client_port` | `integer` | TCP port number that the client is using for communication with this WAL sender, or `-1` if a Unix socket is used +| `backend_start` | `timestamp with time zone` | Time when this process was started, i.e., when the client connected to this WAL sender +| `backend_xmin` | `xid` | This standby's `xmin` horizon reported by hot_standby_feedback. +| `state` | `text` | Current WAL sender state. Possible values are: `startup`: This WAL sender is starting up. `catchup`: This WAL sender's connected standby is catching up with the primary. `streaming`: This WAL sender is streaming changes after its connected standby server has caught up with the primary. `backup`: This WAL sender is sending a backup. `stopping`: This WAL sender is stopping. +| `sent_lsn` | `pg_lsn` | Last write-ahead log location sent on this connection +| `write_lsn` | `pg_lsn` | Last write-ahead log location written to disk by this standby server +| `flush_lsn` | `pg_lsn` | Last write-ahead log location flushed to disk by this standby server +| `replay_lsn` | `pg_lsn` | Last write-ahead log location replayed into the database on this standby server +| `write_lag` | `interval` | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that `synchronous_commit` level `remote_write` incurred while committing if this server was configured as a synchronous standby. +| `flush_lag` | `interval` | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that `synchronous_commit` level `on` incurred while committing if this server was configured as a synchronous standby. +| `replay_lag` | `interval` | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. 
This can be used to gauge the delay that `synchronous_commit` level `remote_apply` incurred while committing if this server was configured as a synchronous standby. +| `sync_priority` | `integer` | Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication. +| `sync_state` | `text` | Synchronous state of this standby server. Possible values are: `async`: This standby server is asynchronous. `potential`: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails. `sync`: This standby server is synchronous. `quorum`: This standby server is considered as a candidate for quorum standbys. +| `reply_time` | `timestamp with time zone` | Send time of last reply message received from standby server |==== The lag times reported in the `pg_stat_replication` view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the `replay_lag` column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL. @@ -534,17 +534,17 @@ The `pg_stat_replication_slots` view will contain one row per logical replicatio .**`pg_stat_replication_slots` View** |==== -| Column TypeDescription -| `slot_name` `text`A unique, cluster-wide identifier for the replication slot -| `spill_txns` `bigint`Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded `logical_decoding_work_mem`. The counter gets incremented for both top-level transactions and subtransactions. -| `spill_count` `bigint`Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times. -| `spill_bytes` `bigint`Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning `logical_decoding_work_mem`. -| `stream_txns` `bigint`Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded `logical_decoding_work_mem`. Streaming only works with top-level transactions (subtransactions can't be streamed independently), so the counter is not incremented for subtransactions. -| `stream_count``bigint`Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times. -| `stream_bytes``bigint`Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune `logical_decoding_work_mem`. 
-| `total_txns` `bigint`Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled. -| `total_bytes``bigint`Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled. -| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset +| Column | Type | Description +| `slot_name` | `text` | A unique, cluster-wide identifier for the replication slot +| `spill_txns` | `bigint` | Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded `logical_decoding_work_mem`. The counter gets incremented for both top-level transactions and subtransactions. +| `spill_count` | `bigint` | Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times. +| `spill_bytes` | `bigint` | Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning `logical_decoding_work_mem`. +| `stream_txns` | `bigint` | Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded `logical_decoding_work_mem`. Streaming only works with top-level transactions (subtransactions can't be streamed independently), so the counter is not incremented for subtransactions. +| `stream_count` | `bigint` | Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times. +| `stream_bytes` | `bigint` | Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune `logical_decoding_work_mem`. +| `total_txns` | `bigint` | Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled. +| `total_bytes` | `bigint` | Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled. 
+| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset |==== ==== `pg_stat_wal_receiver` @@ -553,22 +553,22 @@ The `pg_stat_wal_receiver` view will contain only one row, showing statistics ab .**`pg_stat_wal_receiver` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of the WAL receiver process -| `status` `text`Activity status of the WAL receiver process -| `receive_start_lsn` `pg_lsn`First write-ahead log location used when WAL receiver is started -| `receive_start_tli` `integer`First timeline number used when WAL receiver is started -| `written_lsn` `pg_lsn`Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks. -| `flushed_lsn` `pg_lsn`Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started -| `received_tli` `integer`Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started -| `last_msg_send_time` `timestamp with time zone`Send time of last message received from origin WAL sender -| `last_msg_receipt_time` `timestamp with time zone`Receipt time of last message received from origin WAL sender -| `latest_end_lsn` `pg_lsn`Last write-ahead log location reported to origin WAL sender | -| `latest_end_time` `timestamp with time zone`Time of last write-ahead log location reported to origin WAL sender -| `slot_name` `text`Replication slot name used by this WAL receiver -| `sender_host` `text`Host of the IvorySQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with `/`.) -| `sender_port` `integer`Port number of the IvorySQL instance this WAL receiver is connected to. -| `conninfo` `text`Connection string used by this WAL receiver, with security-sensitive fields obfuscated. +| Column | Type | Description +| `pid` | `integer` | Process ID of the WAL receiver process +| `status` | `text` | Activity status of the WAL receiver process +| `receive_start_lsn` | `pg_lsn` | First write-ahead log location used when WAL receiver is started +| `receive_start_tli` | `integer` | First timeline number used when WAL receiver is started +| `written_lsn` | `pg_lsn` | Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks. 
+| `flushed_lsn` | `pg_lsn` | Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started
+| `received_tli` | `integer` | Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started
+| `last_msg_send_time` | `timestamp with time zone` | Send time of last message received from origin WAL sender
+| `last_msg_receipt_time` | `timestamp with time zone` | Receipt time of last message received from origin WAL sender
+| `latest_end_lsn` | `pg_lsn` | Last write-ahead log location reported to origin WAL sender
+| `latest_end_time` | `timestamp with time zone` | Time of last write-ahead log location reported to origin WAL sender
+| `slot_name` | `text` | Replication slot name used by this WAL receiver
+| `sender_host` | `text` | Host of the IvorySQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with `/`.)
+| `sender_port` | `integer` | Port number of the IvorySQL instance this WAL receiver is connected to.
+| `conninfo` | `text` | Connection string used by this WAL receiver, with security-sensitive fields obfuscated.
|====
==== `pg_stat_recovery_prefetch`
@@ -577,33 +577,33 @@ The `pg_stat_recovery_prefetch` view will contain only one row. The columns `wal
.**`pg_stat_recovery_prefetch` View**
|====
-| Column TypeDescription
-| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset
-| `prefetch` `bigint`Number of blocks prefetched because they were not in the buffer pool
-| `hit` `bigint`Number of blocks not prefetched because they were already in the buffer pool
-| `skip_init` `bigint`Number of blocks not prefetched because they would be zero-initialized
-| `skip_new` `bigint`Number of blocks not prefetched because they didn't exist yet |
-| `skip_fpw` `bigint`Number of blocks not prefetched because a full page image was included in the WAL
-| `skip_rep` `bigint`Number of blocks not prefetched because they were already recently prefetched
-| `wal_distance` `int`How many bytes ahead the prefetcher is looking
-| `block_distance` `int`How many blocks ahead the prefetcher is looking
-| `io_depth` `int`How many prefetches have been initiated but are not yet known to have completed
+| Column | Type | Description
+| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset
+| `prefetch` | `bigint` | Number of blocks prefetched because they were not in the buffer pool
+| `hit` | `bigint` | Number of blocks not prefetched because they were already in the buffer pool
+| `skip_init` | `bigint` | Number of blocks not prefetched because they would be zero-initialized
+| `skip_new` | `bigint` | Number of blocks not prefetched because they didn't exist yet
+| `skip_fpw` | `bigint` | Number of blocks not prefetched because a full page image was included in the WAL
+| `skip_rep` | `bigint` | Number of blocks not prefetched because they were already recently prefetched
+| `wal_distance` | `int` | How many bytes ahead the prefetcher is looking
+| `block_distance` | `int` | How many blocks ahead the prefetcher is looking
+| `io_depth` | `int` | How many prefetches have been initiated but are not yet known to have completed
|====
==== `pg_stat_subscription`
.**`pg_stat_subscription` View** |==== -| Column TypeDescription -| `subid` `oid`OID of the subscription -| `subname` `name`Name of the subscription -| `pid` `integer`Process ID of the subscription worker process -| `relid` `oid`OID of the relation that the worker is synchronizing; null for the main apply worker -| `received_lsn` `pg_lsn`Last write-ahead log location received, the initial value of this field being 0 -| `last_msg_send_time` `timestamp with time zone`Send time of last message received from origin WAL sender -| `last_msg_receipt_time` `timestamp with time zone`Receipt time of last message received from origin WAL sender -| `latest_end_lsn` `pg_lsn`Last write-ahead log location reported to origin WAL sender -| `latest_end_time` `timestamp with time zone`Time of last write-ahead log location reported to origin WAL sender +| Column | Type | Description +| `subid` | `oid` | OID of the subscription +| `subname` | `name` | Name of the subscription +| `pid` | `integer` | Process ID of the subscription worker process +| `relid` | `oid` | OID of the relation that the worker is synchronizing; null for the main apply worker +| `received_lsn` | `pg_lsn` | Last write-ahead log location received, the initial value of this field being 0 +| `last_msg_send_time` | `timestamp with time zone` | Send time of last message received from origin WAL sender +| `last_msg_receipt_time` | `timestamp with time zone` | Receipt time of last message received from origin WAL sender +| `latest_end_lsn` | `pg_lsn` | Last write-ahead log location reported to origin WAL sender +| `latest_end_time` | `timestamp with time zone` | Time of last write-ahead log location reported to origin WAL sender |==== ==== `pg_stat_subscription_stats` @@ -612,12 +612,12 @@ The `pg_stat_subscription_stats` view will contain one row per subscription. .**`pg_stat_subscription_stats` View** |==== -| Column TypeDescription -| `subid` `oid`OID of the subscription -| `subname` `name`Name of the subscription -| `apply_error_count` `bigint`Number of times an error occurred while applying changes -| `sync_error_count` `bigint`Number of times an error occurred during the initial table synchronization -| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset +| Column | Type | Description +| `subid` | `oid` | OID of the subscription +| `subname` | `name` | Name of the subscription +| `apply_error_count` | `bigint` | Number of times an error occurred while applying changes +| `sync_error_count` | `bigint` | Number of times an error occurred during the initial table synchronization +| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset |==== ==== `pg_stat_ssl` @@ -626,15 +626,15 @@ The `pg_stat_ssl` view will contain one row per backend or WAL sender process, s .**`pg_stat_ssl` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of a backend or WAL sender process -| `ssl` `boolean`True if SSL is used on this connection -| `version` `text`Version of SSL in use, or NULL if SSL is not in use on this connection -| `cipher` `text`Name of SSL cipher in use, or NULL if SSL is not in use on this connection -| `bits` `integer`Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection -| `client_dn` `text`Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. 
This field is truncated if the DN field is longer than `NAMEDATALEN` (64 characters in a standard build). -| `client_serial` `numeric`Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers). -| `issuer_dn` `text`DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like `client_dn`. +| Column | Type | Description +| `pid` | `integer` | Process ID of a backend or WAL sender process +| `ssl` | `boolean` | True if SSL is used on this connection +| `version` | `text` | Version of SSL in use, or NULL if SSL is not in use on this connection +| `cipher` | `text` | Name of SSL cipher in use, or NULL if SSL is not in use on this connection +| `bits` | `integer` | Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection +| `client_dn` | `text` | Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than `NAMEDATALEN` (64 characters in a standard build) +| `client_serial` | `numeric` | Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers) +| `issuer_dn` | `text` | DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like `client_dn` |==== ==== `pg_stat_gssapi` @@ -643,11 +643,11 @@ The `pg_stat_gssapi` view will contain one row per backend, showing information .**`pg_stat_gssapi` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of a backend -| `gss_authenticated` `boolean`True if GSSAPI authentication was used for this connection -| `principal` `text`Principal used to authenticate this connection, or NULL if GSSAPI was not used to authenticate this connection. This field is truncated if the principal is longer than `NAMEDATALEN` (64 characters in a standard build). -| `encrypted` `boolean`True if GSSAPI encryption is in use on this connection +| Column | Type | Description +| `pid` | `integer` | Process ID of a backend +| `gss_authenticated` | `boolean` | True if GSSAPI authentication was used for this connection +| `principal` | `text` | Principal used to authenticate this connection, or NULL if GSSAPI was not used. Truncated to `NAMEDATALEN` (64 characters in a standard build) if longer. 
+| `encrypted` | `boolean` | True if GSSAPI encryption is in use on this connection |==== ==== `pg_stat_archiver` @@ -656,13 +656,14 @@ The `pg_stat_archiver` view will always have a single row, containing data about .**`pg_stat_archiver` View** |==== -| `archived_count` `bigint`Number of WAL files that have been successfully archived -| `last_archived_wal` `text`Name of the WAL file most recently successfully archived -| `last_archived_time` `timestamp with time zone`Time of the most recent successful archive operation -| `failed_count` `bigint`Number of failed attempts for archiving WAL files -| `last_failed_wal` `text`Name of the WAL file of the most recent failed archival operation -| `last_failed_time` `timestamp with time zone`Time of the most recent failed archival operation -| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset +| Column | Type | Description +| `archived_count` | `bigint` | Number of WAL files that have been successfully archived +| `last_archived_wal` | `text` | Name of the WAL file most recently successfully archived +| `last_archived_time` | `timestamp with time zone` | Time of the most recent successful archive operation +| `failed_count` | `bigint` | Number of failed attempts for archiving WAL files +| `last_failed_wal` | `text` | Name of the WAL file of the most recent failed archival operation +| `last_failed_time` | `timestamp with time zone` | Time of the most recent failed archival operation +| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset |==== Normally, WAL files are archived in order, oldest to newest, but that is not guaranteed, and does not hold under special circumstances like when promoting a standby or after crash recovery. Therefore it is not safe to assume that all files older than `last_archived_wal` have also been successfully archived. 
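+For example, the view can be checked with a simple query to see whether archiving is keeping up (an illustrative command; any database in the cluster can be used for the connection):
+```
+$ psql -d postgres -c "SELECT archived_count, last_archived_wal, last_archived_time, failed_count, last_failed_wal, last_failed_time FROM pg_stat_archiver;"
+```
+A `last_failed_time` newer than `last_archived_time` indicates that the most recent archive attempt failed.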
@@ -673,18 +674,18 @@ The `pg_stat_bgwriter` view will always have a single row, containing global dat .**`pg_stat_bgwriter` View** |==== -| Column TypeDescription -| `checkpoints_timed` `bigint`Number of scheduled checkpoints that have been performed -| `checkpoints_req` `bigint`Number of requested checkpoints that have been performed -| `checkpoint_write_time` `double precision`Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds -| `checkpoint_sync_time` `double precision`Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds -| `buffers_checkpoint` `bigint`Number of buffers written during checkpoints -| `buffers_clean` `bigint`Number of buffers written by the background writer -| `maxwritten_clean` `bigint`Number of times the background writer stopped a cleaning scan because it had written too many buffers -| `buffers_backend` `bigint`Number of buffers written directly by a backend -| `buffers_backend_fsync` `bigint`Number of times a backend had to execute its own `fsync` call (normally the background writer handles those even when the backend does its own write) -| `buffers_alloc` `bigint`Number of buffers allocated -| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset +| Column | Type | Description +| `checkpoints_timed` | `bigint` | Number of scheduled checkpoints that have been performed +| `checkpoints_req` | `bigint` | Number of requested checkpoints that have been performed +| `checkpoint_write_time` | `double precision` | Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds +| `checkpoint_sync_time` | `double precision` | Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds +| `buffers_checkpoint` | `bigint` | Number of buffers written during checkpoints +| `buffers_clean` | `bigint` | Number of buffers written by the background writer +| `maxwritten_clean` | `bigint` | Number of times the background writer stopped a cleaning scan because it had written too many buffers +| `buffers_backend` | `bigint` | Number of buffers written directly by a backend +| `buffers_backend_fsync` | `bigint` | Number of times a backend had to execute its own `fsync` call (normally the background writer handles those even when the backend does its own write) +| `buffers_alloc` | `bigint` | Number of buffers allocated +| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset |==== ==== `pg_stat_wal` @@ -693,16 +694,23 @@ The `pg_stat_wal` view will always have a single row, containing data about WAL .**`pg_stat_wal` View** |==== -| Column TypeDescription -| `wal_records` `bigint`Total number of WAL records generated -| `wal_fpi` `bigint`Total number of WAL full page images generated -| `wal_bytes` `numeric`Total amount of WAL generated in bytes -| `wal_buffers_full` `bigint`Number of times WAL data was written to disk because WAL buffers became full -| `wal_write` `bigint`Number of times WAL buffers were written out to disk via `XLogWrite` request. 
-| `wal_sync` `bigint`Number of times WAL files were synced to disk via `issue_xlog_fsync` request (if https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FSYNC[fsync] is `on` and https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-SYNC-METHOD[wal_sync_method] is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero).
-| `wal_write_time` `double precision`Total amount of time spent writing WAL buffers to disk via `XLogWrite` request, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING[track_wal_io_timing] is enabled, otherwise zero). This includes the sync time when `wal_sync_method` is either `open_datasync` or `open_sync`.
-| `wal_sync_time` `double precision`Total amount of time spent syncing WAL files to disk via `issue_xlog_fsync` request, in milliseconds (if `track_wal_io_timing` is enabled, `fsync` is `on`, and `wal_sync_method` is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero).
-| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset
+| Column | Type | Description
+| `wal_records` | `bigint` | Total number of WAL records generated
+| `wal_fpi` | `bigint` | Total number of WAL full page images generated
+| `wal_bytes` | `numeric` | Total amount of WAL generated in bytes
+| `wal_buffers_full` | `bigint` | Number of times WAL data was written to disk because WAL buffers became full
+| `wal_write` | `bigint` | Number of times WAL buffers were written out to disk via `XLogWrite` request.
+| `wal_sync` | `bigint` | Number of times WAL files were synced to disk via `issue_xlog_fsync` request (if https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FSYNC[fsync] is `on` and https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-SYNC-METHOD[wal_sync_method] is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero).
+| `wal_write_time` | `double precision` | Total amount of time spent writing WAL buffers to disk via `XLogWrite` request, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING[track_wal_io_timing] is enabled, otherwise zero). This includes the sync time when `wal_sync_method` is either `open_datasync` or `open_sync`.
+| `wal_sync_time` | `double precision` | Total amount of time spent syncing WAL files to disk via `issue_xlog_fsync` request, in milliseconds (if `track_wal_io_timing` is enabled, `fsync` is `on`, and `wal_sync_method` is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero).
+| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset
|====
==== `pg_stat_database`
@@ -711,35 +719,35 @@ The `pg_stat_database` view will contain one row for each database in the cluste
.**`pg_stat_database` View**
|====
-| Column TypeDescription
-| `datid` `oid`OID of this database, or 0 for objects belonging to a shared relation |
-| `datname` `name`Name of this database, or `NULL` for shared objects. |
-| `numbackends` `integer`Number of backends currently connected to this database, or `NULL` for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.
| -| `xact_commit` `bigint`Number of transactions in this database that have been committed | -| `xact_rollback` `bigint`Number of transactions in this database that have been rolled back | -| `blks_read` `bigint`Number of disk blocks read in this database | -| `blks_hit` `bigint`Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the IvorySQL buffer cache, not the operating system's file system cache) | -| `tup_returned` `bigint`Number of live rows fetched by sequential scans and index entries returned by index scans in this database | -| `tup_fetched` `bigint`Number of live rows fetched by index scans in this database | -| `tup_inserted` `bigint`Number of rows inserted by queries in this database | -| `tup_updated` `bigint`Number of rows updated by queries in this database | -| `tup_deleted` `bigint`Number of rows deleted by queries in this database | -| `conflicts` `bigint`Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW[`pg_stat_database_conflicts`] for details.) | -| `temp_files` `bigint`Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES[log_temp_files] setting. | -| `temp_bytes` `bigint`Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES[log_temp_files] setting. | -| `deadlocks` `bigint`Number of deadlocks detected in this database | -| `checksum_failures` `bigint`Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled. | -| `checksum_last_failure` `timestamp with time zone`Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled. 
| -| `blk_read_time` `double precision`Time spent reading data file blocks by backends in this database, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING[track_io_timing] is enabled, otherwise zero) |
-| `blk_write_time` `double precision`Time spent writing data file blocks by backends in this database, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING[track_io_timing] is enabled, otherwise zero) |
-| `session_time` `double precision`Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included) |
-| `active_time` `double precision`Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states `active` and `fastpath function call` in https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW[`pg_stat_activity`]) |
-| `idle_in_transaction_time` `double precision`Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states `idle in transaction` and `idle in transaction (aborted)` in https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW[`pg_stat_activity`])
-| `sessions` `bigint`Total number of sessions established to this database
-| `sessions_abandoned` `bigint`Number of database sessions to this database that were terminated because connection to the client was lost
-| `sessions_fatal` `bigint`Number of database sessions to this database that were terminated by fatal errors
-| `sessions_killed` `bigint`Number of database sessions to this database that were terminated by operator intervention
-| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset
+| Column | Type | Description
+| `datid` | `oid` | OID of this database, or 0 for objects belonging to a shared relation
+| `datname` | `name` | Name of this database, or `NULL` for shared objects.
+| `numbackends` | `integer` | Number of backends currently connected to this database, or `NULL` for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.
+| `xact_commit` | `bigint` | Number of transactions in this database that have been committed
+| `xact_rollback` | `bigint` | Number of transactions in this database that have been rolled back
+| `blks_read` | `bigint` | Number of disk blocks read in this database
+| `blks_hit` | `bigint` | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the IvorySQL buffer cache, not the operating system's file system cache)
+| `tup_returned` | `bigint` | Number of live rows fetched by sequential scans and index entries returned by index scans in this database
+| `tup_fetched` | `bigint` | Number of live rows fetched by index scans in this database
+| `tup_inserted` | `bigint` | Number of rows inserted by queries in this database
+| `tup_updated` | `bigint` | Number of rows updated by queries in this database
+| `tup_deleted` | `bigint` | Number of rows deleted by queries in this database
+| `conflicts` | `bigint` | Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW[`pg_stat_database_conflicts`] for details.)
+| `temp_files` | `bigint` | Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing) and regardless of the https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES[log_temp_files] setting.
+| `temp_bytes` | `bigint` | Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES[log_temp_files] setting.
+| `deadlocks` | `bigint` | Number of deadlocks detected in this database
+| `checksum_failures` | `bigint` | Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled.
+| `checksum_last_failure` | `timestamp with time zone` | Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled.
+| `blk_read_time` | `double precision` | Time spent reading data file blocks by backends in this database, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING[track_io_timing] is enabled, otherwise zero)
+| `blk_write_time` | `double precision` | Time spent writing data file blocks by backends in this database, in milliseconds (if https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING[track_io_timing] is enabled, otherwise zero)
+| `session_time` | `double precision` | Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included)
+| `active_time` | `double precision` | Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states `active` and `fastpath function call` in https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW[`pg_stat_activity`])
+| `idle_in_transaction_time` | `double precision` | Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states `idle in transaction` and `idle in transaction (aborted)` in https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW[`pg_stat_activity`])
+| `sessions` | `bigint` | Total number of sessions established to this database
+| `sessions_abandoned` | `bigint` | Number of database sessions to this database that were terminated because connection to the client was lost
+| `sessions_fatal` | `bigint` | Number of database sessions to this database that were terminated by fatal errors
+| `sessions_killed` | `bigint` | Number of database sessions to this database that were terminated by operator intervention
+| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset
|====
==== `pg_stat_database_conflicts`
@@ -748,14 +756,14 @@ The `pg_stat_database_conflicts` view will contain one row per database, showing
.**`pg_stat_database_conflicts` View**
|====
-| Column TypeDescription
-| `datid` `oid`OID of a database
-| `datname` `name`Name
of this database -| `confl_tablespace` `bigint`Number of queries in this database that have been canceled due to dropped tablespaces -| `confl_lock` `bigint`Number of queries in this database that have been canceled due to lock timeouts -| `confl_snapshot` `bigint`Number of queries in this database that have been canceled due to old snapshots -| `confl_bufferpin` `bigint`Number of queries in this database that have been canceled due to pinned buffers -| `confl_deadlock` `bigint`Number of queries in this database that have been canceled due to deadlocks +| Column | Type | Description +| `datid` | `oid` | OID of a database +| `datname` | `name` | Name of this database +| `confl_tablespace` | `bigint` | Number of queries in this database that have been canceled due to dropped tablespaces +| `confl_lock` | `bigint` | Number of queries in this database that have been canceled due to lock timeouts +| `confl_snapshot` | `bigint` | Number of queries in this database that have been canceled due to old snapshots +| `confl_bufferpin` | `bigint` | Number of queries in this database that have been canceled due to pinned buffers +| `confl_deadlock` | `bigint` | Number of queries in this database that have been canceled due to deadlocks |==== ==== `pg_stat_all_tables` @@ -764,30 +772,30 @@ The `pg_stat_all_tables` view will contain one row for each table in the current .**`pg_stat_all_tables` View** |==== -| Column TypeDescription -| `relid` `oid`OID of a table -| `schemaname` `name`Name of the schema that this table is in -| `relname` `name`Name of this table -| `seq_scan` `bigint`Number of sequential scans initiated on this table -| `seq_tup_read` `bigint`Number of live rows fetched by sequential scans -| `idx_scan` `bigint`Number of index scans initiated on this table -| `idx_tup_fetch` `bigint`Number of live rows fetched by index scans -| `n_tup_ins` `bigint`Number of rows inserted -| `n_tup_upd` `bigint`Number of rows updated (includes https://www.postgresql.org/docs/current/storage-hot.html[HOT updated rows]) -| `n_tup_del` `bigint`Number of rows deleted -| `n_tup_hot_upd` `bigint`Number of rows HOT updated (i.e., with no separate index update required) -| `n_live_tup` `bigint`Estimated number of live rows -| `n_dead_tup` `bigint`Estimated number of dead rows -| `n_mod_since_analyze` `bigint`Estimated number of rows modified since this table was last analyzed -| `n_ins_since_vacuum` `bigint`Estimated number of rows inserted since this table was last vacuumed -| `last_vacuum` `timestamp with time zone`Last time at which this table was manually vacuumed (not counting `VACUUM FULL`) -| `last_autovacuum` `timestamp with time zone`Last time at which this table was vacuumed by the autovacuum daemon -| `last_analyze` `timestamp with time zone`Last time at which this table was manually analyzed -| `last_autoanalyze` `timestamp with time zone`Last time at which this table was analyzed by the autovacuum daemon -| `vacuum_count` `bigint`Number of times this table has been manually vacuumed (not counting `VACUUM FULL`) -| `autovacuum_count` `bigint`Number of times this table has been vacuumed by the autovacuum daemon -| `analyze_count` `bigint`Number of times this table has been manually analyzed -| `autoanalyze_count` `bigint`Number of times this table has been analyzed by the autovacuum daemon +| Column | Type | Description +| `relid` | `oid` | OID of a table +| `schemaname` | `name` | Name of the schema that this table is in +| `relname` | `name` | Name of this table +| `seq_scan` | `bigint` | Number of 
sequential scans initiated on this table +| `seq_tup_read` | `bigint` | Number of live rows fetched by sequential scans +| `idx_scan` | `bigint` | Number of index scans initiated on this table +| `idx_tup_fetch` | `bigint` | Number of live rows fetched by index scans +| `n_tup_ins` | `bigint` | Number of rows inserted +| `n_tup_upd` | `bigint` | Number of rows updated (includes HOT updated rows) +| `n_tup_del` | `bigint` | Number of rows deleted +| `n_tup_hot_upd` | `bigint` | Number of rows HOT updated (i.e., with no separate index update required) +| `n_live_tup` | `bigint` | Estimated number of live rows +| `n_dead_tup` | `bigint` | Estimated number of dead rows +| `n_mod_since_analyze` | `bigint` | Estimated number of rows modified since this table was last analyzed +| `n_ins_since_vacuum` | `bigint` | Estimated number of rows inserted since this table was last vacuumed +| `last_vacuum` | `timestamp with time zone` | Last time at which this table was manually vacuumed (not counting VACUUM FULL) +| `last_autovacuum` | `timestamp with time zone` | Last time at which this table was vacuumed by the autovacuum daemon +| `last_analyze` | `timestamp with time zone` | Last time at which this table was manually analyzed +| `last_autoanalyze` | `timestamp with time zone` | Last time at which this table was analyzed by the autovacuum daemon +| `vacuum_count` | `bigint` | Number of times this table has been manually vacuumed (not counting VACUUM FULL) +| `autovacuum_count` | `bigint` | Number of times this table has been vacuumed by the autovacuum daemon +| `analyze_count` | `bigint` | Number of times this table has been manually analyzed +| `autoanalyze_count` | `bigint` | Number of times this table has been analyzed by the autovacuum daemon |==== ==== `pg_stat_all_indexes` @@ -796,15 +804,15 @@ The `pg_stat_all_indexes` view will contain one row for each index in the curren .**`pg_stat_all_indexes` View** |==== -| Column TypeDescription -| `relid` `oid`OID of the table for this index -| `indexrelid` `oid`OID of this index -| `schemaname` `name`Name of the schema this index is in -| `relname` `name`Name of the table for this index -| `indexrelname` `name`Name of this index -| `idx_scan` `bigint`Number of index scans initiated on this index -| `idx_tup_read` `bigint`Number of index entries returned by scans on this index -| `idx_tup_fetch` `bigint`Number of live table rows fetched by simple index scans using this index +| Column | Type | Description +| `relid` | `oid` | OID of the table for this index +| `indexrelid` | `oid` | OID of this index +| `schemaname` | `name` | Name of the schema this index is in +| `relname` | `name` | Name of the table for this index +| `indexrelname` | `name` | Name of this index +| `idx_scan` | `bigint` | Number of index scans initiated on this index +| `idx_tup_read` | `bigint` | Number of index entries returned by scans on this index +| `idx_tup_fetch` | `bigint` | Number of live table rows fetched by simple index scans using this index |==== Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the `pg_stat_all_indexes`.`idx_tup_read` count(s) for the index(es) it uses, and it increments the `pg_stat_all_tables`.`idx_tup_fetch` count for the table, but it does not affect `pg_stat_all_indexes`.`idx_tup_fetch`. 
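+For example, a query along the following lines lists the least-scanned user indexes together with their tuple counters; this is only a sketch, and the database name `ivorysql`, the schema filter, and the LIMIT are illustrative choices:
+```
+$ psql -d ivorysql -c "SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch FROM pg_stat_all_indexes WHERE schemaname NOT IN ('pg_catalog','information_schema') ORDER BY idx_scan LIMIT 20;"
+```
+Indexes that show very few scans over a long observation window are candidates for review, keeping in mind the bitmap-scan accounting described above.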
The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale. @@ -820,18 +828,18 @@ The `pg_statio_all_tables` view will contain one row for each table in the curre .**`pg_statio_all_tables` View** |==== -| Column TypeDescription -| `relid` `oid`OID of a table -| `schemaname` `name`Name of the schema that this table is in -| `relname` `name`Name of this table -| `heap_blks_read` `bigint`Number of disk blocks read from this table -| `heap_blks_hit` `bigint`Number of buffer hits in this table -| `idx_blks_read` `bigint`Number of disk blocks read from all indexes on this table -| `idx_blks_hit` `bigint`Number of buffer hits in all indexes on this table -| `toast_blks_read` `bigint`Number of disk blocks read from this table's TOAST table (if any) -| `toast_blks_hit` `bigint`Number of buffer hits in this table's TOAST table (if any) -| `tidx_blks_read` `bigint`Number of disk blocks read from this table's TOAST table indexes (if any) -| `tidx_blks_hit` `bigint`Number of buffer hits in this table's TOAST table indexes (if any) +| Column | Type | Description +| `relid` | `oid` | OID of a table +| `schemaname` | `name` | Name of the schema that this table is in +| `relname` | `name` | Name of this table +| `heap_blks_read` | `bigint` | Number of disk blocks read from this table +| `heap_blks_hit` | `bigint` | Number of buffer hits in this table +| `idx_blks_read` | `bigint` | Number of disk blocks read from all indexes on this table +| `idx_blks_hit` | `bigint` | Number of buffer hits in all indexes on this table +| `toast_blks_read` | `bigint` | Number of disk blocks read from this table's TOAST table (if any) +| `toast_blks_hit` | `bigint` | Number of buffer hits in this table's TOAST table (if any) +| `tidx_blks_read` | `bigint` | Number of disk blocks read from this table's TOAST table indexes (if any) +| `tidx_blks_hit` | `bigint` | Number of buffer hits in this table's TOAST table indexes (if any) |==== ==== `pg_statio_all_indexes` @@ -840,14 +848,14 @@ The `pg_statio_all_indexes` view will contain one row for each index in the curr .**`pg_statio_all_indexes` View** |==== -| Column TypeDescription -| `relid` `oid`OID of the table for this index -| `indexrelid` `oid`OID of this index -| `schemaname` `name`Name of the schema this index is in -| `relname` `name`Name of the table for this index -| `indexrelname` `name`Name of this index -| `idx_blks_read` `bigint`Number of disk blocks read from this index -| `idx_blks_hit` `bigint`Number of buffer hits in this index +| Column | Type | Description +| `relid` | `oid` | OID of the table for this index +| `indexrelid` | `oid` | OID of this index +| `schemaname` | `name` | Name of the schema this index is in +| `relname` | `name` | Name of the table for this index +| `indexrelname` | `name` | Name of this index +| `idx_blks_read` | `bigint` | Number of disk blocks read from this index +| `idx_blks_hit` | `bigint` | Number of buffer hits in this index |==== ==== `pg_statio_all_sequences` @@ -856,12 +864,12 @@ The `pg_statio_all_sequences` view will contain one row for each sequence in the .**`pg_statio_all_sequences` View** |==== -| Column TypeDescription -| `relid` `oid`OID of a sequence -| `schemaname` `name`Name of the schema this sequence is in -| `relname` `name`Name of this sequence -| `blks_read` `bigint`Number of disk blocks read from this sequence -| `blks_hit` `bigint`Number of buffer hits in this sequence +| Column | 
Type | Description +| `relid` | `oid` | OID of a sequence +| `schemaname` | `name` | Name of the schema this sequence is in +| `relname` | `name` | Name of this sequence +| `blks_read` | `bigint` | Number of disk blocks read from this sequence +| `blks_hit` | `bigint` | Number of buffer hits in this sequence |==== ==== `pg_stat_user_functions` @@ -870,13 +878,13 @@ The `pg_stat_user_functions` view will contain one row for each tracked function .**`pg_stat_user_functions` View** |==== -| Column TypeDescription -| `funcid` `oid`OID of a function -| `schemaname` `name`Name of the schema this function is in -| `funcname` `name`Name of this function -| `calls` `bigint`Number of times this function has been called -| `total_time` `double precision`Total time spent in this function and all other functions called by it, in milliseconds -| `self_time` `double precision`Total time spent in this function itself, not including other functions called by it, in milliseconds +| Column | Type | Description +| `funcid` | `oid` | OID of a function +| `schemaname` | `name` | Name of the schema this function is in +| `funcname` | `name` | Name of this function +| `calls` | `bigint` | Number of times this function has been called +| `total_time` | `double precision` | Total time spent in this function and all other functions called by it, in milliseconds +| `self_time` | `double precision` | Total time spent in this function itself, not including other functions called by it, in milliseconds |==== ==== `pg_stat_slru` @@ -885,16 +893,16 @@ IvorySQL accesses certain on-disk information via *SLRU* (simple least-recently- .**`pg_stat_slru` View** |==== -| Column TypeDescription -| `name` `text`Name of the SLRU -| `blks_zeroed` `bigint`Number of blocks zeroed during initializations -| `blks_hit` `bigint`Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache) -| `blks_read` `bigint`Number of disk blocks read for this SLRU -| `blks_written` `bigint`Number of disk blocks written for this SLRU -| `blks_exists` `bigint`Number of blocks checked for existence for this SLRU -| `flushes` `bigint`Number of flushes of dirty data for this SLRU -| `truncates` `bigint`Number of truncates for this SLRU -| `stats_reset` `timestamp with time zone`Time at which these statistics were last reset +| Column | Type | Description +| `name` | `text` | Name of the SLRU +| `blks_zeroed` | `bigint` | Number of blocks zeroed during initializations +| `blks_hit` | `bigint` | Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache) +| `blks_read` | `bigint` | Number of disk blocks read for this SLRU +| `blks_written` | `bigint` | Number of disk blocks written for this SLRU +| `blks_exists` | `bigint` | Number of blocks checked for existence for this SLRU +| `flushes` | `bigint` | Number of flushes of dirty data for this SLRU +| `truncates` | `bigint` | Number of truncates for this SLRU +| `stats_reset` | `timestamp with time zone` | Time at which these statistics were last reset |==== ==== Statistics Functions @@ -903,18 +911,18 @@ Other ways of looking at the statistics can be set up by writing queries that us .**Additional Statistics Functions** |==== -| FunctionDescription -| `pg_backend_pid` () → `integer`Returns the process ID of the server process attached to the current session. 
-| `pg_stat_get_activity` ( `integer` ) → `setof record`Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if `NULL` is specified. The fields returned are a subset of those in the `pg_stat_activity` view. -| `pg_stat_get_snapshot_timestamp` () → `timestamp with time zone`Returns the timestamp of the current statistics snapshot, or NULL if no statistics snapshot has been taken. A snapshot is taken the first time cumulative statistics are accessed in a transaction if `stats_fetch_consistency` is set to `snapshot` -| `pg_stat_clear_snapshot` () → `void`Discards the current statistics snapshot or cached information. -| `pg_stat_reset` () → `void`Resets all statistics counters for the current database to zero.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_shared` ( `text` ) → `void`Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be `bgwriter` to reset all the counters shown in the `pg_stat_bgwriter` view, `archiver` to reset all the counters shown in the `pg_stat_archiver` view, `wal` to reset all the counters shown in the `pg_stat_wal` view or `recovery_prefetch` to reset all the counters shown in the `pg_stat_recovery_prefetch` view.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_single_table_counters` ( `oid` ) → `void`Resets statistics for a single table or index in the current database or shared across all databases in the cluster to zero.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_single_function_counters` ( `oid` ) → `void`Resets statistics for a single function in the current database to zero.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_slru` ( `text` ) → `void`Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the `pg_stat_slru` view for all SLRU caches are reset. The argument can be one of `CommitTs`, `MultiXactMember`, `MultiXactOffset`, `Notify`, `Serial`, `Subtrans`, or `Xact` to reset the counters for only that entry. If the argument is `other` (or indeed, any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_replication_slot` ( `text` ) → `void`Resets statistics of the replication slot defined by the argument. If the argument is `NULL`, resets statistics for all the replication slots.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. -| `pg_stat_reset_subscription_stats` ( `oid` ) → `void`Resets statistics for a single subscription shown in the `pg_stat_subscription_stats` view to zero. If the argument is `NULL`, reset statistics for all subscriptions.This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| Function | Description +| `pg_backend_pid` () → `integer` | Returns the process ID of the server process attached to the current session. 
+| `pg_stat_get_activity` ( `integer` ) → `setof record` | Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if `NULL` is specified. The fields returned are a subset of those in the `pg_stat_activity` view. +| `pg_stat_get_snapshot_timestamp` () → `timestamp with time zone` | Returns the timestamp of the current statistics snapshot, or NULL if no statistics snapshot has been taken. A snapshot is taken the first time cumulative statistics are accessed in a transaction if `stats_fetch_consistency` is set to `snapshot`. +| `pg_stat_clear_snapshot` () → `void` | Discards the current statistics snapshot or cached information. +| `pg_stat_reset` () → `void` | Resets all statistics counters for the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_shared` ( `text` ) → `void` | Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be `bgwriter`, `archiver`, `wal`, or `recovery_prefetch` to reset the counters shown in the respective views. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_single_table_counters` ( `oid` ) → `void` | Resets statistics for a single table or index in the current database or shared across all databases in the cluster to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_single_function_counters` ( `oid` ) → `void` | Resets statistics for a single function in the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_slru` ( `text` ) → `void` | Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the `pg_stat_slru` view for all SLRU caches are reset. The argument can be one of `CommitTs`, `MultiXactMember`, `MultiXactOffset`, `Notify`, `Serial`, `Subtrans`, or `Xact`. If the argument is `other` (or any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_replication_slot` ( `text` ) → `void` | Resets statistics of the replication slot defined by the argument. If the argument is `NULL`, resets statistics for all the replication slots. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. +| `pg_stat_reset_subscription_stats` ( `oid` ) → `void` | Resets statistics for a single subscription shown in the `pg_stat_subscription_stats` view to zero. If the argument is `NULL`, reset statistics for all subscriptions. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |==== .Warning @@ -932,19 +940,19 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, .**Per-Backend Statistics Functions** |==== -| FunctionDescription -| `pg_stat_get_backend_idset` () → `setof integer`Returns the set of currently active backend ID numbers (from 1 to the number of active backends). -| `pg_stat_get_backend_activity` ( `integer` ) → `text`Returns the text of this backend's most recent query. 
-| `pg_stat_get_backend_activity_start` ( `integer` ) → `timestamp with time zone`Returns the time when the backend's most recent query was started. -| `pg_stat_get_backend_client_addr` ( `integer` ) → `inet`Returns the IP address of the client connected to this backend. -| `pg_stat_get_backend_client_port` ( `integer` ) → `integer`Returns the TCP port number that the client is using for communication. -| `pg_stat_get_backend_dbid` ( `integer` ) → `oid`Returns the OID of the database this backend is connected to. -| `pg_stat_get_backend_pid` ( `integer` ) → `integer`Returns the process ID of this backend. -| `pg_stat_get_backend_start` ( `integer` ) → `timestamp with time zone`Returns the time when this process was started. -| `pg_stat_get_backend_userid` ( `integer` ) → `oid`Returns the OID of the user logged into this backend. -| `pg_stat_get_backend_wait_event_type` ( `integer` ) → `text`Returns the wait event type name if this backend is currently waiting, otherwise NULL. -| `pg_stat_get_backend_wait_event` ( `integer` ) → `text`Returns the wait event name if this backend is currently waiting, otherwise NULL. -| `pg_stat_get_backend_xact_start` ( `integer` ) → `timestamp with time zone`Returns the time when the backend's current transaction was started. +| Function | Description +| `pg_stat_get_backend_idset` () → `setof integer` | Returns the set of currently active backend ID numbers (from 1 to the number of active backends). +| `pg_stat_get_backend_activity` ( `integer` ) → `text` | Returns the text of this backend's most recent query. +| `pg_stat_get_backend_activity_start` ( `integer` ) → `timestamp with time zone` | Returns the time when the backend's most recent query was started. +| `pg_stat_get_backend_client_addr` ( `integer` ) → `inet` | Returns the IP address of the client connected to this backend. +| `pg_stat_get_backend_client_port` ( `integer` ) → `integer` | Returns the TCP port number that the client is using for communication. +| `pg_stat_get_backend_dbid` ( `integer` ) → `oid` | Returns the OID of the database this backend is connected to. +| `pg_stat_get_backend_pid` ( `integer` ) → `integer` | Returns the process ID of this backend. +| `pg_stat_get_backend_start` ( `integer` ) → `timestamp with time zone` | Returns the time when this process was started. +| `pg_stat_get_backend_userid` ( `integer` ) → `oid` | Returns the OID of the user logged into this backend. +| `pg_stat_get_backend_wait_event_type` ( `integer` ) → `text` | Returns the wait event type name if this backend is currently waiting, otherwise NULL. +| `pg_stat_get_backend_wait_event` ( `integer` ) → `text` | Returns the wait event name if this backend is currently waiting, otherwise NULL. +| `pg_stat_get_backend_xact_start` ( `integer` ) → `timestamp with time zone` | Returns the time when the backend's current transaction was started. |==== === View Locks @@ -965,19 +973,19 @@ Whenever `ANALYZE` is running, the `pg_stat_progress_analyze` view will contain .**`pg_stat_progress_analyze` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of backend. -| `datid` `oid`OID of the database to which this backend is connected. -| `datname` `name`Name of the database to which this backend is connected. -| `relid` `oid`OID of the table being analyzed. -| `phase` `text`Current processing phase. See https://www.postgresql.org/docs/current/progress-reporting.html#ANALYZE-PHASES[Table 1.37]. -| `sample_blks_total` `bigint`Total number of heap blocks that will be sampled. 
-| `sample_blks_scanned` `bigint`Number of heap blocks scanned. -| `ext_stats_total` `bigint`Number of extended statistics. -| `ext_stats_computed` `bigint`Number of extended statistics computed. This counter only advances when the phase is `computing extended statistics`. -| `child_tables_total` `bigint`Number of child tables. -| `child_tables_done` `bigint`Number of child tables scanned. This counter only advances when the phase is `acquiring inherited sample rows`. -| `current_child_table_relid` `oid`OID of the child table currently being scanned. This field is only valid when the phase is `acquiring inherited sample rows`. +| Column | Type | Description | +| `pid` | `integer` | Process ID of backend. | +| `datid` | `oid` | OID of the database to which this backend is connected. | +| `datname` | `name` | Name of the database to which this backend is connected. | +| `relid` | `oid` | OID of the table being analyzed. | +| `phase` | `text` | Current processing phase. See https://www.postgresql.org/docs/current/progress-reporting.html#ANALYZE-PHASES[Table 1.37]. | +| `sample_blks_total` | `bigint` | Total number of heap blocks that will be sampled. | +| `sample_blks_scanned` | `bigint` | Number of heap blocks scanned. | +| `ext_stats_total` | `bigint` | Number of extended statistics. | +| `ext_stats_computed` | `bigint` | Number of extended statistics computed. This counter only advances when the phase is `computing extended statistics`. | +| `child_tables_total` | `bigint` | Number of child tables. | +| `child_tables_done` | `bigint` | Number of child tables scanned. This counter only advances when the phase is `acquiring inherited sample rows`. | +| `current_child_table_relid` | `oid` | OID of the child table currently being scanned. This field is only valid when the phase is `acquiring inherited sample rows`. | |==== .**ANALYZE Phases** @@ -1002,23 +1010,23 @@ Whenever `CREATE INDEX` or `REINDEX` is running, the `pg_stat_progress_create_in .**`pg_stat_progress_create_index` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of backend. -| `datid` `oid`OID of the database to which this backend is connected. -| `datname` `name`Name of the database to which this backend is connected. -| `relid` `oid`OID of the table on which the index is being created. -| `index_relid` `oid`OID of the index being created or reindexed. During a non-concurrent `CREATE INDEX`, this is 0. -| `command` `text`The command that is running: `CREATE INDEX`, `CREATE INDEX CONCURRENTLY`, `REINDEX`, or `REINDEX CONCURRENTLY`. -| `phase` `text`Current processing phase of index creation. See https://www.postgresql.org/docs/current/progress-reporting.html#CREATE-INDEX-PHASES[Table 1.39]. -| `lockers_total` `bigint`Total number of lockers to wait for, when applicable. -| `lockers_done` `bigint`Number of lockers already waited for. -| `current_locker_pid` `bigint`Process ID of the locker currently being waited for. -| `blocks_total` `bigint`Total number of blocks to be processed in the current phase. -| `blocks_done` `bigint`Number of blocks already processed in the current phase. -| `tuples_total` `bigint`Total number of tuples to be processed in the current phase. -| `tuples_done` `bigint`Number of tuples already processed in the current phase. -| `partitions_total` `bigint`When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created. This field is `0` during a `REINDEX`. 
-| `partitions_done` `bigint`When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been created. This field is `0` during a `REINDEX`. +| Column | Type | Description | +| `pid` | `integer` | Process ID of backend. | +| `datid` | `oid` | OID of the database to which this backend is connected. | +| `datname` | `name` | Name of the database to which this backend is connected. | +| `relid` | `oid` | OID of the table on which the index is being created. | +| `index_relid` | `oid` | OID of the index being created or reindexed. During a non-concurrent `CREATE INDEX`, this is 0. | +| `command` | `text` | The command that is running: `CREATE INDEX`, `CREATE INDEX CONCURRENTLY`, `REINDEX`, or `REINDEX CONCURRENTLY`. | +| `phase` | `text` | Current processing phase of index creation. See https://www.postgresql.org/docs/current/progress-reporting.html#CREATE-INDEX-PHASES[Table 1.39]. | +| `lockers_total` | `bigint` | Total number of lockers to wait for, when applicable. | +| `lockers_done` | `bigint` | Number of lockers already waited for. | +| `current_locker_pid` | `bigint` | Process ID of the locker currently being waited for. | +| `blocks_total` | `bigint` | Total number of blocks to be processed in the current phase. | +| `blocks_done` | `bigint` | Number of blocks already processed in the current phase. | +| `tuples_total` | `bigint` | Total number of tuples to be processed in the current phase. | +| `tuples_done` | `bigint` | Number of tuples already processed in the current phase. | +| `partitions_total` | `bigint` | When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created. This field is `0` during a `REINDEX`. | +| `partitions_done` | `bigint` | When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been created. This field is `0` during a `REINDEX`. | |==== .**CREATE INDEX Phases** @@ -1042,18 +1050,18 @@ Whenever `VACUUM` is running, the `pg_stat_progress_vacuum` view will contain on .**`pg_stat_progress_vacuum` View** |==== -| Column TypeDescription -| `pid` `integer`Process ID of backend. -| `datid` `oid`OID of the database to which this backend is connected. -| `datname` `name`Name of the database to which this backend is connected. -| `relid` `oid`OID of the table being vacuumed. -| `phase` `text`Current processing phase of vacuum. -| `heap_blks_total` `bigint`Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this `VACUUM`. -| `heap_blks_scanned` `bigint`Number of heap blocks scanned. Because the https://www.postgresql.org/docs/current/storage-vm.html[visibility map] is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to `heap_blks_total` when the vacuum is complete. This counter only advances when the phase is `scanning heap`. -| `heap_blks_vacuumed` `bigint`Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is `vacuuming heap`. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments. -| `index_vacuum_count` `bigint`Number of completed index vacuum cycles. 
-| `max_dead_tuples` `bigint`Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM[maintenance_work_mem]. -| `num_dead_tuples` `bigint`Number of dead tuples collected since the last index vacuum cycle. +| Column | Type | Description | +| `pid` | `integer` | Process ID of backend. | +| `datid` | `oid` | OID of the database to which this backend is connected. | +| `datname` | `name` | Name of the database to which this backend is connected. | +| `relid` | `oid` | OID of the table being vacuumed. | +| `phase` | `text` | Current processing phase of vacuum. | +| `heap_blks_total` | `bigint` | Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this `VACUUM`. | +| `heap_blks_scanned` | `bigint` | Number of heap blocks scanned. Because the https://www.postgresql.org/docs/current/storage-vm.html[visibility map] is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to `heap_blks_total` when the vacuum is complete. This counter only advances when the phase is `scanning heap`. | +| `heap_blks_vacuumed` | `bigint` | Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is `vacuuming heap`. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments. | +| `index_vacuum_count` | `bigint` | Number of completed index vacuum cycles. | +| `max_dead_tuples` | `bigint` | Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM[maintenance_work_mem]. | +| `num_dead_tuples` | `bigint` | Number of dead tuples collected since the last index vacuum cycle. | |==== .**VACUUM Phases** @@ -1074,19 +1082,19 @@ Whenever `CLUSTER` or `VACUUM FULL` is running, the `pg_stat_progress_cluster` v .**`pg_stat_progress_cluster` View** |==== -| Column TypeDescriptio -| `pid` `integer`Process ID of backend. -| `datid` `oid`OID of the database to which this backend is connected. -| `datname` `name`Name of the database to which this backend is connected. -| `relid` `oid`OID of the table being clustered. -| `command` `text`The command that is running. Either `CLUSTER` or `VACUUM FULL`. -| `phase` `text`Current processing phase. See https://www.postgresql.org/docs/current/progress-reporting.html#CLUSTER-PHASES[Table 1.43]. -| `cluster_index_relid` `oid`If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero. -| `heap_tuples_scanned` `bigint`Number of heap tuples scanned. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`. -| `heap_tuples_written` `bigint`Number of heap tuples written. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`. -| `heap_blks_total` `bigint`Total number of heap blocks in the table. This number is reported as of the beginning of `seq scanning heap`. -| `heap_blks_scanned` `bigint`Number of heap blocks scanned. This counter only advances when the phase is `seq scanning heap`. -| `index_rebuild_count` `bigint`Number of indexes rebuilt. 
This counter only advances when the phase is `rebuilding index`. +| Column | Type | Description | +| `pid` | `integer` | Process ID of backend. | +| `datid` | `oid` | OID of the database to which this backend is connected. | +| `datname` | `name` | Name of the database to which this backend is connected. | +| `relid` | `oid` | OID of the table being clustered. | +| `command` | `text` | The command that is running. Either `CLUSTER` or `VACUUM FULL`. | +| `phase` | `text` | Current processing phase. See https://www.postgresql.org/docs/current/progress-reporting.html#CLUSTER-PHASES[Table 1.43]. | +| `cluster_index_relid` | `oid` | If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero. | +| `heap_tuples_scanned` | `bigint` | Number of heap tuples scanned. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`. | +| `heap_tuples_written` | `bigint` | Number of heap tuples written. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`. | +| `heap_blks_total` | `bigint` | Total number of heap blocks in the table. This number is reported as of the beginning of `seq scanning heap`. | +| `heap_blks_scanned` | `bigint` | Number of heap blocks scanned. This counter only advances when the phase is `seq scanning heap`. | +| `index_rebuild_count` | `bigint` | Number of indexes rebuilt. This counter only advances when the phase is `rebuilding index`. | |==== .**CLUSTER and VACUUM FULL Phases** diff --git a/EN/modules/ROOT/pages/master/4.1.adoc b/EN/modules/ROOT/pages/master/4.1.adoc index 7bbf027..aad96b6 100644 --- a/EN/modules/ROOT/pages/master/4.1.adoc +++ b/EN/modules/ROOT/pages/master/4.1.adoc @@ -15,7 +15,7 @@ The installation methods for IvorySQL include the following five: - <> -This chapter will provide detailed instructions on the installation, execution, and uninstallation processes for each method. For a quicker access to IvorySQL, please refer to xref:v4.5/3.adoc#quick-installation[Quick installation]. +This chapter will provide detailed instructions on the installation, execution, and uninstallation processes for each method. For a quicker access to IvorySQL, please refer to xref:v5.0/3.adoc#quick-installation[Quick installation]. Before getting started, please create an user and grant it root privileges. All the installation steps will be performed by this user. Here we just name it 'ivorysql'. @@ -25,49 +25,28 @@ Before getting started, please create an user and grant it root privileges. 
All Create or edit IvorySQL yum repository configuration /etc/yum.repos.d/ivorysql.repo ``` vim /etc/yum.repos.d/ivorysql.repo -[ivorysql4] -name=IvorySQL Server 4 $releasever - $basearch -baseurl=https://yum.highgo.com/dists/ivorysql-rpms/4/redhat/rhel-$releasever-$basearch +[ivorysql5] +name=IvorySQL Server 5 $releasever - $basearch +baseurl=https://yum.highgo.com/dists/ivorysql-rpms/5/redhat/rhel-$releasever-$basearch enabled=1 gpgcheck=0 ``` After saving and exiting, you can install IvorySQL 4 with the following steps ``` -$ sudo dnf install -y IvorySQL-4.5 +$ sudo dnf install -y ivorysql5-5.0 ``` -** Checking installation results -``` -dnf search IvorySQL -``` -Details: -|==== -| id | Package name | Description -| 1 | ivorysql4.x86_64 | IvorySQL client programs and lib files -| 2 | ivorysql4-contrib.x86_64 | Contributed source code and binary files released with IvorySQL -| 3 | ivorysql4-devel.x86_64 | IvorySQL development header files and libraries -| 4 | ivorysql4-docs.x86_64 | Additional docs for IvorySQL -| 5 | ivorysql4-libs.x86_64 | Shared libraries required by all IvorySQL clients -| 6 | ivorysql4-llvmjit.x86_64 | Instant compilation support for IvorySQL -| 7 | ivorysql4-plperl.x86_64 | Perl, a procedural language for IvorySQL -| 8 | ivorysql4-plpython3.x86_64 | Python3, a procedural language for IvorySQL -| 9 | ivorysql4-pltcl.x86_64 | Tcl, a procedural language for IvorySQL -| 10 | ivorysql4-server.x86_64 | The programs required to create and run an IvorySQL server -| 11 | Ivorysql4-test.x86_64 | Test suite released with IvorySQL -| 12 | ivorysql-release.noarch | Yum Source Configuration RPM Package of HighGo -|==== - [[Docker-installation]] == Docker installation ** Get IvorySQL image from Docker Hub ``` -$ docker pull ivorysql/ivorysql:4.5-ubi8 +$ docker pull ivorysql/ivorysql:5.0-ubi8 ``` ** Run IvorySQL ``` -$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:4.5-ubi8 +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 ``` -e Parameter Explanation |==== @@ -95,7 +74,7 @@ $ sudo dnf install -y lz4 libicu libxslt python3 ``` ** Getting rpms ``` -$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_4.5/IvorySQL-4.5-a50789d-20250304.x86_64.rpm +$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.x86_64.rpm ``` ** Installing rpms @@ -105,7 +84,7 @@ Use the following command to install all the rpms: ``` $ sudo yum --disablerepo=* localinstall *.rpm ``` -IvorySQL then will be installed in the /opt/IvorySQL-4.5/ directory. +IvorySQL then will be installed in the /usr/ivory-5/ directory. 
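+If you want to confirm the installation before moving on, quick checks such as the following can be used (a sketch; the binary path assumes the default /usr/ivory-5/ prefix mentioned above):
+```
+$ /usr/ivory-5/bin/postgres --version
+$ rpm -qa | grep -i ivorysql
+```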
[[Source-code-installation]] == Source code installation @@ -118,7 +97,7 @@ $ sudo dnf groupinstall -y 'Development Tools' ``` $ git clone https://github.com/IvorySQL/IvorySQL.git $ cd IvorySQL -$ git checkout -b IVORY_REL_4_STABLE origin/IVORY_REL_4_STABLE +$ git checkout -b IVORY_REL_5_STABLE origin/IVORY_REL_5_STABLE ``` ** Configuring @@ -126,7 +105,7 @@ $ git checkout -b IVORY_REL_4_STABLE origin/IVORY_REL_4_STABLE In the IvorySQL directory run the following command with --prefix to specify the directory where you want the database to be installed: ``` -$ ./configure --prefix=/usr/local/ivorysql/ivorysql-4 +$ ./configure --prefix=/usr/local/ivorysql/ivorysql-5 ``` ** Compiling @@ -160,23 +139,23 @@ $ sudo apt -y install pkg-config libreadline-dev libicu-dev libldap2-dev uuid-de ** Getting deb ``` -$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_4.5/IvorySQL-4.5-a50789d-20250304.amd64.deb +$ sudo wget https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-a50789d-20250304.amd64.deb ``` ** Installing deb ``` -$ sudo dpkg -i IvorySQL-4.5-a50789d-20250304.amd64.deb +$ sudo dpkg -i IvorySQL-5.0-a50789d-20250304.amd64.deb ``` -IvorySQL will then be installed in the /opt/IvorySQL-4.5/ directory. +IvorySQL will then be installed in the /usr/ivory-5/ directory. == Start Database Users following the instructions in <>, <>, <> and <> need to manually start the database. ** Granting permissions -Execute the following command to grant permissions to the installation user. The example user is ivorysql, and the installation directory is /opt/IvorySQL-4.5/: +Execute the following command to grant permissions to the installation user. The example user is ivorysql, and the installation directory is /usr/ivory-5/: ``` -$ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5/ +$ sudo chown -R ivorysql:ivorysql /usr/ivory-5/ ``` [[setting-environment-variables]] ** Setting environment variables @@ -185,9 +164,9 @@ $ sudo chown -R ivorysql:ivorysql /opt/IvorySQL-4.5/ Add below contents in ~/.bash_profile file and source to make it effective: ``` -PATH=/opt/IvorySQL-4.5/bin:$PATH +PATH=/usr/ivory-5/bin:$PATH export PATH -PGDATA=/opt/IvorySQL-4.5/data +PGDATA=/usr/ivory-5/data export PGDATA ``` ``` @@ -197,8 +176,8 @@ $ source ~/.bash_profile ** Initializing database ``` -$ mkdir /opt/IvorySQL-4.5/data -$ initdb -D /opt/IvorySQL-4.5/data +$ mkdir /usr/ivory-5/data +$ initdb -D /usr/ivory-5/data ``` .... The -D option specifies the directory where the database cluster should be stored. This is the only information required by initdb, but you can avoid writing it by setting the PGDATA environment variable, which can be convenient since the database server can find the database directory later by the same variable. @@ -209,7 +188,7 @@ $ initdb -D /opt/IvorySQL-4.5/data ** Starting IvorySQL service ``` -$ pg_ctl -D /opt/IvorySQL-4.5/data -l ivory.log start +$ pg_ctl -D /usr/ivory-5/data -l ivory.log start ``` The -D option specifies the file system location of the database configuration files. If this option is omitted, the environment variable PGDATA in <> is used. -l option appends the server log output to filename. If the file does not exist, it is created. @@ -220,7 +199,7 @@ For more options, refer to pg_ctl --help. Confirm it's successfully started: ``` $ ps -ef | grep postgres -ivorysql 130427 1 0 02:45 ? 00:00:00 /opt/IvorySQL-4.5/bin/postgres -D /opt/IvorySQL-4.5/data +ivorysql 130427 1 0 02:45 ? 
00:00:00 /usr/ivory-5/bin/postgres -D /usr/ivory-5/data ivorysql 130428 130427 0 02:45 ? 00:00:00 postgres: checkpointer ivorysql 130429 130427 0 02:45 ? 00:00:00 postgres: background writer ivorysql 130431 130427 0 02:45 ? 00:00:00 postgres: walwriter @@ -234,7 +213,7 @@ ivorysql 130445 130274 0 02:45 pts/1 00:00:00 grep --color=auto postgres Connect to IovrySQL via psql: ``` $ psql -d -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# @@ -258,8 +237,7 @@ No matter which method is used for the uninstallation, make sure the service has Run the following commands in turn and clean the residual folders: ``` -$ sudo dnf remove -y IvorySQL-4.5 -$ sudo rpm -e ivorysql-release-4.2-1.noarch +$ sudo dnf remove -y ivorysql5-5.0 ``` === Uninstallation for docker installation @@ -268,15 +246,15 @@ Stop IvorySQL container and remove IvorySQL image: ``` $ docker stop ivorysql $ docker rm ivorysql -$ docker rmi ivorysql/ivorysql:4.5-ubi8 +$ docker rmi ivorysql/ivorysql:5.0-ubi8 ``` === Uninstallation for rpm installation Uninstall the rpms and clear the residual folders: ``` -$ sudo yum remove --disablerepo=* ivorysql4\* -$ sudo rm -rf IvorySQL-4.5 +$ sudo yum remove --disablerepo=* ivorysql5\* +$ sudo rm -rf IvorySQL-5.0 ``` === Uninstallation for source code installation @@ -285,13 +263,13 @@ Uninstall the database system, then clear the residual folders: ``` $ sudo make uninstall $ make clean -$ sudo rm -rf IvorySQL-4.5 +$ sudo rm -rf IvorySQL-5.0 ``` === Uninstallation for deb installation Uninstall the database system, then clear the residual folders: ``` -$ sudo dpkg -P IvorySQL-4.5 -$ sudo rm -rf IvorySQL-4.5 +$ sudo dpkg -P IvorySQL-5.0 +$ sudo rm -rf IvorySQL-5.0 ``` \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.2.adoc b/EN/modules/ROOT/pages/master/4.2.adoc index b7358ff..f0782ab 100644 --- a/EN/modules/ROOT/pages/master/4.2.adoc +++ b/EN/modules/ROOT/pages/master/4.2.adoc @@ -8,9 +8,9 @@ This chapter is a demo to show you how to build an IvorySQL cluster. Just take a == Primary node === Installing and start database -For quick database installation by yum, please refer to xref:v4.5/3.adoc#quick-installation[Quick installation]。 +For quick database installation by yum, please refer to xref:v5.0/3.adoc#quick-installation[Quick installation]。 -For more installation options, please refer to xref:v4.5/6.adoc#Installation[Installation]。 +For more installation options, please refer to xref:v5.0/6.adoc#Installation[Installation]。 [NOTE] The master node database needs to be installed and **started**. @@ -55,9 +55,9 @@ $ pg_ctl restart == Standby node === Installing database -For quick database installation by yum, please refer to xref:v4.5/3.adoc#quick-installation[Quick installation]。 +For quick database installation by yum, please refer to xref:v5.0/3.adoc#quick-installation[Quick installation]。 -For more installation options, please refer to xref:v4.5/6.adoc#Installation[Installation]。 +For more installation options, please refer to xref:v5.0/6.adoc#Installation[Installation]。 [NOTE] The standby node database only needs to be installed and **not started**. 
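+To make sure the standby really is not running before the base backup is taken, a quick process check (the same style used to verify the primary) is sufficient:
+```
+$ ps -ef | grep postgres | grep -v grep
+```
+No output means no IvorySQL server process is running on the standby node.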
@@ -71,7 +71,7 @@ $ sudo systemctl stop firewalld === Building streaming replication Run below command on the standby node to take base backups of the primary, that is, to build a streaming replication: ``` -$ sudo pg_basebackup -F p -P -X fetch -R -h -p -U ivorysql -D /usr/local/ivorysql/ivorysql-4/data +$ sudo pg_basebackup -F p -P -X fetch -R -h -p -U ivorysql -D /usr/local/ivorysql/ivorysql-5/data ``` - Specifies the host name of the machine on which the server is running; - Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults is 5432; @@ -84,9 +84,9 @@ For more options, refer to pg_basebackup --help. Add below contents in ~/.bash_profile file: ``` -PATH=/usr/local/ivorysql/ivorysql-4/bin:$PATH +PATH=/usr/local/ivorysql/ivorysql-5/bin:$PATH export PATH -PGDATA=/usr/local/ivorysql/ivorysql-4/data +PGDATA=/usr/local/ivorysql/ivorysql-5/data export PGDATA ``` Source to make it effective: @@ -96,7 +96,7 @@ $ source ~/.bash_profile === Starting IvorySQL sevice ``` -$ sudo pg_ctl -D /usr/local/ivorysql/ivorysql-4/data start +$ sudo pg_ctl -D /usr/local/ivorysql/ivorysql-5/data start ``` == Experience the IvorySQL cluster @@ -117,7 +117,7 @@ ivorysql 6567 6139 0 21:54 ? 00:00:00 postgres: walreceiver streaming On the primary node, connect to IvorySQL and show the status: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# select * from pg_stat_replication; @@ -141,7 +141,7 @@ All writing operations are performed on the primary node, while reading can be o Below is an example. Create a new database test on primary and query: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# create database test; @@ -161,7 +161,7 @@ ivorysql=# \l Query on the standby node: ``` $ psql -d ivorysql -psql (17.5) +psql (18.0) Type "help" for help. ivorysql=# \l diff --git a/EN/modules/ROOT/pages/master/4.3.adoc b/EN/modules/ROOT/pages/master/4.3.adoc index 5ef2e35..4676221 100644 --- a/EN/modules/ROOT/pages/master/4.3.adoc +++ b/EN/modules/ROOT/pages/master/4.3.adoc @@ -2090,10 +2090,6 @@ Named and mixed call notations currently cannot be used when calling an aggregat == Oracle Compatible Features -**Refer to:** - -- [GUC Variables](https://docs.ivorysql.org/en/ivorysql-doc/v4.5/v4.5/15) - === Changing tables #### syntax @@ -2108,10 +2104,10 @@ action: | DROP [ COLUMN ] ( column_name [, ... ] ) add_coldef: - cloumn_name data_type + column_name data_type modify_coldef: - cloumn_name data_type alter_using + column_name data_type alter_using alter_using: USING expression @@ -2120,7 +2116,7 @@ alter_using: #### **parameters** `name` Table name. -`cloumn_name` Column name. +`column_name` Column name. `data_type` Column type. `expression` The value expression. `ADD keyword` Adds a column to the table, either one or more columns. @@ -3582,7 +3578,7 @@ DETAIL: Key (b)=(11) already exists. When appending a new table to a partitioned table with a globally unique index, the system performs a duplicate check on all existing partitions. If a duplicate item is found in an existing partition that matches a tuple in the appended table, an error is raised and the append fails. -Appending requires a sharedlock on all existing partitions. If one of the partitions is doing a concurrent INSERT, the append will wait for it to complete first. This can be improved in a future release +Appending requires a SHARE LOCK on all existing partitions. 
If one of the partitions is doing a concurrent INSERT, the append will wait for it to complete first. This can be improved in a future release #### Example diff --git a/EN/modules/ROOT/pages/master/4.4.adoc b/EN/modules/ROOT/pages/master/4.4.adoc index 7219a12..f3a6392 100644 --- a/EN/modules/ROOT/pages/master/4.4.adoc +++ b/EN/modules/ROOT/pages/master/4.4.adoc @@ -67,7 +67,7 @@ Or stop the background service by other means. 8.Finally, using the new version of the psql command to restore the data. - /usr/local/pqsql/bin/psql -d postgres -f outputfile + /usr/local/pgsql/bin/psql -d postgres -f outputfile To reduce downtime, you can install the new version of IvorySQL to another directory, while starting the service using a different port. Then perform both the export and import of the database. @@ -77,7 +77,62 @@ When the above operation is executed, the old and new versions of the backend se === Upgrade with the pg_upgrade utility -The pg_upgrade utility supports in-place upgrades of IvorySQL across versions. The upgrade can be performed in minutes, especially when using the --link mode. It requires similar steps as pg_dumpall above, such as starting/stopping the server and running initdb.pg_upgrade https://www.postgresql.org/docs/current/pgupgrade.html[doc] outlines the steps required. +The pg_upgrade tool is a built-in cross-version upgrade utility in PostgreSQL that enables in-place database upgrades without requiring export and import operations. Since IvorySQL is derived from PostgreSQL, it can also use the pg_upgrade tool for major version upgrades. Below is a brief introduction on how to use pg_upgrade to upgrade IvorySQL to the latest 5.0 version on the CentOS8 platform. + +pg_upgrade provides a pre-upgrade compatibility check (using the -c or --check option), which can identify issues such as plugin or data type incompatibilities. If the --link option is specified, the new version service can directly use the existing database files without copying, typically completing the upgrade in just a few minutes. + +Commonly used parameters include: + +* -b bindir, --old-bindir=bindir: The directory of the old IvorySQL executable files; +* -B bindir, --new-bindir=bindir: The directory of the new IvorySQL executable files; +* -d configdir, --old-datadir=configdir: The data directory of the old version; +* -D configdir, --new-datadir=configdir: The data directory of the new version; +* -c, --check: Only check upgrade compatibility without modifying any data; +* -k, --link: Upgrade using hard link method; + +Upgrade preparation: + +First, stop the old version of the IvorySQL 4.6 database: +``` +/usr/ivory-4/bin/pg_ctl -D ./data stop +``` +Then install the new version of the IvorySQL 5.0 database: +``` +dnf install -y ivorysql5-5.0 +``` +Initialize the new IvorySQL 5.0 data directory: +``` +/usr/ivory-5/bin/initdb -D ./data +``` +Check version compatibility: +``` +/usr/ivory-5/bin/pg_upgrade --old-datadir=/home/ivorysql/test/4.6/data --new-datadir=/home/ivorysql/test/5.0/data --old-bindir=/usr/ivory-4/bin/ --new-bindir=/usr/ivory-5/bin/ --check +``` +The appearance of "Clusters are compatible" at the end indicates that there are no compatibility issues between the two versions of data, and the upgrade can proceed. 
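+As an aside, the hard-link mode mentioned earlier can be requested by appending -k/--link to the upgrade command; the sketch below simply reuses the example directories from this section:
+```
+/usr/ivory-5/bin/pg_upgrade --old-datadir=/home/ivorysql/test/4.6/data --new-datadir=/home/ivorysql/test/5.0/data --old-bindir=/usr/ivory-4/bin/ --new-bindir=/usr/ivory-5/bin/ --link
+```
+Note that once the new cluster has been started after a --link upgrade, the old cluster can no longer be started safely, so keep a backup if you may need to roll back.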
+ +Official upgrade: +``` +/usr/ivory-5/bin/pg_upgrade --old-datadir=/home/ivorysql/test/4.6/data --new-datadir=/home/ivorysql/test/5.0/data --old-bindir=/usr/ivory-4/bin/ --new-bindir=/usr/ivory-5/bin/ +``` +Seeing "Upgrade Complete" indicates that the upgrade has been successfully completed. + +Update statistics: + +pg_upgrade creates new system tables and reuses old data for the upgrade. However, statistics are not migrated during the upgrade process. Therefore, before enabling the new version, you should first recollect statistics to avoid incorrect query plans caused by missing statistics. + +Start the new version of the database. +``` +/usr/ivory-5/bin/pg_ctl -D ./data -l logfile start +``` +Manually run the vacuumdb command +``` +vacuumdb --all --analyze-in-stages -h 127.0.0.1 -p 1521 +``` +Cleanup after upgrade +``` +rm -rf /home/ivorysql/test/4.6/data +``` +pg_upgrade https://www.postgresql.org/docs/current/pgupgrade.html[Document] outlines the steps required above. === Upgrade data by copying @@ -87,7 +142,7 @@ This upgrade method can be used with built-in logical replication tools and exte == Managing IvorySQL Versions -IvorySQL is based on PostgreSQL and is updated at the same frequency as PostgreSQL, with one major release per year and one minor release per quarter. IvorySQL 4.5 is based on PostgreSQL 17.5, and all versions of IvorySQL are backward compatible.The relevant version features can be viewed by looking at https://www.ivorysql.org/en/releases-page/[Official Website]。 +IvorySQL is based on PostgreSQL and is updated at the same frequency as PostgreSQL, with one major release per year and one minor release per quarter. IvorySQL 5.0 is based on PostgreSQL 18.0, and all versions of IvorySQL are backward compatible. The relevant version features can be viewed by looking at https://www.ivorysql.org/en/releases-page/[Official Website]。 == Managing IvorySQL database access @@ -898,7 +953,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ - Sort (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1) + Sort (cost=717.34..718.09 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1) Sort Key: t1.fivethous Sort Method: quicksort Memory: 77kB -> Hash Join (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1) diff --git a/EN/modules/ROOT/pages/master/4.5.adoc b/EN/modules/ROOT/pages/master/4.5.adoc index 0a5ee55..066e32f 100644 --- a/EN/modules/ROOT/pages/master/4.5.adoc +++ b/EN/modules/ROOT/pages/master/4.5.adoc @@ -94,7 +94,8 @@ Set environment variables; Load environment variables; Because ORACLE must be de ``` export ORACLE_HOME=/usr/lib/oracle/18.3/client64 -# tar -zxvf DBD-Oracle-1.76.tar.gz # source /home/postgres/.bashrc +# tar -zxvf DBD-Oracle-1.76.tar.gz +# source /home/postgres/.bashrc # cd DBD-Oracle-1.76 # perl Makefile.PL # make @@ -459,7 +460,7 @@ $ createdb orcl $ psql -psql (17.5) +psql (18.0) Type "help" for help. @@ -488,7 +489,7 @@ Create SH, HR, SCOTT users: ``` $ psql orcl -psql (17.5) +psql (18.0) Type "help" for help.
diff --git a/EN/modules/ROOT/pages/master/4.6.1.adoc b/EN/modules/ROOT/pages/master/4.6.1.adoc new file mode 100644 index 0000000..de4757c --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.6.1.adoc @@ -0,0 +1,241 @@ + +:sectnums: +:sectnumlevels: 5 + += Deploying single-node containers and high-availability clusters on k8s + +== Single-node container +On the master node of the k8s cluster, create a namespace named ivorysql. +``` +[root@k8s-master ~]# kubectl create ns ivorysql +``` + +Download the latest docker_library code. +``` +[root@k8s-master ~]# git clone https://github.com/IvorySQL/docker_library.git +``` + +Enter the single-node directory +``` +[root@k8s-master ~]# cd docker_library/k8s-cluster/single +[root@k8s-master single]# vim statefulset.yaml #Update the PVC information and database password in the StatefulSet to match your actual environment. +``` + +Use statefulset.yaml to create a single-node pod. +``` +[root@k8s-master single]# kubectl apply -f statefulset.yaml +service/ivorysql-svc created +statefulset.apps/ivorysql created +``` + +Wait for the single-node pod to be successfully created. +``` +[root@k8s-master single]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-0 0/1 ContainerCreating 0 47s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-svc NodePort 10.108.178.236 5432:32106/TCP,1521:31887/TCP 47s + +NAME READY AGE +statefulset.apps/ivorysql 0/1 47s +[root@k8s-master single]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-0 1/1 Running 0 2m39s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-svc NodePort 10.108.178.236 5432:32106/TCP,1521:31887/TCP 2m39s + +NAME READY AGE +statefulset.apps/ivorysql 1/1 2m39s +``` + +Connect to IvorySQL via its PostgreSQL port using the psql +``` +[root@k8s-master single]# psql -U ivorysql -p 32106 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# exit +``` + +Connect to IvorySQL's Oracle-compatible port using psql. +``` +[root@k8s-master single]# psql -U ivorysql -p 31887 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) +``` + +Uninstall Single-node container +``` +[root@k8s-master single]# kubectl delete -f statefulset.yaml +``` + +== High Availability Cluster + +Access the master node of the k8s cluster and create a namespace named ivorysql. +``` +[root@k8s-master ~]# kubectl create ns ivorysql +``` + +Download the latest docker_library code. +``` +[root@k8s-master ~]# git clone https://github.com/IvorySQL/docker_library.git +``` + +Enter the high-availability cluster directory. 
+``` +[root@k8s-master ~]# cd docker_library/k8s-cluster/ha-cluster/helm_charts +[root@k8s-master single]# vim values.yaml #Adjust the PVC settings, cluster size, and other configurations in values.yaml according to your environment. For the database password, check templates/secret.yaml and modify it as needed. +``` + +Deploy the high-availability cluster using https://helm.sh/docs/intro/install/[Helm] commands. +``` +[root@k8s-master helm_charts]# helm install ivorysql-ha-cluster -n ivorysql . +NAME: ivorysql-ha-cluster +LAST DEPLOYED: Wed Sep 10 09:45:36 2025 +NAMESPACE: ivorysql +STATUS: deployed +REVISION: 1 +TEST SUITE: None +[root@k8s-master helm_charts]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-patroni-hac-0 1/1 Running 0 42s +pod/ivorysql-patroni-hac-1 0/1 Running 0 18s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-patroni-hac NodePort 10.96.119.203 5432:32391/TCP,1521:32477/TCP 42s +service/ivorysql-patroni-hac-config ClusterIP None 42s +service/ivorysql-patroni-hac-pods ClusterIP None 42s +service/ivorysql-patroni-hac-repl NodePort 10.100.122.0 5432:30111/TCP,1521:32654/TCP 42s + +NAME READY AGE +statefulset.apps/ivorysql-patroni-hac 1/3 42s +``` + +Wait until all pods are running successfully, indicating the cluster deployment is complete. +``` +[root@k8s-master helm_charts]# kubectl get all -n ivorysql +NAME READY STATUS RESTARTS AGE +pod/ivorysql-patroni-hac-0 1/1 Running 0 88s +pod/ivorysql-patroni-hac-1 1/1 Running 0 64s +pod/ivorysql-patroni-hac-2 1/1 Running 0 41s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/ivorysql-patroni-hac NodePort 10.96.119.203 5432:32391/TCP,1521:32477/TCP 88s +service/ivorysql-patroni-hac-config ClusterIP None 88s +service/ivorysql-patroni-hac-pods ClusterIP None 88s +service/ivorysql-patroni-hac-repl NodePort 10.100.122.0 5432:30111/TCP,1521:32654/TCP 88s +NAME READY AGE +statefulset.apps/ivorysql-patroni-hac 3/3 88s +``` +Connect to the PostgreSQL and Oracle ports of the cluster's primary node using psql. +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 32391 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + f +(1 row) + +ivorysql=# exit +``` +``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 32477 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + f +(1 row) + +ivorysql=# +``` + +Use psql to connect to the PostgreSQL and Oracle ports of the cluster's standby node. 
+``` +[root@k8s-master helm_charts]# psql -U ivorysql -p 30111 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + t +(1 row) + +ivorysql=# exit + +[root@k8s-master helm_charts]# psql -U ivorysql -p 32654 -h 127.0.0.1 -d ivorysql +Password for user ivorysql: + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# SELECT pg_is_in_recovery(); + pg_is_in_recovery +------------------- + t +(1 row) + +ivorysql=# +``` + +Uninstall high-availability cluster +``` +[root@k8s-master helm_charts]# helm uninstall ivorysql-ha-cluster -n ivorysql +``` +Remove PVC +``` +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-0 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-1 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc ivyhac-config-ivorysql-patroni-hac-2 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-0 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-1 -n ivorysql +[root@k8s-master helm_charts]# kubectl delete pvc pgdata-ivorysql-patroni-hac-2 -n ivorysql +``` \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.6.2.adoc b/EN/modules/ROOT/pages/master/4.6.2.adoc new file mode 100644 index 0000000..bcd1adf --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.6.2.adoc @@ -0,0 +1,2988 @@ +:sectnums: +:sectnumlevels: 5 += Deploy IvorySQL with IvorySQL Operator + +== Operator Installation + +. Fork https://github.com/IvorySQL/ivory-operator[ivory-operator repository] and clone it to your host machine: ++ +[source,bash,subs="attributes+"] +---- +YOUR_GITHUB_UN="" +git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/ivory-operator.git" +cd ivory-operator +---- + +. Run the following commands: ++ +[source,bash] +---- +kubectl apply -k examples/kustomize/install/namespace +kubectl apply --server-side -k examples/kustomize/install/default +---- + +== Getting Started + +Throughout this tutorial, we will be building on the example provided in the `examples/kustomize/ivory`. + +When referring to a nested object within a YAML manifest, we will be using the `.` format similar to `kubectl explain`. For example, if we want to refer to the deepest element in this yaml file: + +[source,yaml] +---- +spec: + hippos: + appetite: huge +---- + +we would say `spec.hippos.appetite`. + +`kubectl explain` is your friend. You can use `kubectl explain ivorycluster` to introspect the `ivorycluster.ivory-operator.ivorysql.org` custom resource definition. + +== Create an Ivory Cluster + +=== Create + +Creating an Ivory cluster is pretty simple. Using the example in the `examples/kustomize/ivory` directory, all we have to do is run: + +[source,shell] +---- +kubectl apply -k examples/kustomize/ivory +---- + +and IVYO will create a simple Ivory cluster named `hippo` in the `ivory-operator` namespace. 
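+
+If you just want to confirm that the custom resource itself was created before digging into details, a quick `kubectl get` against the same custom resource name will list it. This is a minimal check; the columns shown depend on the CRD's printer columns:
+
+[source,shell]
+----
+kubectl -n ivory-operator get ivoryclusters.ivory-operator.ivorysql.org hippo
+----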
You can track the status of your Ivory cluster using `kubectl describe` on the `ivoryclusters.ivory-operator.ivorysql.org` custom resource: + +[source,shell] +---- +kubectl -n ivory-operator describe ivoryclusters.ivory-operator.ivorysql.org hippo +---- + +and you can track the state of the Ivory Pod using the following command: + +[source,shell] +---- +kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance +---- + +==== What Just Happened? + +IVYO created an Ivory cluster based on the information provided to it in the Kustomize manifests located in the `examples/kustomize/ivory` directory. Let's better understand what happened by inspecting the `examples/kustomize/ivory/ivory.yaml` file: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +When we ran the `kubectl apply` command earlier, what we did was create a `ivorycluster` custom resource in Kubernetes. IVYO detected that we added a new `ivorycluster` resource and started to create all the objects needed to run Ivory in Kubernetes! + +What else happened? IVYO read the value from `metadata.name` to provide the Ivory cluster with the name `hippo`. Additionally, IVYO knew which containers to use for Ivory and pgBackRest by looking at the values in `spec.image` and `spec.backups.pgbackrest.image` respectively. The value in `spec.postgresVersion` is important as it will help IVYO track which major version of Ivory you are using. + +IVYO knows how many Ivory instances to create through the `spec.instances` section of the manifest. While `name` is optional, we opted to give it the name `instance1`. We could have also created multiple replicas and instances during cluster initialization, but we will cover that more when we discuss how to https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[scale and create a HA Ivory cluster]. + +A very important piece of your `ivorycluster` custom resource is the `dataVolumeClaimSpec` section. This describes the storage that your Ivory instance will use. It is modeled after the https://kubernetes.io/docs/concepts/storage/persistent-volumes/[Persistent Volume Claim]. If you do not provide a `spec.instances.dataVolumeClaimSpec.storageClassName`, then the default storage class in your Kubernetes environment is used. + +As part of creating an Ivory cluster, we also specify information about our backup archive. IVYO uses https://pgbackrest.org/[pgBackRest], an open source backup and restore tool designed to handle terabyte-scale backups. As part of initializing our cluster, we can specify where we want our backups and archives (https://www.postgresql.org/docs/current/wal-intro.html[write-ahead logs or WAL]) stored. We will talk about this portion of the `ivorycluster` spec in greater depth in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md[disaster recovery] section of this tutorial, and also see how we can store backups in Amazon S3, Google GCS, and Azure Blob Storage. 
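+
+Since `dataVolumeClaimSpec` is modeled after a Persistent Volume Claim, selecting a non-default storage class is just a matter of adding `storageClassName`. Here is a minimal sketch; the class name `fast-ssd` is purely illustrative and must match a StorageClass that actually exists in your Kubernetes environment:
+
+[source,yaml]
+----
+spec:
+  instances:
+    - name: instance1
+      dataVolumeClaimSpec:
+        # Illustrative storage class; omit this field to use the cluster default.
+        storageClassName: fast-ssd
+        accessModes:
+        - "ReadWriteOnce"
+        resources:
+          requests:
+            storage: 1Gi
+----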
+ +=== Troubleshooting + +==== IvorySQL / pgBackRest Pods Stuck in `Pending` Phase + +The most common occurrence of this is due to PVCs not being bound. Ensure that you have set up your storage options correctly in any `volumeClaimSpec`. You can always update your settings and reapply your changes with `kubectl apply`. + +Also ensure that you have enough persistent volumes available: your Kubernetes administrator may need to provision more. + +If you are on OpenShift, you may need to set `spec.openshift` to `true`. + +=== Next Steps + +We're up and running -- now let's https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/connect-cluster.md[connect to our Ivory cluster]! + +== Connect to an Ivory Cluster + +It's one thing to https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/create-cluster.md[create an Ivory cluster]; it's another thing to connect to it. Let's explore how IVYO makes it possible to connect to an Ivory cluster! + +=== Background: Services, Secrets, and TLS + +IVYO creates a series of Kubernetes https://kubernetes.io/docs/concepts/services-networking/service/[Services] to provide stable endpoints for connecting to your Ivory databases. These endpoints make it easy to provide a consistent way for your application to maintain connectivity to your data. To inspect what services are available, you can run the following command: + +[source,shell] +---- +kubectl -n ivory-operator get svc --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +will yield something similar to: + +[source,shell] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha ClusterIP 10.103.73.92 5432/TCP 3h14m +hippo-ha-config ClusterIP None 3h14m +hippo-pods ClusterIP None 3h14m +hippo-primary ClusterIP None 5432/TCP 3h14m +hippo-replicas ClusterIP 10.98.110.215 5432/TCP 3h14m +---- + +You do not need to worry about most of these Services, as they are used to help manage the overall health of your Ivory cluster. For the purposes of connecting to your database, the Service of interest is called `hippo-primary`. Thanks to IVYO, you do not need to even worry about that, as that information is captured within a Secret! + +When your Ivory cluster is initialized, IVYO will bootstrap a database and Ivory user that your application can access. This information is stored in a Secret named with the pattern `-pguser-`. For our `hippo` cluster, this Secret is called `hippo-pguser-hippo`. This Secret contains the information you need to connect your application to your Ivory database: + +- `user`: The name of the user account. +- `password`: The password for the user account. +- `dbname`: The name of the database that the user has access to by default. +- `host`: The name of the host of the database. + This references the https://kubernetes.io/docs/concepts/services-networking/service/[Service] of the primary Ivory instance. +- `port`: The port that the database is listening on. +- `uri`: A https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING[PostgresSQL connection URI] + that provides all the information for logging into the Ivory database. +- `jdbc-uri`: A https://jdbc.postgresql.org/documentation/use/[PostgresSQL JDBC connection URI] that provides + all the information for logging into the Ivory database via the JDBC driver. + +All connections are over TLS. IVYO provides its own certificate authority (CA) to allow you to securely connect your applications to your Ivory clusters. 
This allows you to use the https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS[`verify-full` "SSL mode"] of Ivory, which provides eavesdropping protection and prevents MITM attacks. You can also choose to bring your own CA, which is described later in this tutorial in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md[Customize Cluster] section. + +==== Modifying Service Type, NodePort Value and Metadata + +By default, IVYO deploys Services with the `ClusterIP` Service type. Based on how you want to expose your database, +you may want to modify the Services to use a different +https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types[Service type] +and https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport[NodePort value]. + +You can modify the Services that IVYO manages from the following attributes: + +- `spec.service` - this manages the Service for connecting to an Ivory primary. +- `spec.userInterface.pgAdmin.service` - this manages the Service for connecting to the pgAdmin management tool. + +For example, say you want to set the Ivory primary to use a `NodePort` service, a specific `nodePort` value, and set +a specific annotation and label, you would add the following to your manifest: + +[source,yaml] +---- +spec: + service: + metadata: + annotations: + my-annotation: value1 + labels: + my-label: value2 + type: NodePort + nodePort: 32000 +---- + +For our `hippo` cluster, you would see the Service type and nodePort modification as well as the annotation and label. +For example: + +[source,shell] +---- +kubectl -n ivory-operator get svc --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +will yield something similar to: + +[source,shell] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha NodePort 10.105.57.191 5432:32000/TCP 48s +hippo-ha-config ClusterIP None 48s +hippo-pods ClusterIP None 48s +hippo-primary ClusterIP None 5432/TCP 48s +hippo-replicas ClusterIP 10.106.18.99 5432/TCP 48s +---- + +and the top of the output from running + +[source,shell] +---- +kubectl -n ivory-operator describe svc hippo-ha +---- + +will show our custom annotation and label have been added: + +[source,shell] +---- +Name: hippo-ha +Namespace: ivory-operator +Labels: my-label=value2 + ivory-operator.ivorysql.org/cluster=hippo + ivory-operator.ivorysql.org/patroni=hippo-ha +Annotations: my-annotation: value1 +---- + +Note that setting the `nodePort` value is not allowed when using the (default) `ClusterIP` type, and it must be in-range +and not otherwise in use or the operation will fail. Additionally, be aware that any annotations or labels provided here +will win in case of conflicts with any annotations or labels a user configures elsewhere. + +Finally, if you are exposing your Services externally and are relying on TLS +verification, you will need to use the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[custom TLS] +features of IVYO). + +=== Connect an Application + +For this tutorial, we are going to connect https://www.keycloak.org/[Keycloak], an open source +identity management application. Keycloak can be deployed on Kubernetes and is backed by an Ivory +database. 
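+
+The wiring pattern is the same for any application: instead of hard-coding credentials, the Deployment pulls its connection settings from the `hippo-pguser-hippo` Secret described above. Below is a minimal, illustrative sketch of that pattern; the environment variable names are placeholders and are not the actual Keycloak settings:
+
+[source,yaml]
+----
+# Illustrative container env block: each value is read from the pguser Secret.
+env:
+- name: DB_HOST
+  valueFrom: { secretKeyRef: { name: hippo-pguser-hippo, key: host } }
+- name: DB_PORT
+  valueFrom: { secretKeyRef: { name: hippo-pguser-hippo, key: port } }
+- name: DB_NAME
+  valueFrom: { secretKeyRef: { name: hippo-pguser-hippo, key: dbname } }
+- name: DB_USER
+  valueFrom: { secretKeyRef: { name: hippo-pguser-hippo, key: user } }
+- name: DB_PASSWORD
+  valueFrom: { secretKeyRef: { name: hippo-pguser-hippo, key: password } }
+----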
We provide an example of deploying Keycloak andan ivorycluster, the manifest below deploys it using our `hippo` cluster that is already running: + +[source,shell] +---- +kubectl apply --filename=- <}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +Apply these updates to your Ivory cluster with the following command: + +[source,shell] +---- +kubectl apply -k examples/kustomize/ivory +---- + +Within moment, you should see a new Ivory instance initializing! You can see all of your Ivory Pods for the `hippo` cluster by running the following command: + +[source,shell] +---- +kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance-set +---- + +Let's test our high availability set up. + +=== Testing Your HA Cluster + +An important part of building a resilient Ivory environment is testing its resiliency, so let's run a few tests to see how IVYO performs under pressure! + +==== Test #1: Remove a Service + +Let's try removing the primary Service that our application is connected to. This test does not actually require a HA Ivory cluster, but it will demonstrate IVYO's ability to react to environmental changes and heal things to ensure your applications can stay up. + +Recall in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/connect-cluster.md[connecting a Ivory cluster] that we observed the Services that IVYO creates, e.g.: + +[source,shell] +---- +kubectl -n ivory-operator get svc \ + --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +yields something similar to: + +[source,shell] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha ClusterIP 10.103.73.92 5432/TCP 4h8m +hippo-ha-config ClusterIP None 4h8m +hippo-pods ClusterIP None 4h8m +hippo-primary ClusterIP None 5432/TCP 4h8m +hippo-replicas ClusterIP 10.98.110.215 5432/TCP 4h8m +---- + +We also mentioned that the application is connected to the `hippo-primary` Service. What happens if we were to delete this Service? + +[source,shell] +---- +kubectl -n ivory-operator delete svc hippo-primary +---- + +This would seem like it could create a downtime scenario, but run the above selector again: + +[source,shell] +---- +kubectl -n ivory-operator get svc \ + --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +You should see something similar to: + +[source,shell] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hippo-ha ClusterIP 10.103.73.92 5432/TCP 4h8m +hippo-ha-config ClusterIP None 4h8m +hippo-pods ClusterIP None 4h8m +hippo-primary ClusterIP None 5432/TCP 3s +hippo-replicas ClusterIP 10.98.110.215 5432/TCP 4h8m +---- + +Wow -- IVYO detected that the primary Service was deleted and it recreated it! Based on how your application connects to Ivory, it may not have even noticed that this event took place! + +Now let's try a more extreme downtime event. + +==== Test #2: Remove the Primary StatefulSet + +https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSets] are a Kubernetes object that provide helpful mechanisms for managing Pods that interface with stateful applications, such as databases. 
They provide a stable mechanism for managing Pods to help ensure data is retrievable in a predictable way. + +What happens if we remove the StatefulSet that is pointed to the Pod that represents the Ivory primary? First, let's determine which Pod is the primary. We'll store it in an environmental variable for convenience. + +[source,shell] +---- +PRIMARY_POD=$(kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/role=master \ + -o jsonpath='{.items[*].metadata.labels.ivory-operator\.ivorysql\.org/instance}') +---- + +Inspect the environmental variable to see which Pod is the current primary: + +[source,shell] +---- +echo $PRIMARY_POD +---- + +should yield something similar to: + +[source,shell] +---- +hippo-instance1-zj5s +---- + +We can use the value above to delete the StatefulSet associated with the current Ivory primary instance: + +[source,shell] +---- +kubectl delete sts -n ivory-operator "${PRIMARY_POD}" +---- + +Let's see what happens. Try getting all of the StatefulSets for the Ivory instances in the `hippo` cluster: + +[source,shell] +---- +kubectl get sts -n ivory-operator \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance +---- + +You should see something similar to: + +[source,shell] +---- +NAME READY AGE +hippo-instance1-6kbw 1/1 15m +hippo-instance1-zj5s 0/1 1s +---- + +IVYO recreated the StatefulSet that was deleted! After this "catastrophic" event, IVYO proceeds to heal the Ivory instance so it can rejoin the cluster. We cover the high availability process in greater depth later in the documentation. + +What about the other instance? We can see that it became the new primary though the following command: + +[source,shell] +---- +kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/role=master \ + -o jsonpath='{.items[*].metadata.labels.ivory-operator\.ivorysql\.org/instance}' +---- + +which should yield something similar to: + +[source,shell] +---- +hippo-instance1-6kbw +---- + +You can test that the failover successfully occurred in a few ways. You can connect to the example Keycloak application that we https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/connect-cluster.md[deployed in the previous section]. Based on Keycloak's connection retry logic, you may need to wait a moment for it to reconnect, but you will see it connected and resume being able to read and write data. You can also connect to the Ivory instance directly and execute the following command: + +[source,shell] +---- +SELECT NOT pg_catalog.pg_is_in_recovery() is_primary; +---- + +If it returns `true` (or `t`), then the Ivory instance is a primary! + +What if IVYO was down during the downtime event? Failover would still occur: the Ivory HA system works independently of IVYO and can maintain its own uptime. IVYO will still need to assist with some of the healing aspects, but your application will still maintain read/write connectivity to your Ivory cluster! + +=== Synchronous Replication + +IvorySQL supports synchronous replication, which is a replication mode designed to limit the risk of transaction loss. Synchronous replication waits for a transaction to be written to at least one additional server before it considers the transaction to be committed. 
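+
+Once synchronous mode is enabled (the spec change is shown just below), you can verify it from the primary by querying `pg_stat_replication`; with synchronous replication in effect, at least one standby should report a `sync_state` of `sync` (or `quorum`, depending on configuration). A quick sketch:
+
+[source,sql]
+----
+-- Run on the primary: lists each standby and whether it is replicating synchronously.
+SELECT application_name, state, sync_state FROM pg_stat_replication;
+----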
For more information on synchronous replication, please read about IVYO's https://github.com/CrunchyData/postgres-operator/blob/master/docs/content/architecture/high-availability.md#synchronous-replication-guarding-against-transactions-loss[high availability architecture] + +To add synchronous replication to your Ivory cluster, you can add the following to your spec: + +[source,yaml] +---- +spec: + patroni: + dynamicConfiguration: + synchronous_mode: true +---- + +While PostgreSQL defaults https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT[`synchronous_commit`] to `on`, you may also want to explicitly set it, in which case the above block becomes: + +[source,yaml] +---- +spec: + patroni: + dynamicConfiguration: + synchronous_mode: true + postgresql: + parameters: + synchronous_commit: "on" +---- + +Note that Patroni, which manages many aspects of the cluster's availability, will favor availability over synchronicity. This means that if a synchronous replica goes down, Patroni will allow for asynchronous replication to continue as well as writes to the primary. However, if you want to disable all writing if there are no synchronous replicas available, you would have to enable `synchronous_mode_strict`, i.e.: + +[source,yaml] +---- +spec: + patroni: + dynamicConfiguration: + synchronous_mode: true + synchronous_mode_strict: true +---- + +=== Affinity + +https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Kubernetes affinity] rules, which include Pod anti-affinity and Node affinity, can help you to define where you want your workloads to reside. Pod anti-affinity is important for high availability: when used correctly, it ensures that your Ivory instances are distributed amongst different Nodes. Node affinity can be used to assign instances to specific Nodes, e.g. to utilize hardware that's optimized for databases. + +==== Understanding Pod Labels + +IVYO sets up several labels for Ivory cluster management that can be used for Pod anti-affinity or affinity rules in general. These include: + +- `ivory-operator.ivorysql.org/cluster`: This is assigned to all managed Pods in a Ivory cluster. The value of this label is the name of your Ivory cluster, in this case: `hippo`. +- `ivory-operator.ivorysql.org/instance-set`: This is assigned to all Ivory instances within a group of `spec.instances`. In the example above, the value of this label is `instance1`. If you do not assign a label, the value is automatically set by IVYO using a `NN` format, e.g. `00`. +- `ivory-operator.ivorysql.org/instance`: This is a unique label assigned to each Ivory instance containing the name of the Ivory instance. + +Let's look at how we can set up affinity rules for our Ivory cluster to help improve high availability. + +==== Pod Anti-affinity + +Kubernetes has two types of Pod anti-affinity: + +- Preferred: With preferred (`preferredDuringSchedulingIgnoredDuringExecution`) Pod anti-affinity, Kubernetes will make a best effort to schedule Pods matching the anti-affinity rules to different Nodes. However, if it is not possible to do so, then Kubernetes may schedule one or more Pods to the same Node. +- Required: With required (`requiredDuringSchedulingIgnoredDuringExecution`) Pod anti-affinity, Kubernetes mandates that each Pod matching the anti-affinity rules **must** be scheduled to different Nodes. However, a Pod may not be scheduled if Kubernetes cannot find a Node that does not contain a Pod matching the rules. 
+ +There is a trade-off with these two types of pod anti-affinity: while "required" anti-affinity will ensure that all the matching Pods are scheduled on different Nodes, if Kubernetes cannot find an available Node, your Ivory instance may not be scheduled. Likewise, while "preferred" anti-affinity will make a best effort to schedule your Pods on different Nodes, Kubernetes may compromise and schedule more than one Ivory instance of the same cluster on the same Node. + +By understanding these trade-offs, the makeup of your Kubernetes cluster, and your requirements, you can choose the method that makes the most sense for your Ivory deployment. We'll show examples of both methods below! + +===== Using Preferred Pod Anti-Affinity + +First, let's deploy our Ivory cluster with preferred Pod anti-affinity. Note that if you have a single-node Kubernetes cluster, you will not see your Ivory instances deployed to different nodes. However, your Ivory instances _will_ be deployed. + +We can set up our HA Ivory cluster with preferred Pod anti-affinity like so: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: hippo + ivory-operator.ivorysql.org/instance-set: instance1 + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +Apply those changes in your Kubernetes cluster. + +Let's take a closer look at this section: + +[source,yaml] +---- +affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: hippo + ivory-operator.ivorysql.org/instance-set: instance1 +---- + +`spec.instances.affinity.podAntiAffinity` follows the standard Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Pod anti-affinity spec]. The values for the `matchLabels` are derived from what we described in the previous section: `ivory-operator.ivorysql.org/cluster` is set to our cluster name of `hippo`, and `ivory-operator.ivorysql.org/instance-set` is set to the instance set name of `instance1`. We choose a `topologyKey` of `kubernetes.io/hostname`, which is standard in Kubernetes clusters. + +Preferred Pod anti-affinity will perform a best effort to schedule your Ivory Pods to different nodes. Let's see how you can require your Ivory Pods to be scheduled to different nodes. + +===== Using Required Pod Anti-Affinity + +Required Pod anti-affinity forces Kubernetes to schedule your Ivory Pods to different Nodes. Note that if Kubernetes is unable to schedule all Pods to different Nodes, some of your Ivory instances may become unavailable. + +Using the previous example, let's indicate to Kubernetes that we want to use required Pod anti-affinity for our Ivory clusters: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: hippo + ivory-operator.ivorysql.org/instance-set: instance1 + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +Apply those changes in your Kubernetes cluster. + +If you are in a single-node Kubernetes cluster, you will notice that not all of your Ivory instance Pods will be scheduled. This is due to the `requiredDuringSchedulingIgnoredDuringExecution` rule. However, if you have enough Nodes available, you will see the Ivory instance Pods scheduled to different Nodes: + +[source,shell] +---- +kubectl get pods -n ivory-operator -o wide \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance +---- + +==== Node Affinity + +Node affinity can be used to assign your Ivory instances to Nodes with specific hardware or to guarantee an Ivory instance resides in a specific zone. Node affinity can be set within the `spec.instances.affinity.nodeAffinity` attribute, following the standard Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[node affinity spec]. + +Let's see an example with required Node affinity. Let's say we have a set of Nodes that are reserved for database usage that have a label `workload-role=db`. We can create an Ivory cluster with a required Node affinity rule to schedule all of the databases to those Nodes using the following configuration: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: workload-role + operator: In + values: + - db + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +=== Pod Topology Spread Constraints + +In addition to affinity and anti-affinity settings, https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/[Kubernetes Pod Topology Spread Constraints] can also help you to define where you want your workloads to reside. However, while PodAffinity allows any number of Pods to be added to a qualifying topology domain, and PodAntiAffinity allows only one Pod to be scheduled into a single topology domain, topology spread constraints allow you to distribute Pods across different topology domains with a finer level of control.
+ +==== API Field Configuration + +The spread constraint https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods[API fields] can be configured for instance, PgBouncer and pgBackRest repo host pods. The basic configuration is as follows: + +[source,yaml] +---- + topologySpreadConstraints: + - maxSkew: + topologyKey: + whenUnsatisfiable: + labelSelector: +---- + +where "maxSkew" describes the maximum degree to which Pods can be unevenly distributed, "topologyKey" is the key that defines a topology in the Nodes' Labels, "whenUnsatisfiable" specifies what action should be taken when "maxSkew" can't be satisfied, and "labelSelector" is used to find matching Pods. + +==== Example Spread Constraints + +To help illustrate how you might use this with your cluster, we can review examples for configuring spread constraints on our Instance and pgBackRest repo host Pods. For this example, assume we have a three node Kubernetes cluster where the first node is labeled with `my-node-label=one`, the second node is labeled with `my-node-label=two` and the final node is labeled `my-node-label=three`. The label key `my-node-label` will function as our `topologyKey`. Note all three nodes in our examples will be schedulable, so a Pod could live on any of the three Nodes. + +===== Instance Pod Spread Constraints + +To begin, we can set our topology spread constraints on our cluster Instance Pods. Given this configuration + +[source,yaml] +---- + instances: + - name: instance1 + replicas: 5 + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/instance-set: instance1 +---- + +we will expect 5 Instance pods to be created. Each of these Pods will have the standard `ivory-operator.ivorysql.org/instance-set: instance1` Label set, so each Pod will be properly counted when determining the `maxSkew`. Since we have 3 nodes with a `maxSkew` of 1 and we've set `whenUnsatisfiable` to `DoNotSchedule`, we should see 2 Pods on 2 of the nodes and 1 Pod on the remaining Node, thus ensuring our Pods are distributed as evenly as possible. + +===== pgBackRest Repo Pod Spread Constraints + +We can also set topology spread constraints on our cluster's pgBackRest repo host pod. While we normally will only have a single pod per cluster, we could use a more generic label to add a preference that repo host Pods from different clusters are distributed among our Nodes. For example, by setting our `matchLabel` value to `ivory-operator.ivorysql.org/pgbackrest: ""` and our `whenUnsatisfiable` value to `ScheduleAnyway`, we will allow our repo host Pods to be scheduled no matter what Nodes may be available, but attempt to minimize skew as much as possible. 
+ +[source,yaml] +---- + repoHost: + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/pgbackrest: "" +---- + +===== Putting it All Together + +Now that each of our Pods has our desired Topology Spread Constraints defined, let's put together a complete cluster definition: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 5 + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/instance-set: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1G + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repoHost: + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/pgbackrest: "" + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1G +---- + +You can then apply those changes in your Kubernetes cluster. + +Once your cluster finishes deploying, you can check that your Pods are assigned to the correct Nodes: + +[source,shell] +---- +kubectl get pods -n ivory-operator -o wide --selector=ivory-operator.ivorysql.org/cluster=hippo +---- + +=== Next Steps + +We've now seen how IVYO helps your application stay "always on" with your Ivory database. Now let's explore how IVYO can minimize or eliminate downtime for operations that would normally cause that, such as https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/resize-cluster.md[resizing your Ivory cluster]. + +== Resize an Ivory Cluster + +You did it -- the application is a success! Traffic is booming, so much so that you need to add more resources to your Ivory cluster. However, you're worried that any resize operation may cause downtime and create a poor experience for your end users. + +This is where IVYO comes in: IVYO will help orchestrate rolling out any potentially disruptive changes to your cluster to minimize or eliminate downtime for your application. To do so, we will assume that you have https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[deployed a high availability Ivory cluster] as described in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[previous section]. + +Let's dive in. + +=== Resize Memory and CPU + +Memory and CPU resources are an important component for vertically scaling your Ivory cluster. +Coupled with https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md[tweaks to your Ivory configuration file], +allocating more memory and CPU to your cluster can help it to perform better under load. + +It's important for instances in the same high availability set to have the same resources. +IVYO lets you adjust CPU and memory within the `resources` sections of the `ivoryclusters.ivory-operator.ivorysql.org` custom resource. 
These include: + +- `spec.instances.resources` section, which sets the resource values for the IvorySQL container, + as well as any init containers in the associated pod and containers created by the `pgDataVolume` and `pgWALVolume` data migration jobs. +- `spec.instances.sidecars.replicaCertCopy.resources` section, which sets the resources for the `replica-cert-copy` sidecar container. +- `spec.backups.pgbackrest.repoHost.resources` section, which sets the resources for the pgBackRest repo host container, + as well as any init containers in the associated pod and containers created by the `pgBackRestVolume` data migration job. +- `spec.backups.pgbackrest.sidecars.pgbackrest.resources` section, which sets the resources for the `pgbackrest` sidecar container. +- `spec.backups.pgbackrest.sidecars.pgbackrestConfig.resources` section, which sets the resources for the `pgbackrest-config` sidecar container. +- `spec.backups.pgbackrest.jobs.resources` section, which sets the resources for any pgBackRest backup job. +- `spec.backups.pgbackrest.restore.resources` section, which sets the resources for manual pgBackRest restore jobs. +- `spec.dataSource.ivorycluster.resources` section, which sets the resources for pgBackRest restore jobs created during the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/clone-cluster.md[cloning] process. + +The layout of these `resources` sections should be familiar: they follow the same pattern as the standard Kubernetes structure for setting https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[container resources]. Note that these settings also allow for the configuration of https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/[QoS classes]. + +For example, using the `spec.instances.resources` section, let's say we want to update our `hippo` Ivory cluster so that each instance has a limit of `2.0` CPUs and `4Gi` of memory. We can make the following changes to the manifest: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +In particular, we added the following to `spec.instances`: + +[source,yaml] +---- +resources: + limits: + cpu: 2.0 + memory: 4Gi +---- + +Apply these updates to your Ivory cluster with the following command: + +[source,shell] +---- +kubectl apply -k examples/kustomize/ivory +---- + +Now, let's watch how the rollout happens: + +[source,shell] +---- +watch "kubectl -n ivory-operator get pods \ + --selector=ivory-operator.ivorysql.org/cluster=hippo,ivory-operator.ivorysql.org/instance \ + -o=jsonpath='{range .items[*]}{.metadata.name}{\"\t\"}{.metadata.labels.ivory-operator\.ivorysql\.org/role}{\"\t\"}{.status.phase}{\"\t\"}{.spec.containers[].resources.limits}{\"\n\"}{end}'" +---- + +Observe how each Pod is terminated one-at-a-time. This is part of a "rolling update". Because updating the resources of a Pod is a destructive action, IVYO first applies the CPU and memory changes to the replicas. 
+ IVYO ensures that the changes are successfully applied to a replica instance before moving on to the next replica. + +Once all of the changes are applied, IVYO will perform a "controlled switchover": it will promote a replica to become a primary, and apply the changes to the final Ivory instance. + +By rolling out the changes in this way, IVYO ensures there is minimal to zero disruption to your application: you are able to successfully roll out updates and your users may not even notice! + +=== Resize PVC + +Your application is a success! Your data continues to grow, and it's becoming apparent that you need more disk. That's great: you can resize your PVC directly on your `ivoryclusters.ivory-operator.ivorysql.org` custom resource with minimal to zero downtime. + +PVC resizing, also known as https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[volume expansion], is a function of your storage class: it must support volume resizing. Additionally, PVCs can only be **sized up**: you cannot shrink the size of a PVC. + +You can adjust PVC sizes on all of the managed storage instances in an Ivory instance that are using Kubernetes storage. These include: + +- `spec.instances.dataVolumeClaimSpec.resources.requests.storage`: The Ivory data directory (aka your database). +- `spec.backups.pgbackrest.repos.volume.volumeClaimSpec.resources.requests.storage`: The pgBackRest repository when using "volume" storage. + +The above should be familiar: it follows the same pattern as the standard https://kubernetes.io/docs/concepts/storage/persistent-volumes/[Kubernetes PVC] structure. + +For example, let's say we want to update our `hippo` Ivory cluster so that each instance now uses a `10Gi` PVC and our backup repository uses a `20Gi` PVC. We can do so with the following markup: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 10Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 20Gi +---- + +In particular, we added the following to `spec.instances`: + +[source,yaml] +---- +dataVolumeClaimSpec: + resources: + requests: + storage: 10Gi +---- + +and added the following to `spec.backups.pgbackrest.repos.volume`: + +[source,yaml] +---- +volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 20Gi +---- + +Apply these updates to your Ivory cluster with the following command: + +[source,shell] +---- +kubectl apply -k examples/kustomize/ivory +---- + +==== Resize PVCs With StorageClass That Does Not Allow Expansion + +Not all Kubernetes Storage Classes allow for https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[volume expansion]. However, with IVYO, you can still resize your Ivory cluster data volumes even if your storage class does not allow it!
+ +Let's go back to the previous example: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 20Gi +---- + +First, create a new instance that has the larger volume size. Call this instance `instance2`. The manifest would look like this: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + - name: instance2 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 10Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 20Gi +---- + +Take note of the block that contains `instance2`: + +[source,yaml] +---- +- name: instance2 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 10Gi +---- + +This creates a second set of two Ivory instances, both of which come up as replicas, that have a larger PVC. + +Once this new instance set is available and they are caught to the primary, you can then apply the following manifest: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance2 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 10Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 20Gi +---- + +This will promote one of the instances with the larger PVC to be the new primary and remove the instances with the smaller PVCs! + +This method can also be used to shrink PVCs to use a smaller amount. + +=== Troubleshooting + +==== Ivory Pod Can't Be Scheduled + +There are many reasons why a IvorySQL Pod may not be scheduled: + +- **Resources are unavailable**. Ensure that you have a Kubernetes https://kubernetes.io/docs/concepts/architecture/nodes/[Node] with enough resources to satisfy your memory or CPU Request. +- **PVC cannot be provisioned**. Ensure that you request a PVC size that is available, or that your PVC storage class is set up correctly. + +==== PVCs Do Not Resize + +Ensure that your storage class supports PVC resizing. 
You can check that by inspecting the `allowVolumeExpansion` attribute: + +[source,shell] +---- +kubectl get sc +---- + +If the storage class does not support PVC resizing, you can use the technique described above to resize PVCs using a second instance set. + +=== Next Steps + +You've now resized your Ivory cluster, but how can you configure Ivory to take advantage of the new resources? Let's look at how we can https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md[customize the Ivory cluster configuration]. + +== Custom Ivory Configuration + +Part of the trick of managing multiple instances in an Ivory cluster is ensuring all of the configuration +changes are propagated to each of them. This is where IVYO helps: when you make an Ivory configuration +change for a cluster, IVYO will apply it to all of the Ivory instances. + +For example, in our previous step we added CPU and memory limits of `2.0` and `4Gi` respectively. Let's tweak some of the Ivory settings to better use our new resources. We can do this in the `spec.patroni.dynamicConfiguration` section. Here is an example updated manifest that tweaks several settings: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - name: instance1 + replicas: 2 + resources: + limits: + cpu: 2.0 + memory: 4Gi + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + patroni: + dynamicConfiguration: + postgresql: + parameters: + max_parallel_workers: 2 + max_worker_processes: 2 + shared_buffers: 1GB + work_mem: 2MB +---- + +In particular, we added the following to `spec`: + +[source,yaml] +---- +patroni: + dynamicConfiguration: + postgresql: + parameters: + max_parallel_workers: 2 + max_worker_processes: 2 + shared_buffers: 1GB + work_mem: 2MB +---- + +Apply these updates to your Ivory cluster with the following command: + +[source,shell] +---- +kubectl apply -k examples/kustomize/ivory +---- + +IVYO will go and apply these settings, restarting each Ivory instance when necessary. You can verify that the changes are present using the Ivory `SHOW` command, e.g. + +[source,shell] +---- +SHOW work_mem; +---- + +should yield something similar to: + +[source,shell] +---- + work_mem +---------- + 2MB +---- + +=== Customize TLS + +All connections in IVYO use TLS to encrypt communication between components. IVYO sets up a PKI and certificate authority (CA) that allow you create verifiable endpoints. However, you may want to bring a different TLS infrastructure based upon your organizational requirements. The good news: IVYO lets you do this! + +==== How to Customize TLS + +There are a few different TLS endpoints that can be customized for IVYO, including those of the Ivory cluster and controlling how Ivory instances authenticate with each other. Let's look at how we can customize TLS by defining + +* a `spec.customTLSSecret`, used to both identify the cluster and encrypt communications; and +* a `spec.customReplicationTLSSecret`, used for replication authentication. + +To customize the TLS for an Ivory cluster, you will need to create two Secrets in the Namespace of your Ivory cluster. 
One of these Secrets will be the `customTLSSecret` and the other will be the `customReplicationTLSSecret`. Both secrets contain a TLS key (`tls.key`), TLS certificate (`tls.crt`) and CA certificate (`ca.crt`) to use. + +NOTE: If `spec.customTLSSecret` is provided you **must** also provide `spec.customReplicationTLSSecret` and both must contain the same `ca.crt`. + +The custom TLS and custom replication TLS Secrets should contain the following fields (though see below for a workaround if you cannot control the field names of the Secret's `data`): + +[source,yaml] +---- +data: + ca.crt: + tls.crt: + tls.key: +---- + +For example, if you have files named `ca.crt`, `hippo.key`, and `hippo.crt` stored on your local machine, you could run the following command to create a Secret from those files: + +[source,shell] +---- +kubectl create secret generic -n ivory-operator hippo-cluster.tls \ + --from-file=ca.crt=ca.crt \ + --from-file=tls.key=hippo.key \ + --from-file=tls.crt=hippo.crt +---- + +After you create the Secrets, you can specify the custom TLS Secret in your `ivorycluster.ivory-operator.ivorysql.org` custom resource. For example, if you created a `hippo-cluster.tls` Secret and a `hippo-replication.tls` Secret, you would add them to your Ivory cluster: + +[source,yaml] +---- +spec: + customTLSSecret: + name: hippo-cluster.tls + customReplicationTLSSecret: + name: hippo-replication.tls +---- + +If you're unable to control the key-value pairs in the Secret, you can create a mapping to tell +the Ivory Operator what key holds the expected value. That would look similar to this: + +[source,yaml] +---- +spec: + customTLSSecret: + name: hippo.tls + items: + - key: + path: tls.crt + - key: + path: tls.key + - key: + path: ca.crt +---- + +For instance, if the `hippo.tls` Secret had the `tls.crt` in a key named `hippo-tls.crt`, the +`tls.key` in a key named `hippo-tls.key`, and the `ca.crt` in a key named `hippo-ca.crt`, +then your mapping would look like: + +[source,yaml] +---- +spec: + customTLSSecret: + name: hippo.tls + items: + - key: hippo-tls.crt + path: tls.crt + - key: hippo-tls.key + path: tls.key + - key: hippo-ca.crt + path: ca.crt +---- + +NOTE: Although the custom TLS and custom replication TLS Secrets share the same `ca.crt`, they do not share the same `tls.crt`: + +* Your `spec.customTLSSecret` TLS certificate should have a Common Name (CN) setting that matches the primary Service name. This is the name of the cluster suffixed with `-primary`. For example, for our `hippo` cluster this would be `hippo-primary`. +* Your `spec.customReplicationTLSSecret` TLS certificate should have a Common Name (CN) setting that matches `_ivoryrepl`, which is the preset replication user. + +As with the other changes, you can roll out the TLS customizations with `kubectl apply`. + +=== Labels + +There are several ways to add your own custom Kubernetes https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels] to your Ivory cluster. + +- Cluster: You can apply labels to any IVYO managed object in a cluster by editing the `spec.metadata.labels` section of the custom resource. +- Ivory: You can apply labels to an Ivory instance set and its objects by editing `spec.instances.metadata.labels`. +- pgBackRest: You can apply labels to pgBackRest and its objects by editing `ivoryclusters.spec.backups.pgbackrest.metadata.labels`. 
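+
+For example, here is a minimal sketch combining cluster-wide and instance-level labels; the label keys and values are illustrative:
+
+[source,yaml]
+----
+spec:
+  metadata:
+    labels:
+      example.com/team: hippos        # applied to all IVYO-managed objects in the cluster
+  instances:
+  - name: instance1
+    metadata:
+      labels:
+        example.com/tier: database    # applied to this instance set and its objects
+----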
+
+=== Annotations
+
+There are several ways to add your own custom Kubernetes https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[Annotations] to your Ivory cluster.
+
+- Cluster: You can apply annotations to any IVYO-managed object in a cluster by editing the `spec.metadata.annotations` section of the custom resource.
+- Ivory: You can apply annotations to an Ivory instance set and its objects by editing `spec.instances.metadata.annotations`.
+- pgBackRest: You can apply annotations to pgBackRest and its objects by editing `spec.backups.pgbackrest.metadata.annotations`.
+
+=== Pod Priority Classes
+
+IVYO allows you to use https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/[pod priority classes] to indicate the relative importance of a pod by setting a `priorityClassName` field on your Ivory cluster. This can be done as follows:
+
+- Instances: Priority is defined per instance set and is applied to all Pods in that instance set by editing the `spec.instances.priorityClassName` section of the custom resource.
+- Dedicated Repo Host: Priority defined under the repoHost section of the spec is applied to the dedicated repo host by editing the `spec.backups.pgbackrest.repoHost.priorityClassName` section of the custom resource.
+- Backup (manual and scheduled): Priority is defined under the `spec.backups.pgbackrest.jobs.priorityClassName` section and applies that priority to all pgBackRest backup Jobs (manual and scheduled).
+- Restore (data source or in-place): Priority is defined for either a "data source" restore or an in-place restore by editing the `spec.dataSource.ivorycluster.priorityClassName` section of the custom resource.
+- Data Migration: The priority defined for the first instance set in the spec (array position 0) is used for the PGDATA and WAL migration Jobs. The pgBackRest repo migration Job will use the priority class applied to the repoHost.
+
+=== Separate WAL PVCs
+
+IvorySQL commits transactions by storing changes in its https://www.postgresql.org/docs/current/wal-intro.html[Write-Ahead Log (WAL)]. Because WAL files are accessed and utilized differently from data files, in high-performance situations it can be desirable to put WAL files on a separate storage volume. With IVYO, this can be done by adding the `walVolumeClaimSpec` block to your desired instance in your ivorycluster spec, either when your cluster is created or anytime thereafter:
+
+[source,yaml]
+----
+spec:
+  instances:
+    - name: instance
+      walVolumeClaimSpec:
+        accessModes:
+          - "ReadWriteOnce"
+        resources:
+          requests:
+            storage: 1Gi
+----
+
+This volume can be removed later by removing the `walVolumeClaimSpec` section from the instance. Note that when changing the WAL directory, care is taken so as not to lose any WAL files. IVYO only deletes the PVC once there are no longer any WAL files on the previously configured volume.
+
+=== Database Initialization SQL
+
+IVYO can run SQL for you as part of the cluster creation and initialization process. IVYO runs the SQL using the psql client, so you can use meta-commands to connect to different databases, change error handling, or set and use variables. Its capabilities are described in the https://www.postgresql.org/docs/current/app-psql.html[psql documentation].
+
+==== Initialization SQL ConfigMap
+
+The Ivory cluster spec accepts a reference to a ConfigMap containing your init SQL file.
Update your cluster spec to include the ConfigMap name, `spec.databaseInitSQL.name`, and the data key, `spec.databaseInitSQL.key`, for your SQL file. For example, if you create your ConfigMap with the following command: + +[source,shell] +---- +kubectl -n ivory-operator create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql +---- + +You would add the following section to your ivorycluster spec: + +[source,yaml] +---- +spec: + databaseInitSQL: + key: init.sql + name: hippo-init-sql +---- + +NOTE: The ConfigMap must exist in the same namespace as your Ivory cluster. + +After you add the ConfigMap reference to your spec, apply the change with `kubectl apply -k examples/kustomize/ivory`. IVYO will create your `hippo` cluster and run your initialization SQL once the cluster has started. You can verify that your SQL has been run by checking the `databaseInitSQL` status on your Ivory cluster. While the status is set, your init SQL will not be run again. You can check cluster status with the `kubectl describe` command: + +[source,shell] +---- +kubectl -n ivory-operator describe ivoryclusters.ivory-operator.ivorysql.org hippo +---- + +WARNING: In some cases, due to how Kubernetes treats ivorycluster status, IVYO may run your SQL commands more than once. Please ensure that the commands defined in your init SQL are idempotent. + +Now that `databaseInitSQL` is defined in your cluster status, verify database objects have been created as expected. After verifying, we recommend removing the `spec.databaseInitSQL` field from your spec. Removing the field from the spec will also remove `databaseInitSQL` from the cluster status. + +==== PSQL Usage +IVYO uses the psql interactive terminal to execute SQL statements in your database. Statements are passed in using standard input and the filename flag (e.g. `psql -f -`). + +SQL statements are executed as superuser in the default maintenance database. This means you have full control to create database objects, extensions, or run any SQL statements that you might need. + +===== Integration with User and Database Management + +If you are creating users or databases, please see the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/user-management.md[User/Database Management] documentation. Databases created through the user management section of the spec can be referenced in your initialization sql. For example, if a database `zoo` is defined: + +[source,yaml] +---- +spec: + users: + - name: hippo + databases: + - "zoo" +---- + +You can connect to `zoo` by adding the following `psql` meta-command to your SQL: + +[source,sql] +---- +\c zoo +create table t_zoo as select s, md5(random()::text) from generate_Series(1,5) s; +---- + +===== Transaction support + +By default, `psql` commits each SQL command as it completes. To combine multiple commands into a single https://www.postgresql.org/docs/current/tutorial-transactions.html[transaction], use the https://www.postgresql.org/docs/current/sql-begin.html[`BEGIN`] and https://www.postgresql.org/docs/current/sql-commit.html[`COMMIT`] commands. + +[source,sql] +---- +BEGIN; +create table t_random as select s, md5(random()::text) from generate_Series(1,5) s; +COMMIT; +---- + +===== PSQL Exit Code and Database Init SQL Status + +The exit code from `psql` will determine when the `databaseInitSQL` status is set. When `psql` returns `0` the status will be set and SQL will not be run again. When `psql` returns with an error exit code the status will not be set. 
IVYO will continue attempting to execute the SQL as part of its reconcile loop until `psql` returns normally. If `psql` exits with a failure, you will need to edit the file in your ConfigMap to ensure your SQL statements will lead to a successful `psql` return. The easiest way to make live changes to your ConfigMap is to use the following `kubectl edit` command:
+
+[source,shell]
+----
+kubectl -n ivory-operator edit configmap hippo-init-sql
+----
+
+Be sure to transfer any changes back over to your local file. Another option is to make changes in your local file and use `kubectl --dry-run` to create a template and pipe the output into `kubectl apply`:
+
+[source,shell]
+----
+kubectl create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql --dry-run=client -o yaml | kubectl apply -n ivory-operator -f -
+----
+
+TIP: If you edit your ConfigMap and your changes aren't showing up, you may be waiting for IVYO to reconcile your cluster. After some time, IVYO will automatically reconcile the cluster, or you can trigger reconciliation by applying any change to your cluster (e.g. with `kubectl apply -k examples/kustomize/ivory`).
+
+To ensure that `psql` returns a failure exit code when your SQL commands fail, set the `ON_ERROR_STOP` https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES[variable] as part of your SQL file:
+
+[source,sql]
+----
+\set ON_ERROR_STOP
+\echo Any error will lead to exit code 3
+create table t_random as select s, md5(random()::text) from generate_Series(1,5) s;
+----
+
+=== Troubleshooting
+
+==== Changes Not Applied
+
+If your Ivory configuration settings are not present, ensure that you are using the syntax that Ivory expects.
+You can see this in the https://www.postgresql.org/docs/current/runtime-config.html[Ivory configuration documentation].
+
+=== Next Steps
+
+You've now seen how you can further customize your Ivory cluster, but what about https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/user-management.md[managing users and databases]? That's a great question that is answered in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/user-management.md[next section].
+
+== User / Database Management
+IVYO comes with some out-of-the-box conveniences for managing users and databases in your Ivory cluster. However, you may have requirements where you need to create additional users, adjust user privileges, or add additional databases to your cluster.
+
+For detailed information on how user and database management works in IVYO, please see the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/user-management.md[User Management] section of the architecture guide.
+
+=== Creating a New User
+
+You can create a new user with the following snippet in the `ivorycluster` custom resource. Let's add this to our `hippo` cluster:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: rhino
+----
+
+You can now apply the changes and see that the new user is created. Note the following:
+
+- The user would only be able to connect to the default `ivory` database.
+- The user will not have any connection credentials populated into the `hippo-pguser-rhino` Secret.
+- The user is unprivileged.
+
+Let's create a new database named `zoo` that we will let the `rhino` user access:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: rhino
+      databases:
+        - zoo
+----
+
+Inspect the `hippo-pguser-rhino` Secret. You should see that the `dbname` and `uri` fields are now populated!
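+
+For example, assuming the cluster lives in the `ivory-operator` namespace used throughout this tutorial, you could decode the connection URI straight from the Secret:
+
+[source,shell]
+----
+kubectl -n ivory-operator get secret hippo-pguser-rhino \
+  -o jsonpath='{.data.uri}' | base64 -d
+----
+
+The same approach works for the other keys in the Secret, such as `dbname`.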
+
+We can set role privileges by using the standard https://www.postgresql.org/docs/current/role-attributes.html[role attributes] that Ivory provides and adding them to `spec.users.options`. Let's say we want `rhino` to become a superuser (be careful about doling out Ivory superuser privileges!). You can add the following to the spec:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: rhino
+      databases:
+        - zoo
+      options: "SUPERUSER"
+----
+
+There you have it: we have created an Ivory user named `rhino` with superuser privileges that has access to the `zoo` database (though a superuser has access to all databases!).
+
+=== Adjusting Privileges
+
+Let's say you want to revoke the superuser privilege from `rhino`. You can do so with the following:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: rhino
+      databases:
+        - zoo
+      options: "NOSUPERUSER"
+----
+
+If you want to add multiple privileges, you can add each privilege with a space between them in `options`, e.g.:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: rhino
+      databases:
+        - zoo
+      options: "CREATEDB CREATEROLE"
+----
+
+=== Managing the `ivory` User
+
+By default, IVYO does not give you access to the `ivory` user. However, you can get access to this account by doing the following:
+
+[source,yaml]
+----
+spec:
+  users:
+    - name: ivory
+----
+
+This will create a Secret following the pattern `<clusterName>-pguser-ivory` that contains the credentials of the `ivory` account. For our `hippo` cluster, this would be `hippo-pguser-ivory`.
+
+=== Deleting a User
+
+IVYO does not delete users automatically: after you remove the user from the spec, it will still exist in your cluster. To remove a user and all of its objects, as a superuser you will need to run https://www.postgresql.org/docs/current/sql-drop-owned.html[`DROP OWNED`] in each database the user has objects in, and https://www.postgresql.org/docs/current/sql-droprole.html[`DROP ROLE`] in your Ivory cluster.
+
+For example, with the above `rhino` user, you would run the following:
+
+[source,sql]
+----
+DROP OWNED BY rhino;
+DROP ROLE rhino;
+----
+
+Note that you may need to run `DROP OWNED BY rhino CASCADE;` based upon your object ownership structure -- be very careful with this command!
+
+=== Deleting a Database
+
+IVYO does not delete databases automatically: after you remove all instances of the database from the spec, it will still exist in your cluster. To completely remove the database, you must run the https://www.postgresql.org/docs/current/sql-dropdatabase.html[`DROP DATABASE`] command as an Ivory superuser.
+
+For example, to remove the `zoo` database, you would execute the following:
+
+[source,sql]
+----
+DROP DATABASE zoo;
+----
+
+=== Next Steps
+
+Let's look at how IVYO handles https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/disaster-recovery.md[disaster recovery]!
+
+== Disaster Recovery and Cloning
+Perhaps someone accidentally dropped the `users` table. Perhaps you want to clone your production database to a step-down environment. Perhaps you want to exercise your disaster recovery system (and it is important that you do!).
+
+Regardless of the scenario, it's important to know how you can perform a "restore" operation with IVYO to be able to recover your data from a particular point in time, or clone a database for other purposes.
+
+Let's look at how we can perform different types of restore operations. First, let's understand the core restore properties on the custom resource.
+
+=== Restore Properties
+
+[NOTE]
+====
+IVYO offers the ability to restore from an existing ivorycluster or a remote cloud-based data source, such as S3, GCS, etc. For more on that, see the <<cloud-based-data-source>> section.
+
+Note that you **cannot** use both a local ivorycluster data source and a remote cloud-based data source at one time; if both the `dataSource.ivorycluster` and `dataSource.pgbackrest` fields are filled in, the local ivorycluster data source will take precedence.
+====
+
+There are several attributes on the custom resource that are important to understand as part of the restore process. All of these attributes are grouped together in the `spec.dataSource.ivorycluster` section of the custom resource.
+
+Please review the list below to understand how each of these attributes works in the context of setting up a restore operation.
+
+- `spec.dataSource.ivorycluster.clusterName`: The name of the cluster that you are restoring from. This corresponds to the `metadata.name` attribute on a different `ivorycluster` custom resource.
+- `spec.dataSource.ivorycluster.clusterNamespace`: The namespace of the cluster that you are restoring from. Used when the cluster exists in a different namespace.
+- `spec.dataSource.ivorycluster.repoName`: The name of the pgBackRest repository from the `spec.dataSource.ivorycluster.clusterName` to use for the restore. Can be one of `repo1`, `repo2`, `repo3`, or `repo4`. The repository must exist in the other cluster.
+- `spec.dataSource.ivorycluster.options`: Any additional https://pgbackrest.org/command.html#command-restore[pgBackRest restore options] or general options that IVYO allows. For example, you may want to set `--process-max` to help improve performance on larger databases; but you will not be able to set `--target-action`, since that option is currently disallowed. (IVYO always sets it to `promote` if a `--target` is present, and otherwise leaves it blank.)
+- `spec.dataSource.ivorycluster.resources`: Setting https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[resource limits and requests] of the restore job can ensure that it runs efficiently.
+- `spec.dataSource.ivorycluster.affinity`: Custom https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Kubernetes affinity] rules constrain the restore job so that it only runs on certain nodes.
+- `spec.dataSource.ivorycluster.tolerations`: Custom https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Kubernetes tolerations] allow the restore job to run on https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[tainted] nodes.
+
+Let's walk through some examples of how we can clone and restore our databases.
+
+=== Clone an Ivory Cluster
+
+Let's create a clone of our https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/create-cluster.md[`hippo`] cluster that we created previously. We know that our cluster is named `hippo` (based on its `metadata.name`) and that we only have a single backup repository called `repo1`.
+
+Let's call our new cluster `elephant`.
We can create a clone of the `hippo` cluster using a manifest like this: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +Note this section of the spec: + +[source,yaml] +---- +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 +---- + +This is the part that tells IVYO to create the `elephant` cluster as an independent copy of the `hippo` cluster. + +The above is all you need to do to clone a Ivory cluster! IVYO will work on creating a copy of your data on a new persistent volume claim (PVC) and work on initializing your cluster to spec. Easy! + +=== Perform a Point-in-time-Recovery (PITR) + +Did someone drop the user table? You may want to perform a point-in-time-recovery (PITR) +to revert your database back to a state before a change occurred. Fortunately, IVYO can help you do that. + +You can set up a PITR using the https://pgbackrest.org/command.html#command-restore[restore] +command of https://www.pgbackrest.org[pgBackRest], the backup management tool that powers +the disaster recovery capabilities of IVYO. You will need to set a few options on +`spec.dataSource.ivorycluster.options` to perform a PITR. These options include: + +- `--type=time`: This tells pgBackRest to perform a PITR. +- `--target`: Where to perform the PITR to. An example recovery target is `2021-06-09 14:15:11-04`. + The timezone specified here as -04 for EDT. Please see the https://pgbackrest.org/user-guide.html#pitr[pgBackRest documentation for other timezone options]. +- `--set` (optional): Choose which backup to start the PITR from. + +A few quick notes before we begin: + +- To perform a PITR, you must have a backup that finished before your PITR time. + In other words, you can't perform a PITR back to a time where you do not have a backup! +- All relevant WAL files must be successfully pushed for the restore to complete correctly. +- Be sure to select the correct repository name containing the desired backup! + +With that in mind, let's use the `elephant` example above. 
Let's say we want to perform a point-in-time-recovery (PITR) to `2021-06-09 14:15:11-04`, we can use the following manifest: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +The section to pay attention to is this: + +[source,yaml] +---- +spec: + dataSource: + ivoryCluster: + clusterName: hippo + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" +---- + +Notice how we put in the options to specify where to make the PITR. + +Using the above manifest, IVYO will go ahead and create a new Ivory cluster that recovers +its data up until `2021-06-09 14:15:11-04`. At that point, the cluster is promoted and +you can start accessing your database from that specific point in time! + +=== Perform an In-Place Point-in-time-Recovery (PITR) + +Similar to the PITR restore described above, you may want to perform a similar reversion +back to a state before a change occurred, but without creating another IvorySQL cluster. +Fortunately, IVYO can help you do this as well. + +You can set up a PITR using the https://pgbackrest.org/command.html#command-restore[restore] +command of https://www.pgbackrest.org[pgBackRest], the backup management tool that powers +the disaster recovery capabilities of IVYO. You will need to set a few options on +`spec.backups.pgbackrest.restore.options` to perform a PITR. These options include: + +- `--type=time`: This tells pgBackRest to perform a PITR. +- `--target`: Where to perform the PITR to. An example recovery target is `2021-06-09 14:15:11-04`. +- `--set` (optional): Choose which backup to start the PITR from. + +A few quick notes before we begin: + +- To perform a PITR, you must have a backup that finished before your PITR time. + In other words, you can't perform a PITR back to a time where you do not have a backup! +- All relevant WAL files must be successfully pushed for the restore to complete correctly. +- Be sure to select the correct repository name containing the desired backup! + +To perform an in-place restore, users will first fill out the restore section of the spec as follows: + +[source,yaml] +---- +spec: + backups: + pgbackrest: + restore: + enabled: true + repoName: repo1 + options: + - --type=time + - --target="2021-06-09 14:15:11-04" +---- + +And to trigger the restore, you will then annotate the ivorycluster as follows: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/pgbackrest-restore=id1 +---- + +And once the restore is complete, in-place restores can be disabled: + +[source,yaml] +---- +spec: + backups: + pgbackrest: + restore: + enabled: false +---- + +Notice how we put in the options to specify where to make the PITR. + +Using the above manifest, IVYO will go ahead and re-create your Ivory cluster to recover +its data up until `2021-06-09 14:15:11-04`. 
At that point, the cluster is promoted and you can start accessing your database from that specific point in time!
+
+=== Restore Individual Databases
+
+You might need to restore specific databases from a cluster backup, for performance reasons or to move selected databases to a machine that does not have enough space to restore the entire cluster backup.
+
+[WARNING]
+====
+pgBackRest supports this case, but it is important to make sure this is what you want. Restoring in this manner will restore the requested database from backup and make it accessible, but all of the other databases in the backup will NOT be accessible after restore.
+
+For example, if your backup includes databases `test1`, `test2`, and `test3`, and you request that `test2` be restored, the `test1` and `test3` databases will NOT be accessible after restore is completed. Please review the pgBackRest documentation on the https://pgbackrest.org/user-guide.html#restore/option-db-include[limitations on restoring individual databases].
+====
+
+You can restore individual databases from a backup using a spec similar to the following:
+
+[source,yaml]
+----
+spec:
+  backups:
+    pgbackrest:
+      restore:
+        enabled: true
+        repoName: repo1
+        options:
+          - --db-include=hippo
+----
+
+where `--db-include=hippo` would restore only the contents of the `hippo` database.
+
+=== Standby Cluster
+
+Advanced high-availability and disaster recovery strategies involve spreading your database clusters across data centers to help maximize uptime. IVYO provides ways to deploy ivoryclusters that can span multiple Kubernetes clusters using an external storage system or IvorySQL streaming replication. A high-level overview of standby clusters with IVYO can be found in the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/disaster-recovery.md[disaster recovery architecture] documentation.
+
+==== Creating a Standby Cluster
+
+This tutorial section will describe how to create three different types of standby clusters: one using an external storage system, one that is streaming data directly from the primary, and one that takes advantage of both external storage and streaming. These example clusters can be created in the same Kubernetes cluster, using a single IVYO instance, or spread across different Kubernetes clusters and IVYO instances with the correct storage and networking configurations.
+
+===== Repo-based Standby
+
+A repo-based standby will recover from WAL files in a pgBackRest repo stored in external storage. The primary cluster should be created with a cloud-based https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md[backup configuration].
The following manifest defines a ivorycluster with `standby.enabled` set to true and `repoName` +configured to point to the `s3` repo configured in the primary: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + standby: + enabled: true + repoName: repo1 +---- + +===== Streaming Standby + +A streaming standby relies on an authenticated connection to the primary over the network. The primary +cluster should be accessible via the network and allow TLS authentication (TLS is enabled by default). +In the following manifest, we have `standby.enabled` set to `true` and have provided both the `host` +and `port` that point to the primary cluster. We have also defined `customTLSSecret` and +`customReplicationTLSSecret` to provide certs that allow the standby to authenticate to the primary. +For this type of standby, you must use https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[custom TLS]: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + name: replication-cert + standby: + enabled: true + host: "192.0.2.2" + port: 5432 +---- + +===== Streaming Standby with an External Repo + +Another option is to create a standby cluster using an external pgBackRest repo that streams from the +primary. With this setup, the standby cluster will continue recovering from the pgBackRest repo if +streaming replication falls behind. In this manifest, we have enabled the settings from both previous +examples: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo-standby +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + name: replication-cert + standby: + enabled: true + repoName: repo1 + host: "192.0.2.2" + port: 5432 +---- + +=== Promoting a Standby Cluster + +At some point, you will want to promote the standby to start accepting both reads and writes. +This has the net effect of pushing WAL (transaction archives) to the pgBackRest repository, so we +need to ensure we don't accidentally create a split-brain scenario. Split-brain can happen if two +primary instances attempt to write to the same repository. 
If the primary cluster is still active, +make sure you https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/administrative-tasks.md#shutdown[shutdown] the primary +before trying to promote the standby cluster. + +Once the primary is inactive, we can promote the standby cluster by removing or disabling its +`spec.standby` section: + +[source,yaml] +---- +spec: + standby: + enabled: false +---- + +This change triggers the promotion of the standby leader to a primary IvorySQL +instance and the cluster begins accepting writes. + +=== Clone From Backups Stored in S3 / GCS / Azure Blob Storage [[cloud-based-data-source]] + +You can clone a Ivory cluster from backups that are stored in AWS S3 (or a storage system +that uses the S3 protocol), GCS, or Azure Blob Storage without needing an active Ivory cluster! +The method to do so is similar to how you clone from an existing ivorycluster. This is useful +if you want to have a data set for people to use but keep it compressed on cheaper storage. + +For the purposes of this example, let's say that you created a Ivory cluster named `hippo` that +has its backups stored in S3 that looks similar to this: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: hippo +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + manual: + repoName: repo1 + options: + - --type=full + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" +---- + +Ensure that the credentials in `ivyo-s3-creds` match your S3 credentials. For more details on +https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md#using-s3[deploying a Ivory cluster using S3 for backups], +please see the https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/backups.md#using-s3[Backups] section of the tutorial. + +For optimal performance when creating a new cluster from an active cluster, ensure that you take a +recent full backup of the previous cluster. The above manifest is set up to take a full backup. +Assuming `hippo` is created in the `ivory-operator` namespace, you can trigger a full backup +with the following command: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/pgbackrest-backup="$( date '+%F_%H:%M:%S' )" +---- + +Wait for the backup to complete. Once this is done, you can delete the Ivory cluster. + +Now, let's clone the data from the `hippo` backup into a new cluster called `elephant`. 
You can use a manifest similar to this: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: elephant +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + dataSource: + pgbackrest: + stanza: db + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + repo: + name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/elephant/repo1 + repos: + - name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" +---- + +There are a few things to note in this manifest. First, note that the `spec.dataSource.pgbackrest` +object in our new ivorycluster is very similar but slightly different from the old +ivorycluster's `spec.backups.pgbackrest` object. The key differences are: + +1. No image is necessary when restoring from a cloud-based data source +2. `stanza` is a required field when restoring from a cloud-based data source +3. `backups.pgbackrest` has a `repos` field, which is an array +4. `dataSource.pgbackrest` has a `repo` field, which is a single object + +Note also the similarities: + +1. We are reusing the secret for both (because the new restore pod needs to have the same credentials as the original backup pod) +2. The `repo` object is the same +3. The `global` object is the same + +This is because the new restore pod for the `elephant` ivorycluster will need to reuse the +configuration and credentials that were originally used in setting up the `hippo` ivorycluster. + +In this example, we are creating a new cluster which is also backing up to the same S3 bucket; +only the `spec.backups.pgbackrest.global` field has changed to point to a different path. This +will ensure that the new `elephant` cluster will be pre-populated with the data from `hippo`'s +backups, but will backup to its own folders, ensuring that the original backup repository is +appropriately preserved. + +Deploy this manifest to create the `elephant` Ivory cluster. Observe that it comes up and running: + +[source,shell] +---- +kubectl -n ivory-operator describe ivorycluster elephant +---- + +When it is ready, you will see that the number of expected instances matches the number of ready +instances, e.g.: + +[source,shell] +---- +Instances: + Name: 00 + Ready Replicas: 1 + Replicas: 1 + Updated Replicas: 1 +---- + +The previous example shows how to use an existing S3 repository to pre-populate a ivorycluster +while using a new S3 repository for backing up. But ivoryclusters that use cloud-based data +sources can also use local repositories. 
+ +For example, assuming a ivorycluster called `rhino` that was meant to pre-populate from the +original `hippo` ivorycluster, the manifest would look like this: + +[source,yaml] +---- +apiVersion: ivory-operator.ivorysql.org/v1beta1 +kind: IvoryCluster +metadata: + name: rhino +spec: + image: {{< param imageIvorySQL >}} + postgresVersion: {{< param postgresVersion >}} + dataSource: + pgbackrest: + stanza: db + configuration: + - secret: + name: ivyo-s3-creds + global: + repo1-path: /pgbackrest/ivory-operator/hippo/repo1 + repo: + name: repo1 + s3: + bucket: "my-bucket" + endpoint: "s3.ca-central-1.amazonaws.com" + region: "ca-central-1" + instances: + - dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: {{< param imagePGBackrest >}} + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +---- + +=== Next Steps + +Now we've seen how to clone a cluster and perform a point-in-time-recovery, let's see how we can https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/monitoring.md[monitor] our Ivory cluster to detect and prevent issues from occurring. + +== Monitoring +While having https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/high-availability.md[high availability] and +https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/disaster-recovery.md[disaster recovery] systems in place helps in the +event of something going wrong with your IvorySQL cluster, monitoring helps you anticipate +problems before they happen. Additionally, monitoring can help you diagnose and resolve issues that +may cause degraded performance rather than downtime. + +Let's look at how IVYO allows you to enable monitoring in your cluster. + +=== Adding the Exporter Sidecar + +Let's look at how we can add the IvorySQL Exporter sidecar to your cluster using the +`kustomize/ivory` example in the https://github.com/CrunchyData/postgres-operator-examples[Postgres Operator examples] repository. + +Monitoring tools are added using the `spec.monitoring` section of the custom resource. Currently, +the only monitoring tool supported is the IvorySQL Exporter configured with https://github.com/CrunchyData/pgmonitor[pgMonitor]. + +In the `kustomize/ivory/ivory.yaml` file, add the following YAML to the spec: + +[source,yaml] +---- +monitoring: + pgmonitor: + exporter: + image: {{< param imagePostgresExporter >}} +---- + +Save your changes and run: + +[source,shell] +---- +kubectl apply -k kustomize/ivory +---- + +IVYO will detect the change and add the Exporter sidecar to all Ivory Pods that exist in your +cluster. IVYO will also do the work to allow the Exporter to connect to the database and gather +metrics that can be accessed using the https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/monitoring[IVYO Monitoring] stack. + +==== Configuring TLS Encryption for the Exporter + +IVYO allows you to configure the exporter sidecar to use TLS encryption. If you provide a custom TLS +Secret via the exporter spec: + +[source,yaml] +---- + monitoring: + pgmonitor: + exporter: + customTLSSecret: + name: hippo.tls +---- + +Like other custom TLS Secrets that can be configured with IVYO, the Secret will need to be created in +the same Namespace as your PostgresCluster. It should also contain the TLS key (`tls.key`) and TLS +certificate (`tls.crt`) needed to enable encryption. 
+
+[source,yaml]
+----
+data:
+  tls.crt:
+  tls.key:
+----
+
+After you configure TLS for the exporter, you will need to update your Prometheus deployment to use TLS, and your connection to the exporter will be encrypted. Check out the https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config[Prometheus] documentation for more information on configuring TLS for https://prometheus.io/[Prometheus].
+
+=== Accessing the Metrics
+
+Once the IvorySQL Exporter has been enabled in your cluster, follow the steps outlined in https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/monitoring[IVYO Monitoring] to install the monitoring stack. This will allow you to deploy a https://github.com/CrunchyData/pgmonitor[pgMonitor] configuration of https://prometheus.io/[Prometheus], https://grafana.com/[Grafana], and https://prometheus.io/docs/alerting/latest/alertmanager/[Alertmanager] monitoring tools in Kubernetes. These tools will be set up by default to connect to the Exporter containers on your Ivory Pods.
+
+=== Configure Monitoring
+
+While the default Kustomize install should work in most Kubernetes environments, it may be necessary to further customize the project according to your specific needs.
+
+For instance, by default `fsGroup` is set to `26` for the `securityContext` defined for the various Deployments comprising the IVYO Monitoring stack:
+
+[source,yaml]
+----
+securityContext:
+  fsGroup: 26
+----
+
+In most Kubernetes environments this setting is needed to ensure processes within the container have the permissions needed to write to any volumes mounted to each of the Pods comprising the IVYO Monitoring stack. However, when installing in an OpenShift environment (and more specifically when using the `restricted` Security Context Constraint), the `fsGroup` setting should be removed since OpenShift will automatically handle setting the proper `fsGroup` within the Pod's `securityContext`.
+
+Within this same section it may also be necessary to modify the `supplementalGroups` setting according to your specific storage configuration:
+
+[source,yaml]
+----
+securityContext:
+  supplementalGroups:
+    - 65534
+----
+
+Therefore, the following files (located under `kustomize/monitoring`) should be modified and/or patched (e.g. using additional overlays) as needed to ensure the `securityContext` is properly defined for your Kubernetes environment:
+
+- `deploy-alertmanager.yaml`
+- `deploy-grafana.yaml`
+- `deploy-prometheus.yaml`
+
+To modify the configuration for the various storage resources (i.e. PersistentVolumeClaims) created by the IVYO Monitoring installer, the `kustomize/monitoring/pvcs.yaml` file can also be modified.
+
+It is also possible to further customize the configuration for the various components comprising the IVYO Monitoring stack (Grafana, Prometheus and/or Alertmanager) by modifying the following configuration resources:
+
+- `alertmanager-config.yaml`
+- `alertmanager-rules-config.yaml`
+- `grafana-datasources.yaml`
+- `prometheus-config.yaml`
+
+Finally, please note that the default username and password for Grafana can be updated by modifying the Grafana Secret in the `kustomize/monitoring/grafana-secret.yaml` file.
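+
+If you would rather not edit the upstream files directly, one option is a small Kustomize overlay that patches the Deployments before installing. The following is only a sketch: it assumes the Grafana Deployment is named `grafana`, so confirm the actual name in `deploy-grafana.yaml` and repeat the patch for the Prometheus and Alertmanager Deployments as needed:
+
+[source,yaml]
+----
+# kustomization.yaml for a hypothetical overlay directory next to kustomize/monitoring
+resources:
+  - ../monitoring
+patches:
+  - target:
+      kind: Deployment
+      name: grafana   # assumed name; check deploy-grafana.yaml
+    patch: |-
+      # JSON6902 patch that drops the fsGroup setting for OpenShift's restricted SCC
+      - op: remove
+        path: /spec/template/spec/securityContext/fsGroup
+----
+
+Applying the overlay directory with `kubectl apply -k` then takes the place of applying `kustomize/monitoring` directly.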
+
+=== Install
+
+Once the Kustomize project has been modified according to your specific needs, IVYO Monitoring can then be installed using `kubectl` and Kustomize:
+
+[source,shell]
+----
+kubectl apply -k kustomize/monitoring
+----
+
+=== Uninstall
+
+Similarly, once IVYO Monitoring has been installed, it can be uninstalled using `kubectl` and Kustomize:
+
+[source,shell]
+----
+kubectl delete -k kustomize/monitoring
+----
+
+=== Next Steps
+
+Now that we can monitor our cluster, let's explore how https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/connection-pooling.md[connection pooling] can be enabled using IVYO and how it is helpful.
+
+== Connection Pooling
+
+Connection pooling can be helpful for scaling and maintaining overall availability between your application and the database. IVYO helps facilitate this by supporting the https://www.pgbouncer.org/[PgBouncer] connection pooler and state manager.
+
+Let's look at how we can add a connection pooler and connect it to our application!
+
+=== Adding a Connection Pooler
+
+Let's look at how we can add a connection pooler using the `kustomize/keycloak` example in the https://github.com/IvorySQL/ivory-operator[Ivory Operator] repository examples folder.
+
+Connection poolers are added using the `spec.proxy` section of the custom resource. Currently, the only connection pooler supported is https://www.pgbouncer.org/[PgBouncer].
+
+The only required attribute for adding a PgBouncer connection pooler is to set the `spec.proxy.pgBouncer.image` attribute. In the `kustomize/keycloak/ivory.yaml` file, add the following YAML to the spec:
+
+[source,yaml]
+----
+proxy:
+  pgBouncer:
+    image: {{< param imageIvoryPGBouncer >}}
+----
+
+(You can also find an example of this in the `kustomize/examples/high-availability` example.)
+
+Save your changes and run:
+
+[source,shell]
+----
+kubectl apply -k kustomize/keycloak
+----
+
+IVYO will detect the change and create a new PgBouncer Deployment!
+
+That was fairly easy to set up, so now let's look at how we can connect our application to the connection pooler.
+
+=== Connecting to a Connection Pooler
+
+When a connection pooler is deployed to the cluster, IVYO adds additional information to the user Secrets to allow applications to connect directly to the connection pooler. Recall that in this example, our user Secret is called `keycloakdb-pguser-keycloakdb`. Describe the user Secret:
+
+[source,shell]
+----
+kubectl -n ivory-operator describe secrets keycloakdb-pguser-keycloakdb
+----
+
+You should see that there are several new attributes included in this Secret that allow you to connect to your Ivory instance via the connection pooler:
+
+- `pgbouncer-host`: The name of the host of the PgBouncer connection pooler. This references the https://kubernetes.io/docs/concepts/services-networking/service/[Service] of the PgBouncer connection pooler.
+- `pgbouncer-port`: The port that the PgBouncer connection pooler is listening on.
+- `pgbouncer-uri`: A https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING[PostgreSQL connection URI] that provides all the information for logging into the Ivory database via the PgBouncer connection pooler.
+- `pgbouncer-jdbc-uri`: A https://jdbc.postgresql.org/documentation/use/[PostgreSQL JDBC connection URI] that provides all the information for logging into the Ivory database via the PgBouncer connection pooler using the JDBC driver.
+ Note that by default, the connection string disable JDBC managing prepared transactions for + https://www.pgbouncer.org/faq.html#how-to-use-prepared-statements-with-transaction-pooling[optimal use with PgBouncer]. + +Open up the file in `kustomize/keycloak/keycloak.yaml`. Update the `DB_ADDR` and `DB_PORT` values to be the following: + +[source,yaml] +---- +- name: DB_ADDR + valueFrom: { secretKeyRef: { name: keycloakdb-pguser-keycloakdb, key: pgbouncer-host } } +- name: DB_PORT + valueFrom: { secretKeyRef: { name: keycloakdb-pguser-keycloakdb, key: pgbouncer-port } } +---- + +This changes Keycloak's configuration so that it will now connect through the connection pooler. + +Apply the changes: + +[source,shell] +---- +kubectl apply -k kustomize/keycloak +---- + +Kubernetes will detect the changes and begin to deploy a new Keycloak Pod. When it is completed, Keycloak will now be connected to Ivory via the PgBouncer connection pooler! + +=== TLS + +IVYO deploys every cluster and component over TLS. This includes the PgBouncer connection pooler. If you are using your own xref:./customize-cluster.md#customize-tls[custom TLS setup], you will need to provide a Secret reference for a TLS key / certificate pair for PgBouncer in `spec.proxy.pgBouncer.customTLSSecret`. + +Your TLS certificate for PgBouncer should have a Common Name (CN) setting that matches the PgBouncer Service name. This is the name of the cluster suffixed with `-pgbouncer`. For example, for our `hippo` cluster this would be `hippo-pgbouncer`. For the `keycloakdb` example, it would be `keycloakdb-pgbouncer`. + +To customize the TLS for PgBouncer, you will need to create a Secret in the Namespace of your Ivory cluster that contains the TLS key (`tls.key`), TLS certificate (`tls.crt`) and the CA certificate (`ca.crt`) to use. The Secret should contain the following values: + +[source,yaml] +---- +data: + ca.crt: + tls.crt: + tls.key: +---- + +For example, if you have files named `ca.crt`, `keycloakdb-pgbouncer.key`, and `keycloakdb-pgbouncer.crt` stored on your local machine, you could run the following command: + +[source,shell] +---- +kubectl create secret generic -n ivory-operator keycloakdb-pgbouncer.tls \ + --from-file=ca.crt=ca.crt \ + --from-file=tls.key=keycloakdb-pgbouncer.key \ + --from-file=tls.crt=keycloakdb-pgbouncer.crt +---- + +You can specify the custom TLS Secret in the `spec.proxy.pgBouncer.customTLSSecret.name` field in your `ivorycluster.ivory-operator.ivorysql.org` custom resource, e.g.: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + customTLSSecret: + name: keycloakdb-pgbouncer.tls +---- + +=== Customizing + +The PgBouncer connection pooler is highly customizable, both from a configuration and Kubernetes deployment standpoint. Let's explore some of the customizations that you can do! + +==== Configuration + +https://www.pgbouncer.org/config.html[PgBouncer configuration] can be customized through `spec.proxy.pgBouncer.config`. After making configuration changes, IVYO will roll them out to any PgBouncer instance and automatically issue a "reload". + +There are several ways you can customize the configuration: + +- `spec.proxy.pgBouncer.config.global`: Accepts key-value pairs that apply changes globally to PgBouncer. +- `spec.proxy.pgBouncer.config.databases`: Accepts key-value pairs that represent PgBouncer https://www.pgbouncer.org/config.html#section-databases[database definitions]. 
+- `spec.proxy.pgBouncer.config.users`: Accepts key-value pairs that represent https://www.pgbouncer.org/config.html#section-users[connection settings applied to specific users]. +- `spec.proxy.pgBouncer.config.files`: Accepts a list of files that are mounted in the `/etc/pgbouncer` directory and loaded before any other options are considered using PgBouncer's https://www.pgbouncer.org/config.html#include-directive[include directive]. + +For example, to set the connection pool mode to `transaction`, you would set the following configuration: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + config: + global: + pool_mode: transaction +---- + +For a reference on https://www.pgbouncer.org/config.html[PgBouncer configuration] please see: + +https://www.pgbouncer.org/config.html + +==== Replicas + +IVYO deploys one PgBouncer instance by default. You may want to run multiple PgBouncer instances to have some level of redundancy, though you still want to be mindful of how many connections are going to your Ivory database! + +You can manage the number of PgBouncer instances that are deployed through the `spec.proxy.pgBouncer.replicas` attribute. + +==== Resources + +You can manage the CPU and memory resources given to a PgBouncer instance through the `spec.proxy.pgBouncer.resources` attribute. The layout of `spec.proxy.pgBouncer.resources` should be familiar: it follows the same pattern as the standard Kubernetes structure for setting https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[container resources]. + +For example, let's say we want to set some CPU and memory limits on our PgBouncer instances. We could add the following configuration: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + resources: + limits: + cpu: 200m + memory: 128Mi +---- + +As IVYO deploys the PgBouncer instances using a https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[Deployment] these changes are rolled out using a rolling update to minimize disruption between your application and Ivory instances! + +==== Annotations / Labels + +You can apply custom annotations and labels to your PgBouncer instances through the `spec.proxy.pgBouncer.metadata.annotations` and `spec.proxy.pgBouncer.metadata.labels` attributes respectively. Note that any changes to either of these two attributes take precedence over any other custom labels you have added. + +==== Pod Anti-Affinity / Pod Affinity / Node Affinity + +You can control the https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity[pod anti-affinity, pod affinity, and node affinity] through the `spec.proxy.pgBouncer.affinity` attribute, specifically: + +- `spec.proxy.pgBouncer.affinity.nodeAffinity`: controls node affinity for the PgBouncer instances. +- `spec.proxy.pgBouncer.affinity.podAffinity`: controls Pod affinity for the PgBouncer instances. +- `spec.proxy.pgBouncer.affinity.podAntiAffinity`: controls Pod anti-affinity for the PgBouncer instances. + +Each of the above follows the https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity[standard Kubernetes specification for setting affinity]. 
+ +For example, to set a preferred Pod anti-affinity rule for the `kustomize/keycloak` example, you would want to add the following to your configuration: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/cluster: keycloakdb + ivory-operator.ivorysql.org/role: pgbouncer + topologyKey: kubernetes.io/hostname +---- + +==== Tolerations + +You can deploy PgBouncer instances to https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Nodes with Taints] by setting https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Tolerations] through `spec.proxy.pgBouncer.tolerations`. This attribute follows the Kubernetes standard tolerations layout. + +For example, if there were a set of Nodes with a Taint of `role=connection-poolers:NoSchedule` that you want to schedule your PgBouncer instances to, you could apply the following configuration: + +[source,yaml] +---- +spec: + proxy: + pgBouncer: + tolerations: + - effect: NoSchedule + key: role + operator: Equal + value: connection-poolers +---- + +Note that setting a toleration does not necessarily mean that the PgBouncer instances will be assigned to Nodes with those taints. Tolerations act as a *key*: they allow for you to access Nodes. If you want to ensure that your PgBouncer instances are deployed to specific nodes, you need to combine setting tolerations with node affinity. + +==== Pod Spread Constraints + +Besides using affinity, anti-affinity and tolerations, you can also set https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/[Topology Spread Constraints] through `spec.proxy.pgBouncer.topologySpreadConstraints`. This attribute follows the Kubernetes standard topology spread contraint layout. + +For example, since each of of our pgBouncer Pods will have the standard `ivory-operator.ivorysql.org/role: pgbouncer` Label set, we can use this Label when determining the `maxSkew`. In the example below, since we have 3 nodes with a `maxSkew` of 1 and we've set `whenUnsatisfiable` to `ScheduleAnyway`, we should ideally see 1 Pod on each of the nodes, but our Pods can be distributed less evenly if other constraints keep this from happening. + +[source,yaml] +---- + proxy: + pgBouncer: + replicas: 3 + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: my-node-label + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: + ivory-operator.ivorysql.org/role: pgbouncer +---- + +If you want to ensure that your PgBouncer instances are deployed more evenly (or not deployed at all), you need to update `whenUnsatisfiable` to `DoNotSchedule`. + +=== Next Steps + +Now that we can enable connection pooling in a cluster, Let's explore some https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/administrative-tasks.md[administrative tasks] such as manually restarting IvorySQL using IVYO. How do we do that? + +== Administrative Tasks + +=== Manually Restarting IvorySQL + +There are times when you might need to manually restart IvorySQL. This can be done by adding or updating a custom annotation to the cluster's `spec.metadata.annotations` section. IVYO will notice the change and perform a https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/high-availability.md#rolling-update[rolling restart]. 
+ +For example, if you have a cluster named `hippo` in the namespace `ivory-operator`, all you need to do is patch the hippo ivorycluster with the following: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"metadata":{"annotations":{"restarted":"'"$(date)"'"}}}}' +---- + +Watch your hippo cluster: you will see the rolling update has been triggered and the restart has begun. + +=== Shutdown + +You can shut down an Ivory cluster by setting the `spec.shutdown` attribute to `true`. You can do this by editing the manifest, or, in the case of the `hippo` cluster, executing a command like the below: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"shutdown": true}}' +---- + +The effect of this is that all the Kubernetes workloads for this cluster are +scaled to 0. You can verify this with the following command: + +[source,shell] +---- +kubectl get deploy,sts,cronjob --selector=ivory-operator.ivorysql.org/cluster=hippo -n ivory-operator + +NAME READY AGE +statefulset.apps/hippo-00-lwgx 0/0 1h + +NAME SCHEDULE SUSPEND ACTIVE +cronjob.batch/hippo-repo1-full @daily True 0 +---- + +To turn an Ivory cluster that is shut down back on, you can set `spec.shutdown` to `false`. + +=== Pausing Reconciliation and Rollout + +You can pause the Ivory cluster reconciliation process by setting the +`spec.paused` attribute to `true`. You can do this by editing the manifest, or, +in the case of the `hippo` cluster, executing a command like the below: + +[source,shell] +---- +kubectl patch ivorycluster/hippo -n ivory-operator --type merge \ + --patch '{"spec":{"paused": true}}' +---- + +Pausing a cluster will suspend any changes to the cluster's current state until +reconciliation is resumed. This allows you to fully control when changes to +the ivorycluster spec are rolled out to the Ivory cluster. While paused, +no statuses are updated other than the "Progressing" condition. + +To resume reconciliation of an Ivory cluster, you can either set `spec.paused` +to `false` or remove the setting from your manifest. + +=== Rotating TLS Certificates + +Credentials should be invalidated and replaced (rotated) as often as possible +to minimize the risk of their misuse. Unlike passwords, every TLS certificate +has an expiration, so replacing them is inevitable. + +In fact, IVYO automatically rotates the client certificates that it manages *before* +the expiration date on the certificate. A new client certificate will be generated +after 2/3rds of its working duration; so, for instance, a IVYO-created certificate +with an expiration date 12 months in the future will be replaced by IVYO around the +eight month mark. This is done so that you do not have to worry about running into +problems or interruptions of service with an expired certificate. + +==== Triggering a Certificate Rotation + +If you want to rotate a single client certificate, you can regenerate the certificate +of an existing cluster by deleting the `tls.key` field from its certificate Secret. + +Is it time to rotate your IVYO root certificate? All you need to do is delete the `ivyo-root-cacert` secret. IVYO will regenerate it and roll it out seamlessly, ensuring your apps continue communicating with the Ivory cluster without having to update any configuration or deal with any downtime. + +[source,bash] +---- +kubectl delete secret ivyo-root-cacert +---- + +[NOTE] +==== +IVYO only updates secrets containing the generated root certificate. 
It does not touch custom certificates. +==== + +==== Rotating Custom TLS Certificates + +When you use your own TLS certificates with IVYO, you are responsible for replacing them appropriately. +Here's how. + +IVYO automatically detects and loads changes to the contents of IvorySQL server +and replication Secrets without downtime. You or your certificate manager need +only replace the values in the Secret referenced by `spec.customTLSSecret`. + +If instead you change `spec.customTLSSecret` to refer to a new Secret or new fields, +IVYO will perform a https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/architecture/high-availability.md#rolling-update[rolling restart]. + +[IMPORTANT] +==== +When changing the IvorySQL certificate authority, make sure to update +https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/customize-cluster.md#customize-tls[`customReplicationTLSSecret`] as well. +==== + +=== Changing the Primary + +There may be times when you want to change the primary in your HA cluster. This can be done +using the `patroni.switchover` section of the ivorycluster spec. It allows +you to enable switchovers in your ivoryclusters, target a specific instance as the new +primary, and run a failover if your ivorycluster has entered a bad state. + +Let's go through the process of performing a switchover! + +First, you need to update your spec to prepare your cluster to change the primary. Edit your spec +to have the following fields: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true +---- + +After you apply this change, IVYO will be looking for the trigger to perform a switchover in your +cluster. You will trigger the switchover by adding the `ivory-operator.ivorysql.org/trigger-switchover` +annotation to your custom resource. The best way to set this annotation is +with a timestamp, so you know when you initiated the change. + +For example, for our `hippo` cluster, we can run the following command to trigger the switchover: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo \ + ivory-operator.ivorysql.org/trigger-switchover="$(date)" +---- + +[TIP] +==== +If you want to perform another switchover, you can re-run the annotation command and add the `--overwrite` flag: + +[source,shell] +---- +kubectl annotate -n ivory-operator ivorycluster hippo --overwrite \ + ivory-operator.ivorysql.org/trigger-switchover="$(date)" +---- +==== + +IVYO will detect this annotation and use the Patroni API to request a change to the current primary! + +The roles on your database instance Pods will start changing as Patroni works. The new primary +will have the `master` role label, and the old primary will be updated to `replica`. + +The status of the switchover will be tracked using the `status.patroni.switchover` field. This will be set +to the value defined in your trigger annotation. If you use a timestamp as the annotation, this is +another way to determine when the switchover was requested. + +After the instance Pod labels have been updated and `status.patroni.switchover` has been set, the +primary has been changed on your cluster! + +[IMPORTANT] +==== +After changing the primary, we recommend that you disable switchovers by setting `spec.patroni.switchover.enabled` +to false or removing the field from your spec entirely. If the field is removed, the corresponding +status will also be removed from the ivorycluster.
+==== + + +==== Targeting an Instance + +Another option you have when switching the primary is providing a target instance as the new +primary. This target instance will be used as the candidate when performing the switchover. +The `spec.patroni.switchover.targetInstance` field takes the name of the instance that you are switching to. + +This name can be found in a couple of different places; one is as the name of the StatefulSet and +another is on the database Pod as the `ivory-operator.ivorysql.org/instance` label. The +following commands can help you determine which instance is currently the primary and what name to use as the +`targetInstance`: + +[source,shell-session] +---- +$ kubectl get pods -l ivory-operator.ivorysql.org/cluster=hippo \ + -L ivory-operator.ivorysql.org/instance \ + -L ivory-operator.ivorysql.org/role -n ivory-operator + +NAME READY STATUS RESTARTS AGE INSTANCE ROLE +hippo-instance1-jdb5-0 3/3 Running 0 2m47s hippo-instance1-jdb5 master +hippo-instance1-wm5p-0 3/3 Running 0 2m47s hippo-instance1-wm5p replica +---- + +In our example cluster, `hippo-instance1-jdb5` is currently the primary, meaning we want to target +`hippo-instance1-wm5p` in the switchover. Now that you know which instance is currently the +primary and how to find your `targetInstance`, let's update your cluster spec: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true + targetInstance: hippo-instance1-wm5p +---- + +After applying this change, you will once again need to trigger the switchover by annotating the +ivorycluster (see above commands). You can verify the switchover has completed by checking the +Pod role labels and `status.patroni.switchover`. + +==== Failover + +Finally, we have the option to fail over when your cluster has entered an unhealthy state. The +only spec change necessary to accomplish this is updating the `spec.patroni.switchover.type` +field to the `Failover` type. Note that a `targetInstance` is required when +performing a failover. Based on the example cluster above, assuming `hippo-instance1-wm5p` is still +a replica, we can update the spec: + +[source,yaml] +---- +spec: + patroni: + switchover: + enabled: true + targetInstance: hippo-instance1-wm5p + type: Failover +---- + +Apply this spec change and your ivorycluster will be prepared to perform the failover. Again, +you will need to trigger the switchover by annotating the ivorycluster (see above commands) +and verify that the Pod role labels and `status.patroni.switchover` are updated accordingly. + +[WARNING] +==== +Errors encountered in the switchover process can leave your cluster in a bad +state. If you encounter issues, which can be found in the operator logs, you can update the spec to fix them +and apply the change. Once the change has been applied, IVYO will attempt to perform the +switchover again. +==== + +=== Next Steps + +We've covered a lot in terms of building, maintaining, scaling, customizing, restarting, and expanding our Ivory cluster. However, there may come a time when we need to https://github.com/IvorySQL/ivory-operator/blob/master/docs/content/tutorial/delete-cluster.md[delete our Ivory cluster]. How do we do that? + +== Delete an Ivory Cluster + +There comes a time when it is necessary to delete your cluster. If you have been following along with the example, you can delete your Ivory cluster by simply running: + +[source,shell] +---- +kubectl delete -k examples/kustomize/ivory +---- + +IVYO will remove all of the objects associated with your cluster.
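+ +If you have been following along with the `hippo` example, one simple way to confirm the teardown is to list the cluster's labeled workloads and volumes until nothing is returned. This is a generic Kubernetes check reusing the label selector shown earlier, not an IVYO-specific command: + +[source,shell] +---- +kubectl get deploy,sts,cronjob,pvc --selector=ivory-operator.ivorysql.org/cluster=hippo -n ivory-operator +----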
+ +With data retention, this is subject to the https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming[retention policy of your PVC]. For more information on how Kubernetes manages data retention, please refer to the https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming[Kubernetes docs on volume reclaiming]. diff --git a/EN/modules/ROOT/pages/master/4.6.3.adoc b/EN/modules/ROOT/pages/master/4.6.3.adoc new file mode 100644 index 0000000..f438c2d --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.6.3.adoc @@ -0,0 +1,192 @@ + +:sectnums: +:sectnumlevels: 5 + += Docker Swarm & Docker Compose Deploying IvorySQL High Availability Cluster + +Prepare three servers with network connectivity and set up a Swarm cluster. +The test cluster names and corresponding IP addresses are as follows: + +manager-node1: 192.168.21.205 + +manager-node2: 192.168.21.164 + +manager-node3: 192.168.21.51 + +``` +[root@manager-node1 docker-swarm]# docker node ls +ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION +y9d9wd9t2ncy4t9bvw6bg9sjs * manager-node1 Ready Active Reachable 26.1.4 +iv17o6m9t9e06vd9iu1o6damd manager-node2 Ready Active Leader 25.0.4 +vjnax76qj812mlvut6cv4qotl manager-node3 Ready Active Reachable 24.0.6 +``` + +== Building IvorySQL HA Cluster using Docker Swarm +Download the source code +``` +[root@manager-node1 ~]# git clone https://github.com/IvorySQL/docker_library.git +[root@manager-node1 ~]# cd docker_library/docker-cluster/docker-swarm +``` + +Deploy a three-node etcd cluster +``` +[root@manager-node1 docker-swarm]# docker stack deploy -c docker-swarm-etcd.yml ivoryhac-etcd +Creating network ivoryhac-etcd_etcd-net +Creating service ivoryhac-etcd_etcd3 +Creating service ivoryhac-etcd_etcd1 +Creating service ivoryhac-etcd_etcd2 +[root@manager-node1 docker-swarm]# docker service ls +ID NAME MODE REPLICAS IMAGE PORTS +1jst0mva8o5n ivoryhac-etcd_etcd1 replicated 1/1 quay.io/coreos/etcd:v3.5.8 *:2379-2380->2379-2380/tcp +sosag5017cis ivoryhac-etcd_etcd2 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +8twpgkzo2mnx ivoryhac-etcd_etcd3 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +``` +You can customize the external database directory by modifying the volumes section in docker-swarm-ivypatroni.yml. After making changes, adjust the directory permissions and ownership accordingly. Example as follows: +``` +mkdir -p /home/ivorysql/{data,patroni} +chown -R 1000:1000 /home/ivorysql/{data,patroni} +chmod 700 /home/ivorysql/{data,patroni} +``` + +Deploy an IvorySQL High Availability Cluster +``` +[root@manager-node1 docker-swarm]# docker stack deploy -c docker-swarm-ivypatroni.yml ivoryhac-patroni +Since --detach=false was not specified, tasks will be created in the background. +In a future release, --detach=false will become the default. 
+Creating service ivoryhac-patroni_ivypatroni1 +Creating service ivoryhac-patroni_ivypatroni2 +[root@manager-node1 docker-swarm]# docker service ls +ID NAME MODE REPLICAS IMAGE PORTS +1jst0mva8o5n ivoryhac-etcd_etcd1 replicated 1/1 quay.io/coreos/etcd:v3.5.8 *:2379-2380->2379-2380/tcp +sosag5017cis ivoryhac-etcd_etcd2 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +8twpgkzo2mnx ivoryhac-etcd_etcd3 replicated 1/1 quay.io/coreos/etcd:v3.5.8 +uzdvjq5j2gwt ivorysql-hac_ivypatroni1 replicated 1/1 ivorysql/docker-swarm-ha-cluster:5.0-4.0.6-ubi8 *:1521->1521/tcp, *:5866->5866/tcp +fr0m9chu3ce8 ivorysql-hac_ivypatroni2 replicated 1/1 ivorysql/docker-swarm-ha-cluster:5.0-4.0.6-ubi8 *:1522->1521/tcp, *:5867->5866/tcp +``` + +Connect to the database using psql via Oracle port and PostgreSQL port +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p1521 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# exit +``` +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p5432 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) +``` + +Uninstall the IvorySQL cluster +``` +[root@manager-node1 ~] docker stack rm ivoryhac-patroni +[root@manager-node1 ~] docker stack rm ivoryhac-etcd +``` + +== Set up an IvorySQL HA Cluster using Docker Compose + +Download the source code +``` +[root@manager-node1 ~]# git clone https://github.com/IvorySQL/docker_library.git +[root@manager-node1 ~]# cd docker_library/docker-cluster/docker-compose +``` +Copy files to another server + +Copy the etcd and ivypatroni Docker Compose files to other servers respectively. + +For example, to the test server: + +192.168.21.205 will host etcd1+ivorypatroni1, + +192.168.21.164 will host etcd2+ivorypatroni2, + +192.168.21.51 will host etcd3+ivorypatroni3 + +Deploy a three-node etcd cluster, taking node1 as an example +``` +[root@manager-node1 docker-compose]# docker-compose -f ./docker-compose-etcd1.yml up -d +[+] Running 1/1 + ✔ Container etcd Started 0.1s + +``` + +Deploy an IvorySQL high-availability cluster. + +Deploy the ivyhac service on each node, using node1 as an example. +``` +[root@manager-node1 docker-compose]# docker-compose -f ./docker-compose-ivypatroni_1.yml up -d +[+] Running 1/1 + ✔ Container ivyhac1 Started 0.1s +[root@manager-node1 docker-compose]# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +736c0d188bdd ivorysql/docker-compose-ha-cluster:5.0-4.0.6-ubi8 "/bin/sh /docker-ent…" 18 seconds ago Up 17 seconds ivyhac1 +9d8e04e4f819 quay.io/coreos/etcd:v3.5.8 "/usr/local/bin/etcd" 24 minutes ago Up 24 minutes etcd + +``` + +At this point, the one-primary-two-standby cluster setup is complete. +Connect to the database using psql via Oracle-compatible ports and PostgreSQL ports. 
+``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p1521 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + oracle +(1 row) + +ivorysql=# exit +``` +``` +[root@manager-node1 docker-swarm]# psql -h127.0.0.1 -p5432 -U ivorysql -d ivorysql +Password for user ivorysql: + +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# show ivorysql.compatible_mode; + ivorysql.compatible_mode +-------------------------- + pg +(1 row) + +``` + +Uninstall the IvorySQL cluster, using node1 as an example. +``` +[root@manager-node1 ~] docker-compose -f ./docker-compose-ivypatroni_1.yml down +[root@manager-node1 ~] docker-compose -f ./docker-compose-etcd1.yml down +``` \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.6.4.adoc b/EN/modules/ROOT/pages/master/4.6.4.adoc new file mode 100644 index 0000000..d3255e3 --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.6.4.adoc @@ -0,0 +1,71 @@ + +:sectnums: +:sectnumlevels: 5 + += Docker & Podman deployment IvorySQL + +== Running IvorySQL in docker + +** Get IvorySQL image from Docker Hub +``` +$ docker pull ivorysql/ivorysql:5.0-ubi8 +``` + +** Running IvorySQL +``` +$ docker run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=your_password -d ivorysql/ivorysql:5.0-ubi8 +``` + +** Check if the IvorySQL container is running successfully +``` +$ docker ps | grep ivorysql +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +6faa2d0ed705 ivorysql:5.0-ubi8 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5866/tcp, 0.0.0.0:5434->5432/tcp ivorysql +``` + +== Running with Podman + +** Pull IvorySQL Image from Docker Hub +``` +[highgo@manager-node1 ~]$ podman pull ivorysql/ivorysql:5.0-ubi8 +✔ docker.io/ivorysql/ivorysql:5.0-ubi8 +Trying to pull docker.io/ivorysql/ivorysql:5.0-ubi8... 
+Getting image source signatures +Copying blob 5885448c5c88 done | +Copying blob 6c502b378234 done | +Copying blob 8b4f2b90d6b6 done | +Copying blob 9b000f2935f6 done | +Copying blob 806f782da874 done | +Copying blob e4c51845a9eb done | +Copying blob dcb1e9a04275 done | +Copying blob 285a279173f8 done | +Copying blob 1f6f247b9ae0 done | +Copying blob 3cc81bed8614 done | +Copying blob 863c87bf25eb done | +Copying blob 4f4fb700ef54 done | +Copying config 88e1bbeda8 done | +Writing manifest to image destination +88e1bbeda81c51d88e12cbd2b19730498f1343d1c64bb3dddc8ffcb08a1f965f +``` + +** Run IvorySQL Container +``` +$ podman run --name ivorysql -p 5434:5432 -e IVORYSQL_PASSWORD=123456 -d ivorysql/ivorysql:5.0-ubi8 +``` + +** Check if IvorySQL Container is Running Successfully +``` +[highgo@manager-node1 ~]$ podman ps | grep ivorysql +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +368dee58d5ef docker.io/ivorysql/ivorysql:5.0-ubi8 postgres 20 seconds ago Up 20 seconds 0.0.0.0:5434->5432/tcp, 1521/tcp, 5866/tcp ivorysql + +[highgo@manager-node1 ~]$ podman exec -it ivorysql /bin/bash +[root@8cc631eb413d /]# +ivorysql=# select version(); + version +------------------------------------------------------------------------------------------------------------------------ + PostgreSQL 18.0 (IvorySQL 5.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-28), 64-bit +(1 row) + +ivorysql=# +``` \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.7.1.adoc b/EN/modules/ROOT/pages/master/4.7.1.adoc new file mode 100644 index 0000000..cd9d9b1 --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.7.1.adoc @@ -0,0 +1,383 @@ +:sectnums: +:sectnumlevels: 5 +:imagesdir: ./_images + += Installation Guide + +The IvorySQL Cloud platform is a comprehensive solution that integrates the IvorySQL database and its surrounding ecosystem to deliver end-to-end database and resource management capabilities. Before starting the installation, compile and install the following projects from GitHub: + +Frontend: https://github.com/IvorySQL/ivory-cloud-web + +Backend: https://github.com/IvorySQL/ivory-cloud + +Prepare a Kubernetes cluster (version 1.23) and install `ivory-operator` on the master node: + +https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE + +== IvorySQL Cloud Platform Installation + +The IvorySQL Cloud platform currently supports installation on Linux systems. The required packages are listed below. + +[width="99%",cols="<28%,<72%",options="header"] +|=== +|Component |Package +|Frontend |dist +|Backend |cloudnative-1.0-SNAPSHOT.jar +|K8S cluster a| +[arabic] +. docker.io/ivorysql/ivory-operator:v5.0 +. docker.io/ivorysql/pgadmin:ubi8-9.9-5.0-1 +. docker.io/ivorysql/pgbackrest:ubi8-2.56.0-5.0-1 +. docker.io/ivorysql/postgres-exporter:ubi8-0.17.0-5.0-1 +. docker.io/ivorysql/ivorysql:ubi8-5.0-5.0-1 +|=== + +In addition, install the following supporting components: + +* *Backend database*: Stores and manages all data related to cloud resources, user information, access control, billing, and more. Use a PostgreSQL-compatible database such as PostgreSQL, HighGo DB, or IvorySQL. +* *NGINX*: Hosts the web user interface of the cloud platform. + +== Pre-installation Checklist + +Complete the preparation steps on every server before installation. Deploy IvorySQL Cloud on a Kubernetes (1.23) cluster that already has a default storage class. + +=== Disable the firewall + +Disable the firewall on every server to ensure full connectivity. 
+ +[literal] +---- +systemctl stop firewalld.service + +systemctl disable firewalld.service +---- + +=== Backend deployment + +[[backend-db]] +==== Backend database + +Install the backend database yourself by following the instructions on the IvorySQL official website. + +==== Backend services + +===== Compile the backend service + +[literal] +---- +# Clone the code + +git clone https://github.com/IvorySQL/ivory-cloud.git + +# Go to the project root + +cd ivory-cloud +---- + +Ensure that every `.sh` file in `ivory-cloud/cloudnative/src/main/resources/monitor` and its subdirectories uses Unix line endings. If not, run `dos2unix` to convert them. + +[literal] +---- +dos2unix cloudnative/src/main/resources/monitor/* + +# Build + +mvn clean + +mvn package -D maven.test.skip=true + +The packaged artifact `cloudnative-1.0-SNAPSHOT.jar` can be found under `ivory-cloud/cloudnative/target`. +---- + +===== Deploy the service + +[literal] +---- +Execute the following steps on the Kubernetes server: + +# Create a working directory + +mkdir -p /home/ivory + +# Upload `ivory-cloud/cloudnative/target/cloudnative-1.0-SNAPSHOT.jar` to the directory created above + +# Configuration files + +## Create a configuration directory + +mkdir -p /home/ivory/config + +## Upload configuration files + +Copy the following files from `ivory-cloud/cloudnative/src/main/resources` to `/home/ivory/config`: + +application.yaml + +application-native.yaml + +spring_pro_logback.xml + +## Update the configuration + +Replace `url`, `username`, and `password` with the database information configured in <<backend-db>>. + +## /home/ivory/config/application-native.yaml + +datasource: + +druid: + +db-type: com.alibaba.druid.pool.DruidDataSource + +driver-class-name: org.postgresql.Driver + +url: jdbc:postgresql://127.0.0.1:5432/ivorysql + +username: ivorysql + +password: "ivory@123" +---- + +==== Start the backend service + +[literal] +---- +# Install JDK 1.8 + +yum install -y java-1.8.0-openjdk.x86_64 + +[root@cloud ivory]# pwd + +/home/ivory/ + +[root@cloud ivory]# nohup java -jar cloudnative-1.0-SNAPSHOT.jar > log_native 2>&1 & + +[root@cloud ivory]# ps -ef | grep java + +root 77494 1 0 Oct09 ? 00:03:07 java -jar cloudnative-1.0-SNAPSHOT.jar +---- + +=== Frontend deployment + +==== Compile the frontend + +[literal] +---- +## Fetch the code + +git clone https://github.com/IvorySQL/ivory-cloud-web.git + +## Go to the project root + +cd ivory-cloud-web + +## Install dependencies + +npm install + +## Build for production + +npm run build:prod +---- + +==== Update directory and file permissions + +[literal] +---- +# Create a deployment directory + +[root@cloud opt]# mkdir -p /opt/cloud/web + +# Copy the generated `dist` folder to /opt/cloud/web + +# Grant permissions + +[root@cloud web]# chmod 755 /opt/cloud/web/dist + +[root@cloud web]# chmod -R 777 /opt/cloud/web/dist +---- + +==== Update config.js + +Edit the configuration file: + +[literal] +---- +[root@cloud dist]# pwd + +/opt/cloud/web/dist + +[root@cloud dist]# vi config.js + +var PLATFROM_CONFIG = {}; + +// Replace with the IP address of the current server + +PLATFROM_CONFIG.baseUrl = "http://192.168.31.43:8081/cloudapi/api/v1" + +// true: show the “Register” button on the login page + +// false: hide the “Register” button on the login page + +globalShowRegister = true + +// Hide the cloud-native database?
true: hide; false: show + +disableNative = false + +// Database type + +dbtype = "IvorySQL" + +dbversion = "5.0" +---- + +=== Install and configure NGINX + +The IvorySQL Cloud host must have NGINX installed to serve the web interface. Users can select any installation method; the following steps are provided for reference. + +==== Download the NGINX source package + +[literal] +---- +[root@cloud web]# wget https://nginx.org/download/nginx-1.20.1.tar.gz + +[root@cloud web]# ls -lrt + +total 3924 + +-rwxrwxr-x. 1 root root 1061461 May 25 2021 nginx-1.20.1.tar.gz + +-rwxrwxr-x. 1 root root 2943732 Oct 9 16:43 dist.tar.gz + +drwxrwxrwx. 4 root root 103 Oct 21 13:20 dist +---- + +==== Install dependencies + +[literal] +---- +[root@host30 cloud]# yum -y install pcre-devel + +[root@host30 cloud]# yum -y install openssl openssl-devel +---- + +==== Build and install NGINX + +NGINX is installed under the directory specified by `--prefix` during `configure`. The example below installs it to `/opt/cloud/nginx`. + +[literal] +---- +## Extract nginx-1.20.1.tar.gz + +[root@cloud web]# tar -zxvf nginx-1.20.1.tar.gz + +## Verify that nginx-1.20.1 was created + +[root@cloud web]# ls -lrt + +total 3924 + +-rwxrwxr-x. 1 root root 1061461 May 25 2021 nginx-1.20.1.tar.gz + +-rwxrwxr-x. 1 root root 2943732 Oct 9 16:43 dist.tar.gz + +drwxrwxr-x. 9 1001 1001 186 Oct 9 16:53 nginx-1.20.1 + +drwxrwxrwx. 4 root root 103 Oct 21 13:20 dist + +## Configure + +[root@cloud web]# cd nginx-1.20.1 + +[root@cloud nginx-1.20.1]# ./configure --prefix=/opt/cloud/nginx --with-http_ssl_module + +## Compile and install + +[root@cloud nginx-1.20.1]# make + +[root@cloud nginx-1.20.1]# make install +---- + +==== Update nginx.conf + +The configuration file is stored under `/opt/cloud/nginx`. Adjust it according to the README on GitHub, and replace the IP with the address of the current server. + +[literal] +---- +server { + +listen 9104; + +server_name 192.168.31.43; + +location / { + +root /opt/cloud/web/dist; + +index index.html index.htm; + +} + +error_page 500 502 503 504 /50x.html; + +location = /50x.html { + +root html; + +} + +} +---- + +==== Start NGINX + +[literal] +---- +[root@cloud sbin]# pwd + +/opt/cloud/nginx/sbin + +[root@cloud sbin]# ./nginx -c /opt/cloud/nginx/conf/nginx.conf + +[root@cloud sbin]# ps -ef | grep nginx + +root 2179 131037 0 09:46 pts/1 00:00:00 grep --color=auto nginx + +root 55047 1 0 Oct21 ? 00:00:00 nginx: master process ./nginx -c /opt/cloud/nginx/conf/nginx.conf + +nobody 55048 55047 0 Oct21 ? 00:00:00 nginx: worker process +---- + +=== Operator deployment + +Set up the Kubernetes cluster yourself. This section describes how to install `ivory-operator` on the cluster and preload container images. + +==== Install ivory-operator + +Refer to the README on GitHub: + +https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE[https://github.com/IvorySQL/ivory-operator/tree/IVYO_REL_5_STABLE] + +==== Load container images + +If your servers have direct access to Docker Hub, you can skip this step. Otherwise, preload the following images on every node in the Kubernetes cluster. 
+ +[literal] +---- +docker.io/ivorysql/pgadmin:ubi8-9.9-5.0-1 + +docker.io/ivorysql/pgbackrest:ubi8-2.56.0-5.0-1 + +docker.io/ivorysql/pgbouncer:ubi8-1.23.0-5.0-1 + +docker.io/ivorysql/postgres-exporter:ubi8-0.17.0-5.0-1 + +docker.io/ivorysql/ivorysql:ubi8-5.0-5.0-1 + +docker.io/prom/prometheus:v2.33.5 + +docker.io/prom/alertmanager:v0.22.2 + +docker.io/grafana/grafana:8.5.10 +---- \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.7.2.adoc b/EN/modules/ROOT/pages/master/4.7.2.adoc new file mode 100644 index 0000000..7215f53 --- /dev/null +++ b/EN/modules/ROOT/pages/master/4.7.2.adoc @@ -0,0 +1,234 @@ +:sectnums: +:sectnumlevels: 5 +:imagesdir: ../../images + += User Guide + +IvorySQL Cloud is a web-based service platform that can be accessed from any computer through a browser. After installing the cloud service platform on the server with IP `192.168.31.43`, open a browser and enter `http://192.168.31.43:9104/` (9104 is the port configured in `nginx.conf.default`) to reach the login page: + +image::media/image3.png[image3,width=274,height=355] + +== Sign In and Sign Out + +=== Sign In + +On the login page, enter the prompted information to access the IvorySQL Cloud service platform: + +image::media/image4.png[image4,width=552,height=272] + +=== Sign Out + +Click the avatar in the upper-right corner to display the current username and the **Log Out** action. Click **Log Out** to exit; click the username to stay on the current page: + +image::media/image5.png[image5,width=552,height=62] + +== Administrator Features + +=== Add a Cluster + +[arabic] +. After signing in as the `admin` user, click **K8S Cluster Management** in the left navigation bar to open the cluster list. + +image::media/image6.png[image6,width=601,height=91] + +[arabic, start=2] +. Click **Add Kubernetes Cluster** in the upper-left corner, fill in the cluster information, and submit. + +image::media/image7.png[image7,width=333,height=291] + +=== Manage Clusters + +The cluster management page lists details for each cluster and allows you to edit or delete them. + +image::media/image8.png[image8,width=553,height=82] + +== demo User Features + +=== Database Subscription + +[arabic] +. Sign in with the `demo` user. +. Click **Database Subscription** in the left navigation, fill in the database parameters, and click **Next: Confirm**. + +image::media/image9.png[image9,width=552,height=272] + +[arabic, start=3] +. Review the information and click **Confirm**. + +image::media/image10.png[image10,width=552,height=272] + +[arabic, start=4] +. After confirming, the page automatically redirects to **Database Management** to show the subscription task. + +image::media/image11.png[image11,width=552,height=77] + +image::media/image12.png[image12,width=552,height=79] + +=== Database Management + +Displays all databases managed by the cloud service platform. + +image::media/image12.png[image12,width=552,height=79] + +=== Restart a Database + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Management**, select a database, click **More** in the **Actions** column, and choose **Restart**. + +image::media/image13.png[image13,width=79,height=286] + +[arabic, start=3] +. Review the information and click **Confirm**. + +image::media/image14.png[image14,width=553,height=210] + +=== Change the Password + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Management**, select a database, and click its **Instance ID**. + +image::media/image15.png[image15,width=553,height=48] + +[arabic, start=3] +. 
On the database details page, click the password icon. + +image::media/image17.png[image17,width=553,height=173] + +[arabic, start=4] +. Enter a new password and click **Confirm**. + +image::media/image18.png[image18,width=553,height=352] + +=== Delete an Instance + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Management**, select a database, click **More** in the **Actions** column, and choose **Delete Instance**. + +image::media/image19.png[image19,width=55,height=201] + +[arabic, start=3] +. Review the confirmation window and click **Confirm**. + +image::media/image20.png[image20,width=552,height=207] + +=== Storage Expansion + +[arabic] +. This feature requires additional plug-ins such as TopoLVM. +. Sign in with the `demo` user. +. Click **Storage Expansion**, select a database, then click **Edit** in the **Actions** column; alternatively, go to **Database Management**, click **More**, and choose **Storage Expansion**. + +image::media/image21.png[image21,width=552,height=197] + +image::media/image22.png[image22,width=63,height=200] + +[arabic, start=4] +. Enter the expanded storage size and click **Confirm**. + +image::media/image23.png[image23,width=553,height=242] + +=== Specification Change + +[arabic] +. Sign in with the `demo` user. +. Click **Specification Change**, select a database, then click **Edit** in the **Actions** column; or go to **Database Management**, click **More**, and choose **Specification Change**. + +image::media/image24.png[image24,width=552,height=196] + +image::media/image25.png[image25,width=59,height=205] + +[arabic, start=3] +. Enter the new specification and click **Confirm**. + +image::media/image26.png[image26,width=552,height=240] + +=== Database Backup + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Backup**, select a database, and click **Backup** in the **Actions** column; or go to **Database Management**, click **More**, and choose **Backup**. + +image::media/image27.png[image27,width=552,height=197] + +image::media/image28.png[image28,width=64,height=199] + +[arabic, start=3] +. Enter a backup name and click **Confirm**. + +image::media/image29.png[image29,width=552,height=285] + +=== Database Restore + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Restore**, select a database, and click **View** in the **Actions** column; or go to **Database Management**, click **More**, and choose **Restore**. + +image::media/image30.png[image30,width=552,height=196] + +image::media/image31.png[image31,width=58,height=201] + +[arabic, start=3] +. Select the backup file and click **Restore** in the **Actions** column. + +image::media/image32.png[image32,width=552,height=304] + +image::media/image33.png[image33,width=552,height=305] + +[arabic, start=4] +. Enter the target database information. Use the database password from before the backup. + +image::media/image34.png[image34,width=552,height=246] + +[arabic, start=5] +. Continue following the workflow described in “4.1 Database Subscription”. + +=== Database Monitoring + +[arabic] +. Sign in with the `demo` user. +. Click **Monitoring Tools** in the left navigation and select the cluster where the database resides; or go to **Database Management**, select a database, click **More**, and choose **Monitoring**. + +image::media/image35.png[image35,width=552,height=261] + +image::media/image36.jpeg[image36,width=65,height=215] + +[arabic, start=3] +. 
After the monitoring stack is created, repeat step (2) to open the monitoring page, then sign in with `admin/admin` and click **Login**. + +____ +image::media/image37.png[image37,width=552,height=272] +____ + +[arabic, start=4] +. Click the magnifying glass icon to view monitoring metrics. + +image::media/image39.png[image39,width=553,height=264] + +image::media/image40.png[image40,width=552,height=261] + +image::media/image41.png[image41,width=553,height=261] + +image::media/image42.png[image42,width=552,height=72] + +image::media/image43.png[image43,width=552,height=259] + +image::media/image44.png[image44,width=552,height=280] + +=== Visual Login Tool + +[arabic] +. Sign in with the `demo` user. +. Go to **Database Management**, select a database, click **More** in the **Actions** column, and choose **Login**. + +image::media/image45.jpeg[image45,width=65,height=205] + +[arabic, start=3] +. On the new page, enter the database account `sysdba@ivyo.com` and the database password, then click **Login**. + +image::media/image46.png[image46,width=383,height=263] + +[arabic, start=4] +. Once the connection is established, you can operate on the database. \ No newline at end of file diff --git a/EN/modules/ROOT/pages/master/4.7.adoc b/EN/modules/ROOT/pages/master/4.7.adoc new file mode 100644 index 0000000..e69de29 diff --git a/EN/modules/ROOT/pages/master/5.0.adoc b/EN/modules/ROOT/pages/master/5.0.adoc index 5687a32..1d51e2f 100644 --- a/EN/modules/ROOT/pages/master/5.0.adoc +++ b/EN/modules/ROOT/pages/master/5.0.adoc @@ -9,19 +9,19 @@ IvorySQL, as an advanced open-source database compatible with Oracle and based o + -[cols="2,1,3,3"] +[cols="1,2,1,3,3"] |==== -|*Plugin Name*|*Version*|*Function Description*|*Use Cases* -| xref:master/5.1.adoc[postgis] | 3.5.4 | Provides geospatial data support for IvorySQL, including spatial indexes, spatial functions, and geographic object storage | Geographic Information Systems (GIS), map services, location data analysis -| xref:master/5.2.adoc[pgvector] | 0.8.1 | Supports vector similarity search, can be used to store and retrieve high-dimensional vector data| AI applications, image retrieval, recommendation systems, semantic search -| xref:master/5.3.adoc[pgddl (DDL Extractor)] | 0.31 | Extracts DDL (Data Definition Language) statements from databases, facilitating version management and migration | Database version control, CI/CD integration, structure comparison and synchronization -| xref:master/5.4.adoc[pg_cron]​ | 1.6.0 | Provides database-internal scheduled task scheduling functionality, supports regular SQL statement execution | Data cleanup, regular statistics, automated maintenance tasks -| xref:master/5.5.adoc[pgsql-http]​ | 1.7.0 | Allows HTTP requests to be initiated in SQL, interacting with external web services | Data collection, API integration, microservice calls -| xref:master/5.6.adoc[plpgsql_check] | 2.8 | Provides static analysis functionality for PL/pgSQL code, can detect potential errors during development | Stored procedure development, code quality checking, debugging and optimization -| xref:master/5.7.adoc[pgroonga] | 4.0.4 | Provides full-text search functionality for non-English languages, meeting the needs of high-performance applications | Full-text search capabilities for languages like Chinese, Japanese, and Korean -| xref:master/5.8.adoc[pgaudit] | 18.0 | Provides fine-grained auditing, recording database operation logs to support security auditing and compliance checks | Database security auditing, compliance checks, 
audit report generation -| xref:master/5.9.adoc[pgrouting] | 3.8.0 | Provides routing computation for geospatial data, supporting multiple algorithms and data formats | Geospatial analysis, route planning, logistics optimization -| xref:master/5.10.adoc[system_stats] | 3.2 | Provide functions for accessing system-level statistics. | system monitor +|*Index*|*Plugin Name*|*Version*|*Function Description*|*Use Cases* +|*1*| xref:master/5.1.adoc[postgis] | 3.5.4 | Provides geospatial data support for IvorySQL, including spatial indexes, spatial functions, and geographic object storage | Geographic Information Systems (GIS), map services, location data analysis +|*2*| xref:master/5.2.adoc[pgvector] | 0.8.1 | Supports vector similarity search, can be used to store and retrieve high-dimensional vector data| AI applications, image retrieval, recommendation systems, semantic search +|*3*| xref:master/5.3.adoc[pgddl (DDL Extractor)] | 0.31 | Extracts DDL (Data Definition Language) statements from databases, facilitating version management and migration | Database version control, CI/CD integration, structure comparison and synchronization +|*4*| xref:master/5.4.adoc[pg_cron]​ | 1.6.0 | Provides database-internal scheduled task scheduling functionality, supports regular SQL statement execution | Data cleanup, regular statistics, automated maintenance tasks +|*5*| xref:master/5.5.adoc[pgsql-http]​ | 1.7.0 | Allows HTTP requests to be initiated in SQL, interacting with external web services | Data collection, API integration, microservice calls +|*6*| xref:master/5.6.adoc[plpgsql_check] | 2.8 | Provides static analysis functionality for PL/pgSQL code, can detect potential errors during development | Stored procedure development, code quality checking, debugging and optimization +|*7*| xref:master/5.7.adoc[pgroonga] | 4.0.4 | Provides full-text search functionality for non-English languages, meeting the needs of high-performance applications | Full-text search capabilities for languages like Chinese, Japanese, and Korean +|*8*| xref:master/5.8.adoc[pgaudit] | 18.0 | Provides fine-grained auditing, recording database operation logs to support security auditing and compliance checks | Database security auditing, compliance checks, audit report generation +|*9*| xref:master/5.9.adoc[pgrouting] | 3.8.0 | Provides routing computation for geospatial data, supporting multiple algorithms and data formats | Geospatial analysis, route planning, logistics optimization +|*10*| xref:master/5.10.adoc[system_stats] | 3.2 | Provide functions for accessing system-level statistics. | system monitor |==== These plugins have all been tested and adapted by the IvorySQL team to ensure stable operation in the IvorySQL environment. Users can select appropriate plugins based on business needs to further enhance the capabilities and flexibility of the database system. diff --git a/EN/modules/ROOT/pages/master/5.1.adoc b/EN/modules/ROOT/pages/master/5.1.adoc index be2d836..e458040 100644 --- a/EN/modules/ROOT/pages/master/5.1.adoc +++ b/EN/modules/ROOT/pages/master/5.1.adoc @@ -41,7 +41,7 @@ sudo apt install \ $ wget https://download.osgeo.org/postgis/source/postgis-3.5.4.tar.gz $ tar xvf postgis-3.5.4.tar.gz $ cd postgis-3.5.4 -$ ./configure --with-pgconfig=/path/to/pg_config eg: /opt/IvorySQL-5/bin/pg_config, if ivorysql installation directory is /opt/IvorySQL-5. +$ ./configure --with-pgconfig=/path/to/pg_config eg: /usr/ivory-5/bin/pg_config, if ivorysql installation directory is /usr/ivory-5. 
$ make $ sudo make install ---- diff --git a/EN/modules/ROOT/pages/master/5.2.adoc b/EN/modules/ROOT/pages/master/5.2.adoc index 3d9a162..ae51355 100644 --- a/EN/modules/ROOT/pages/master/5.2.adoc +++ b/EN/modules/ROOT/pages/master/5.2.adoc @@ -59,7 +59,7 @@ sudo --preserve-env=PG_CONFIG make install + [literal] ---- -[ivorysql@localhost ivorysql-4]$ psql +[ivorysql@localhost ivorysql-5]$ psql psql (18.0) Type "help" for help. @@ -119,7 +119,7 @@ NOTICE: [4,5,6] CALL ---- -==== FUNCTION +=== FUNCTION [literal] ---- ivorysql=# CREATE OR REPLACE FUNCTION AddVector(a vector(3), b vector(3)) diff --git a/EN/modules/ROOT/pages/master/6.3.12.adoc b/EN/modules/ROOT/pages/master/6.3.12.adoc index 0b73b26..6ee5244 100644 --- a/EN/modules/ROOT/pages/master/6.3.12.adoc +++ b/EN/modules/ROOT/pages/master/6.3.12.adoc @@ -136,7 +136,7 @@ typedef enum IvyStmtType { IVY_STMT_UNKNOW, IVY_STMT_DO, - IVY_STMT_DOFROMCALL, /* new statementt ype */ + IVY_STMT_DOFROMCALL, /* new statementt type */ IVY_STMT_DOHANDLED, IVY_STMT_OTHERS } IvyStmtType; diff --git a/EN/modules/ROOT/pages/master/6.3.9.adoc b/EN/modules/ROOT/pages/master/6.3.9.adoc index 3bba4ea..e27a633 100644 --- a/EN/modules/ROOT/pages/master/6.3.9.adoc +++ b/EN/modules/ROOT/pages/master/6.3.9.adoc @@ -11,7 +11,7 @@ To meet Oracle's quoted identifier case compatibility requirements, IvorySQL has == Implementation Details -If the parameter `-C` is appended during database initialization, with values of `normal/interchange/lowercase`, the `Intidb.c->main()` function in the code will process this parameter and set the global variable `caseswitchmode` according to the parameter value. Then the `initdb` command will start a `postgres` process in `-boot` mode to set up the `template1` template database, while passing the parameter `-C ivorysql.identifier_case_switch=caseswitchmode` to the new process. +If the parameter `-C` is appended during database initialization, with values of `normal/interchange/lowercase`, the `Initdb.c->main()` function in the code will process this parameter and set the global variable `caseswitchmode` according to the parameter value. Then the `initdb` command will start a `postgres` process in `-boot` mode to set up the `template1` template database, while passing the parameter `-C ivorysql.identifier_case_switch=caseswitchmode` to the new process. This newly started backend process will write the `identifier_case_switch` information to the `pg_control` file through the following code: diff --git a/EN/modules/ROOT/pages/master/7.4.adoc b/EN/modules/ROOT/pages/master/7.4.adoc index 48e0030..187468e 100644 --- a/EN/modules/ROOT/pages/master/7.4.adoc +++ b/EN/modules/ROOT/pages/master/7.4.adoc @@ -15,7 +15,7 @@ ==== == Function -- Initdb -m initialization requires judgment of different modes, among which oracle mode requires the execution of SQL statements postgres_oracle.bki. The default is Oracle compatibility mode, and the process is as follows: +- initdb -m initialization requires judgment of different modes, among which oracle mode requires the execution of SQL statements postgres_oracle.bki. 
The default is Oracle compatibility mode, and the process is as follows: - Startup: When starting, it determines whether it is an Oracle compatibility mode based on the initialization mode; diff --git a/EN/modules/ROOT/pages/master/7.5.adoc b/EN/modules/ROOT/pages/master/7.5.adoc index 4018f9d..437cb52 100644 --- a/EN/modules/ROOT/pages/master/7.5.adoc +++ b/EN/modules/ROOT/pages/master/7.5.adoc @@ -13,7 +13,7 @@ |==== |Database name|Like fuzzy queries |oracle|oracle's string type is varchar2, which supports fuzzy queries using the Like keyword with wildcards for columns of number, date, and string field types -|IvorySQL|The basic type of IvorySQL's string is text, so like is based on text, and other IvorySQL types can be implicitly converted to text, so that they can be automatically converted without creating opeartor +|IvorySQL|The basic type of IvorySQL's string is text, so like is based on text, and other IvorySQL types can be implicitly converted to text, so that they can be automatically converted without creating operator |==== == Test cases diff --git a/EN/modules/ROOT/pages/master/7.8.adoc b/EN/modules/ROOT/pages/master/7.8.adoc index d4f2f6a..8150ded 100644 --- a/EN/modules/ROOT/pages/master/7.8.adoc +++ b/EN/modules/ROOT/pages/master/7.8.adoc @@ -1050,7 +1050,7 @@ select uid() from dual; === `USERENV` function function: return the information of the current user environment, the test cases are as follows: -Check whether the current user is DBA, and if so, return ture: +Check whether the current user is DBA, and if so, return true: ``` select userenv('isdba')from dual; diff --git a/EN/modules/ROOT/pages/master/8.adoc b/EN/modules/ROOT/pages/master/8.adoc index 6a3e859..357d6cc 100644 --- a/EN/modules/ROOT/pages/master/8.adoc +++ b/EN/modules/ROOT/pages/master/8.adoc @@ -29,7 +29,7 @@ Our team is a continuously open team, focusing on parts of IvorySQL. In our team == **Contributor's Guide** -Before contributing, we need to know the current version of IvorySQL and the version of the document.At present, we maintain versions after version *4.5*. Our version follows the update pace of PG. Please update to the latest version before contributing. After that, we need to read the format requirements carefully and be familiar with code format, code comment format, issue format, pull PR title format, document contribution format, and article contribution format. These can help you become a contributor of IvorySQL as soon as possible. +Before contributing, we need to know the current version of IvorySQL and the version of the document.At present, we maintain versions after version *5.0*. Our version follows the update pace of PG. Please update to the latest version before contributing. After that, we need to read the format requirements carefully and be familiar with code format, code comment format, issue format, pull PR title format, document contribution format, and article contribution format. These can help you become a contributor of IvorySQL as soon as possible. 
=== Preparation before Contribution diff --git a/EN/modules/ROOT/pages/master/cpu_arch_adp.adoc b/EN/modules/ROOT/pages/master/cpu_arch_adp.adoc index 75110bd..856c8a3 100644 --- a/EN/modules/ROOT/pages/master/cpu_arch_adp.adoc +++ b/EN/modules/ROOT/pages/master/cpu_arch_adp.adoc @@ -7,11 +7,9 @@ IvorySQL adapts the following CPU architectures: [cols="8h,~,~,~"] |==== -| Index | Architecture Name | Manufacture Name | Multi-platform Media Packages +| Index | Architecture Name | Adapt to brands | Multi-platform Media Packages | 1 | x86_64 | Intel, AMD, ZHAOXIN, HYGON | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.amd64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.x86_64.rpm[rpm] | 2 | aarch64 | Phytium, Kunpeng | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.arm64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.aarch64.rpm[rpm] -| 3 | mips64el| Loongson | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.rpm[rpm] -| 4 | loongarch64 | Loongson | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.rpm[rpm] -| 5 | ppc64le | IBM | N/A -| 6 | sw_64 | SUNWAY | N/A +| 3 | mips64el| Loongson3000,Loongson4000 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251120.mips64el.rpm[rpm] +| 4 | loongarch64 | Loongson5000 | https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.deb[deb], https://github.com/IvorySQL/IvorySQL/releases/download/IvorySQL_5.0/IvorySQL-5.0-9d890e9-20251118.loongarch64.rpm[rpm] |==== diff --git a/EN/modules/ROOT/pages/master/os_arch_adp.adoc b/EN/modules/ROOT/pages/master/os_arch_adp.adoc index f93f084..1e9e5b5 100644 --- a/EN/modules/ROOT/pages/master/os_arch_adp.adoc +++ b/EN/modules/ROOT/pages/master/os_arch_adp.adoc @@ -14,6 +14,6 @@ IvorySQL adapts following operating systems: | 2 | openKylin 2.0 SP1 | OpenAtom openKylin is an open-source project incubated and operated by the OpenAtom Foundation. It was co-founded by basic software and hardware enterprises, non-profit organizations, associations, institutions of higher education, scientific research institutions, and individual developers. With the community vision of "providing the world with an open-source operating system deeply integrated with artificial intelligence technology", the project aims to jointly build a world-leading root community for intelligent desktop open-source operating systems on the basis of openness, voluntariness, equality, and collaboration, and promote the prosperity and development of Linux open-source technology and its software and hardware ecosystem. 
| image:openKylin-2.0.png[width=80%,link={imagesdir}/openKylin-2.0.png] | 3 | OpenAnolis OS 23 | Anolis OS 23, the Longxin Operating System, is an operating system developed by the OpenAnolis community based on the Open Source Ecosystem Development and Cooperation Initiative. It independently selects components from upstream native communities, undergoes continuous evolution, and ensures compatibility and stability. As an enterprise-level operating system built on Linux Kernel 6.6 LTS, Anolis OS 23 relies on in-depth optimization of the ANCK 6.6 kernel and fully supports domestic chips such as Haiguang, Phytium, Loongson (LoongArch), and Zhaoxin, as well as general-purpose x86_64/ARM64 architectures. It features specialized enhancements for virtualization, security features, and performance optimization. Through hierarchical architecture design and intelligent tuning tools, it maximizes the performance of hardware-software collaboration. Meanwhile, it natively supports AI ecosystem components and provides secure AI container images, accelerating model development and inference deployment. In terms of development toolchains, it integrates GCC 12.3+/LLVM 17, Python 3.11, Rust, and more, enabling efficient multi-platform development. For the desktop ecosystem, it is compatible with GNOME and DDE desktop environments, and expands the ecosystem to meet diverse scenario needs by integrating the Linglong package manager. Anolis OS 23 supports various common applications and domestic applications, helping enterprises achieve efficient, secure, and reliable digital transformation. | image:OpenAnolis-23.jpg[width=80%,link={imagesdir}/OpenAnolis-23.jpg] -| 3 | deppin 20 | Deepin OS is a Linux distribution dedicated to providing global users with an elegant, user-friendly, secure, and reliable operating system. The official version of Deepin OS 20 (Build 1002) adopts a unified design style, with a redesigned desktop environment and applications, bringing a fresh visual experience. Its underlying repository has been upgraded to Debian 10.5, and the system installation adopts a dual-kernel mechanism (Kernel 5.4, Kernel 5.7), which comprehensively enhances system stability and compatibility. Additionally, it features a newly designed launcher menu, fingerprint recognition, and enhanced system security; some pre-installed applications in the system have been upgraded to the latest versions—all designed to deliver a better experience for you. | image:deepin-20.png[width=80%,link={imagesdir}/deepin-20.png] +| 4 | deppin 20 | Deepin OS is a Linux distribution dedicated to providing global users with an elegant, user-friendly, secure, and reliable operating system. The official version of Deepin OS 20 (Build 1002) adopts a unified design style, with a redesigned desktop environment and applications, bringing a fresh visual experience. Its underlying repository has been upgraded to Debian 10.5, and the system installation adopts a dual-kernel mechanism (Kernel 5.4, Kernel 5.7), which comprehensively enhances system stability and compatibility. Additionally, it features a newly designed launcher menu, fingerprint recognition, and enhanced system security; some pre-installed applications in the system have been upgraded to the latest versions—all designed to deliver a better experience for you. 
| image:deepin-20.png[width=80%,link={imagesdir}/deepin-20.png] |==== diff --git a/EN/modules/ROOT/pages/master/welcome.adoc b/EN/modules/ROOT/pages/master/welcome.adoc index 0df7866..1e5c9e9 100644 --- a/EN/modules/ROOT/pages/master/welcome.adoc +++ b/EN/modules/ROOT/pages/master/welcome.adoc @@ -15,4 +15,4 @@ IvorySQL project is an open source project proposed by Highgo Software to add th It is Apache licensed Open Source and always free to use. Any comments please contact support@ivorysql.org == Docs Download -https://docs.ivorysql.org/en/ivorysql-doc/v4.5/ivorysql.pdf[IvorySQL v4.5 pdf documentation] +https://docs.ivorysql.org/en/ivorysql-doc/v5.0/ivorysql.pdf[IvorySQL v5.0 pdf documentation] diff --git a/README_zh.md b/README_zh.md index c9e1bf1..4301e8a 100644 --- a/README_zh.md +++ b/README_zh.md @@ -100,7 +100,7 @@ antora -v 然后耐心等待,当成功运行结束后,你就可以到../demo 中查看自己生成的网页了。 -检查之后,你就可以开始提交[PR](https://github.com/IvorySQL/ivorysql_docs/blob/v4.5/CN/modules/ROOT/pages/v4.5/32.adoc),感谢您对社区的贡献^ _ ^,我们会在审核过后,考虑是否更新网站。 +检查之后,你就可以开始提交[PR](https://github.com/IvorySQL/ivorysql_docs/blob/v5.0/CN/modules/ROOT/pages/v5.0/32.adoc),感谢您对社区的贡献^ _ ^,我们会在审核过后,考虑是否更新网站。 ## Autobuild diff --git a/adoc_syntax_quick_reference.md b/adoc_syntax_quick_reference.md index 24de506..6b4c3a2 100644 --- a/adoc_syntax_quick_reference.md +++ b/adoc_syntax_quick_reference.md @@ -164,7 +164,7 @@ Some more text = Another top-level heading ``` -正确释放 +正确示范 ``` = Title @@ -223,7 +223,7 @@ Some text here Some more text here ``` -正确释放: +正确示范: ``` Some text here