diff --git a/README-zh.md b/README-zh.md
index f3adeeb9..60a37c6f 100644
--- a/README-zh.md
+++ b/README-zh.md
@@ -97,49 +97,49 @@ For Docker deployment details, please see the [relevant documentation](https://tikazyq.github.io/crawlab/I
#### Login
-
+
#### Home Page
-
+
#### Node List
-
+
#### Node Topology
-
+
#### Spider List
-
+
#### Spider Overview
-
+
#### Spider Analytics
-
+
#### Spider Files
-
+
#### Task Detail - Crawl Results
-
+
#### Cron Job
-
+
## Architecture
The architecture of Crawlab consists of a Master Node, multiple Worker Nodes, and the Redis and MongoDB databases responsible for communication and data storage.
-
+
The frontend app requests data from the Master Node, which dispatches, schedules, and deploys tasks through MongoDB and Redis. Once a Worker Node receives a task, it runs the spider task and stores the results in MongoDB. Compared with the Celery-based versions before `v0.3.0`, the architecture has been simplified: the unnecessary Flower node-monitoring module was removed, and node monitoring is now mainly handled by Redis.
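A minimal, hypothetical Go sketch of the dispatch step described above: the master records a task in MongoDB and pushes its ID onto a Redis list for workers to pick up. The queue key `tasks:queue`, the database and collection names, and the task fields are illustrative assumptions, not Crawlab's actual schema or code.

```go
// Sketch only: how a master node might dispatch one task.
package main

import (
	"context"
	"log"
	"time"

	"github.com/go-redis/redis/v8"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Redis carries node communication; MongoDB stores task metadata.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	mc, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	tasks := mc.Database("crawlab").Collection("tasks")

	taskID := "task-0001" // hypothetical task ID

	// Record the task so the frontend can track its status via the master.
	if _, err := tasks.InsertOne(ctx, bson.M{
		"_id":       taskID,
		"spider":    "example_spider",
		"status":    "pending",
		"create_ts": time.Now(),
	}); err != nil {
		log.Fatal(err)
	}

	// Dispatch the task to worker nodes through a Redis queue (assumed key).
	if err := rdb.RPush(ctx, "tasks:queue", taskID).Err(); err != nil {
		log.Fatal(err)
	}
	log.Println("dispatched", taskID)
}
```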
diff --git a/README.md b/README.md
index d2aa361a..7ad849e5 100644
--- a/README.md
+++ b/README.md
@@ -95,49 +95,49 @@ For Docker Deployment details, please refer to [relevant documentation](https://
#### Login
-
+
#### Home Page
-
+
#### Node List
-
+
#### Node Network
-
+
#### Spider List
-
+
#### Spider Overview
-
+
#### Spider Analytics
-
+
#### Spider Files
-
+
#### Task Results
-
+
#### Cron Job
-
+
## Architecture
The architecture of Crawlab consists of a Master Node, multiple Worker Nodes, and the Redis and MongoDB databases, which are mainly responsible for node communication and data storage.
-
+
The frontend app makes requests to the Master Node, which assigns tasks and deploys spiders through MongoDB and Redis. When a Worker Node receives a task, it executes the crawling task and stores the results in MongoDB. The architecture is much more concise than in versions before `v0.3.0`: the unnecessary Flower node-monitoring module has been removed, and node monitoring is now handled by Redis.
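A matching sketch of the worker side, under the same illustrative assumptions (a hypothetical `tasks:queue` Redis list and a `results` MongoDB collection, not Crawlab's real code): the worker block-pops a task ID from Redis, runs a placeholder spider process, and writes the output to MongoDB.

```go
// Sketch only: a simplified worker loop that consumes dispatched tasks.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"

	"github.com/go-redis/redis/v8"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	mc, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	results := mc.Database("crawlab").Collection("results")

	for {
		// Wait for the master to dispatch a task (assumed queue key).
		vals, err := rdb.BLPop(ctx, 5*time.Second, "tasks:queue").Result()
		if err == redis.Nil {
			continue // no task yet
		} else if err != nil {
			log.Println("redis:", err)
			continue
		}
		taskID := vals[1]

		// Run the spider process for this task (placeholder command).
		out, err := exec.Command("echo", "crawl", taskID).Output()
		if err != nil {
			log.Println("task failed:", err)
			continue
		}

		// Persist the result so the frontend can read it via the master.
		if _, err := results.InsertOne(ctx, bson.M{
			"task_id":    taskID,
			"output":     string(out),
			"created_at": time.Now(),
		}); err != nil {
			log.Println("mongo:", err)
		}
	}
}
```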