Merge pull request #592 from crawlab-team/release

Release
Marvin Zhang
2020-02-24 14:51:19 +08:00
committed by GitHub
621 changed files with 123479 additions and 5243 deletions


@@ -2,4 +2,5 @@
logs
*.log
dist/
**/node_modules/
**/node_modules/
**/tmp/

.github/workflows/dockerpush.yml (vendored, new file, 62 lines)

@@ -0,0 +1,62 @@
name: Docker
on:
push:
# Publish `master` as Docker `latest` image.
branches:
- master
- release
# Publish `v1.2.3` tags as releases.
tags:
- v*
env:
IMAGE_NAME: tikazyq/crawlab
jobs:
# Push image to GitHub Package Registry.
# See also https://docs.docker.com/docker-hub/builds/
push:
runs-on: ubuntu-latest
if: github.event_name == 'push'
steps:
- uses: actions/checkout@v2
- name: Build image
run: docker build . --file Dockerfile --tag image
- name: Log into registry
run: echo ${{ secrets.DOCKER_PASSWORD}} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
- name: Push image
run: |
IMAGE_ID=$IMAGE_NAME
# Strip git ref prefix from version
VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
# Strip "v" prefix from tag name
[[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
# Use Docker `latest` tag convention
[ "$VERSION" == "master" ] && VERSION=latest
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag image $IMAGE_ID:$VERSION
docker push $IMAGE_ID:$VERSION
- name: Deploy
run: |
# Strip git ref prefix from version
VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
# Strip "v" prefix from tag name
[[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
if [ "$VERSION" == "release" ]; then
curl ${{ secrets.JENKINS_RELEASE_URL }}
fi


@@ -1,3 +1,20 @@
# 0.4.7 (2020-02-24)
### 功能 / 优化
- **更好的支持 Scrapy**. 爬虫识别、`settings.py` 配置、日志级别选择、爬虫选择. [#435](https://github.com/crawlab-team/crawlab/issues/435)
- **Git 同步**. 允许用户将 Git 项目同步到 Crawlab.
- **长任务支持**. 用户可以添加长任务爬虫,这些爬虫可以跑长期运行的任务. [#425](https://github.com/crawlab-team/crawlab/issues/425)
- **爬虫列表优化**. 分状态任务数统计、任务列表详情弹出框、图例. [#425](https://github.com/crawlab-team/crawlab/issues/425)
- **版本升级检测**. 检测最新版本,通知用户升级.
- **批量操作爬虫**. 允许用户批量运行/停止爬虫任务以及批量删除爬虫.
- **复制爬虫**. 允许用户复制已存在爬虫来创建新爬虫.
- **微信群二维码**.
### Bug 修复
- **定时任务爬虫选择问题**. 字段不会随着爬虫变化而响应.
- **定时任务冲突问题**. 两个不同的爬虫定时任务时间设置成相同的话,可能会有 bug. [#515](https://github.com/crawlab-team/crawlab/issues/515) [#565](https://github.com/crawlab-team/crawlab/issues/565)
- **任务日志问题**. 在同一时间触发的不同任务可能会写入同一个日志文件. [#577](https://github.com/crawlab-team/crawlab/issues/577)
- **任务列表筛选选项不全**.
# 0.4.6 (2020-02-13)
### 功能 / 优化
- **Node.js SDK**. 用户可以将 SDK 应用到他们的 Node.js 爬虫中.


@@ -1,3 +1,20 @@
# 0.4.7 (2020-02-24)
### Features / Enhancement
- **Better Support for Scrapy**. Spider identification, `settings.py` configuration, log level selection, spider selection. [#435](https://github.com/crawlab-team/crawlab/issues/435)
- **Git Sync**. Allow users to sync git projects to Crawlab.
- **Long Task Support**. Users can add long-task spiders, which are meant to keep running without finishing. [#425](https://github.com/crawlab-team/crawlab/issues/425)
- **Spider List Optimization**. Tasks count by status, tasks detail popup, legend. [#425](https://github.com/crawlab-team/crawlab/issues/425)
- **Upgrade Check**. Check the latest version and notify users to upgrade.
- **Spiders Batch Operation**. Allow users to run/stop spider tasks and delete spiders in batches.
- **Copy Spiders**. Allow users to copy an existing spider to create a new one.
- **Wechat Group QR Code**.
### Bug Fixes
- **Schedule Spider Selection Issue**. Fields not responding to spider change.
- **Cron Jobs Conflict**. Possible bug when two different spiders' cron jobs were set to the same time. [#515](https://github.com/crawlab-team/crawlab/issues/515) [#565](https://github.com/crawlab-team/crawlab/issues/565)
- **Task Log Issue**. Different tasks write to the same log file if triggered at the same time. [#577](https://github.com/crawlab-team/crawlab/issues/577)
- **Task List Filter Options Incomplete**.
# 0.4.6 (2020-02-13)
### Features / Enhancement
- **SDK for Node.js**. Users can apply the SDK in their Node.js spiders.


@@ -45,7 +45,9 @@ RUN pip install scrapy pymongo bs4 requests crawlab-sdk scrapy-splash
ADD . /app
# copy backend files
COPY --from=backend-build /go/bin/crawlab /usr/local/bin
RUN mkdir -p /opt/bin
COPY --from=backend-build /go/bin/crawlab /opt/bin
RUN cp /opt/bin/crawlab /usr/local/bin/crawlab-server
# copy frontend files
COPY --from=frontend-build /app/dist /app/dist
@@ -57,6 +59,10 @@ WORKDIR /app/backend
# timezone environment
ENV TZ Asia/Shanghai
# language environment
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
# frontend port
EXPOSE 8080


@@ -43,7 +43,9 @@ RUN pip install scrapy pymongo bs4 requests crawlab-sdk scrapy-splash -i https:/
ADD . /app
# copy backend files
COPY --from=backend-build /go/bin/crawlab /usr/local/bin
RUN mkdir -p /opt/bin
COPY --from=backend-build /go/bin/crawlab /opt/bin
RUN cp /opt/bin/crawlab /usr/local/bin/crawlab-server
# copy frontend files
COPY --from=frontend-build /app/dist /app/dist
@@ -55,6 +57,10 @@ WORKDIR /app/backend
# timezone environment
ENV TZ Asia/Shanghai
# language environment
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
# frontend port
EXPOSE 8080


@@ -0,0 +1,9 @@
package constants
const (
String = "string"
Number = "number"
Boolean = "boolean"
Array = "array"
Object = "object"
)


@@ -4,3 +4,14 @@ type SpiderType struct {
Type string `json:"type" bson:"_id"`
Count int `json:"count" bson:"count"`
}
type ScrapySettingParam struct {
Key string `json:"key"`
Value interface{} `json:"value"`
Type string `json:"type"`
}
type ScrapyItem struct {
Name string `json:"name"`
Fields []string `json:"fields"`
}
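
For context, a minimal sketch (not part of this PR) of how a Scrapy setting parameter might be serialized; the struct mirrors ScrapySettingParam above, and the "number" literal stands in for the Number constant defined in the new constants file:

package main

import (
	"encoding/json"
	"fmt"
)

// Mirror of the ScrapySettingParam entity shown in the diff above.
type ScrapySettingParam struct {
	Key   string      `json:"key"`
	Value interface{} `json:"value"`
	Type  string      `json:"type"`
}

func main() {
	// "number" corresponds to the Number type constant from the new constants file.
	p := ScrapySettingParam{Key: "DOWNLOAD_DELAY", Value: 0.5, Type: "number"}
	b, _ := json.Marshal(p)
	fmt.Println(string(b)) // {"key":"DOWNLOAD_DELAY","value":0.5,"type":"number"}
}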

backend/entity/version.go (new file, 23 lines)

@@ -0,0 +1,23 @@
package entity
type Release struct {
Name string `json:"name"`
Draft bool `json:"draft"`
PreRelease bool `json:"pre_release"`
PublishedAt string `json:"published_at"`
Body string `json:"body"`
}
type ReleaseSlices []Release
func (r ReleaseSlices) Len() int {
return len(r)
}
func (r ReleaseSlices) Less(i, j int) bool {
return r[i].PublishedAt < r[j].PublishedAt
}
func (r ReleaseSlices) Swap(i, j int) {
r[i], r[j] = r[j], r[i]
}
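
A rough usage sketch (illustration only, not in the diff): ReleaseSlices implements sort.Interface over PublishedAt, and ISO-8601 timestamps compare correctly as plain strings, so the newest release is the last element after sorting. The snippet below uses a trimmed mirror of the Release type:

package main

import (
	"fmt"
	"sort"
)

// Trimmed mirror of entity.Release / entity.ReleaseSlices from the diff above.
type Release struct {
	Name        string `json:"name"`
	PublishedAt string `json:"published_at"`
}

type ReleaseSlices []Release

func (r ReleaseSlices) Len() int           { return len(r) }
func (r ReleaseSlices) Less(i, j int) bool { return r[i].PublishedAt < r[j].PublishedAt }
func (r ReleaseSlices) Swap(i, j int)      { r[i], r[j] = r[j], r[i] }

func main() {
	releases := ReleaseSlices{
		{Name: "v0.4.6", PublishedAt: "2020-02-13T00:00:00Z"},
		{Name: "v0.4.7", PublishedAt: "2020-02-24T00:00:00Z"},
	}
	sort.Sort(releases)
	fmt.Println("latest:", releases[len(releases)-1].Name) // latest: v0.4.7
}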


@@ -3,6 +3,7 @@ module crawlab
go 1.12
require (
github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd
github.com/apex/log v1.1.1
github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/fsnotify/fsnotify v1.4.7
@@ -16,7 +17,7 @@ require (
github.com/matcornic/hermes v1.2.0
github.com/matcornic/hermes/v2 v2.0.2 // indirect
github.com/pkg/errors v0.8.1
github.com/royeo/dingrobot v1.0.0
github.com/royeo/dingrobot v1.0.0 // indirect
github.com/satori/go.uuid v1.2.0
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337
github.com/spf13/viper v1.4.0
@@ -24,5 +25,6 @@ require (
gopkg.in/go-playground/validator.v9 v9.29.1
gopkg.in/gomail.v2 v2.0.0-20150902115704-41f357289737
gopkg.in/russross/blackfriday.v2 v2.0.0 // indirect
gopkg.in/src-d/go-git.v4 v4.13.1
gopkg.in/yaml.v2 v2.2.2
)


@@ -6,8 +6,12 @@ github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF0
github.com/Masterminds/sprig v2.16.0+incompatible h1:QZbMUPxRQ50EKAq3LFMnxddMu88/EUUG3qmxwtDmPsY=
github.com/Masterminds/sprig v2.16.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd h1:+CYOsXi89xOqBkj7CuEJjA2It+j+R3ngUZEydr6mtkw=
github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd/go.mod h1:wngxua9XCNjvHjDiTiV26DaKDT+0c63QR6H5hjVUUxw=
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/aokoli/goutils v1.0.1 h1:7fpzNGoJ3VA8qcrm++XEE1QUe0mIwNeLa02Nwq7RDkg=
github.com/aokoli/goutils v1.0.1/go.mod h1:SijmP0QR8LtwsmDs8Yii5Z/S4trXFGFC2oO5g9DP+DQ=
github.com/apex/log v1.1.1 h1:BwhRZ0qbjYtTob0I+2M+smavV0kOC8XgcnGZcyL9liA=
@@ -15,6 +19,7 @@ github.com/apex/log v1.1.1/go.mod h1:Ls949n1HFtXfbDcjiTTFQqkVUrte0puoIBfO3SVgwOA
github.com/aphistic/golf v0.0.0-20180712155816-02c07f170c5a/go.mod h1:3NqKYiepwy8kCu4PNA+aP7WUV72eXWJeP9/r3/K9aLE=
github.com/aphistic/sweet v0.2.0/go.mod h1:fWDlIh/isSE9n6EPsRmC0det+whmX6dJid3stzu0Xys=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/aws/aws-sdk-go v1.20.6/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aybabtme/rgbterm v0.0.0-20170906152045-cc83f3b3ce59/go.mod h1:q/89r3U2H7sSsE2t6Kca0lfwTK8JdoNGS/yzM/4iH5I=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
@@ -26,13 +31,17 @@ github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg=
github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -40,6 +49,7 @@ github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3 h1:t8FVkw33L+wilf2
github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s=
github.com/gin-gonic/gin v1.4.0 h1:3tMoCCfM7ppqsR0ptz/wi1impNpT7/9wQtMZ8lr1mCQ=
github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM=
github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8 h1:DujepqpGd1hyOd7aW59XpK7Qymp8iy83xq74fLr21is=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
@@ -62,6 +72,7 @@ github.com/gomodule/redigo v2.0.0+incompatible h1:K/R+8tc58AaqLkqG2Ol3Qk+DR/TlNu
github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -82,6 +93,9 @@ github.com/imroc/req v0.2.4 h1:8XbvaQpERLAJV6as/cB186DtH5f0m5zAOtHEaTQ4ac0=
github.com/imroc/req v0.2.4/go.mod h1:J9FsaNHDTIVyW/b5r6/Df5qKEEEq2WzZKIgKSajd1AE=
github.com/jaytaylor/html2text v0.0.0-20180606194806-57d518f124b0 h1:xqgexXAGQgY3HAjNPSaCqn5Aahbo5TKsmhp8VRfr1iQ=
github.com/jaytaylor/html2text v0.0.0-20180606194806-57d518f124b0/go.mod h1:CVKlgaMiht+LXvHG173ujK6JUhZXKb2u/BQtjPDIvyk=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7/go.mod h1:2iMrUgbbvHEiQClaW2NsSzMyGHqN+rDFqY705q49KG0=
@@ -90,6 +104,8 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd h1:Coekwdh0v2wtGp9Gmz1Ze3eVRAWJMLokvN3QjdzCHLY=
github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -97,6 +113,7 @@ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFB
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/leodido/go-urn v1.1.0 h1:Sm1gr51B1kKyfD2BlRcLSiEkffoG96g6TPv6eRoEiB8=
@@ -116,6 +133,8 @@ github.com/mattn/go-runewidth v0.0.3 h1:a+kO+98RDGEfo6asOGMmpodZq4FNtnGP54yps8Bz
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@@ -128,6 +147,7 @@ github.com/olekukonko/tablewriter v0.0.1 h1:b3iUnf1v+ppJiOfNX4yxxqfWKMQPZR5yoh8u
github.com/olekukonko/tablewriter v0.0.1/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-buffruneio v0.2.0/go.mod h1:JkE26KsDizTr40EUHkXVtNPvgGtbSNq5BcowyYOWdKo=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -151,6 +171,7 @@ github.com/royeo/dingrobot v1.0.0/go.mod h1:RqDM8E/hySCVwI2aUFRJAUGDcHHRnIhzNmbN
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -174,10 +195,13 @@ github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/src-d/gcfg v1.4.0 h1:xXbNR5AlLSA315x2UO+fTSSAXCDf+Ar38/6oyGbDKQ4=
github.com/src-d/gcfg v1.4.0/go.mod h1:p/UMsR43ujA89BJY9duynAwIpvqEujIH/jFlfL7jWoI=
github.com/ssor/bom v0.0.0-20170718123548-6386211fdfcf h1:pvbZ0lM0XWPBqUKqFU8cmavspvIl9nulOYwdy6IFRRo=
github.com/ssor/bom v0.0.0-20170718123548-6386211fdfcf/go.mod h1:RJID2RhlZKId02nZ62WenDCkgHFerpIOmW0iT7GKmXM=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@@ -188,6 +212,8 @@ github.com/tj/go-spin v1.1.0/go.mod h1:Mg1mzmePZm4dva8Qz60H2lHwmJ2loum4VIrLgVnKw
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4 h1:j4s+tAvLfL3bZyefP2SEWmhBzmuIlH/eqNuPdFPgngw=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/xanzy/ssh-agent v0.2.1 h1:TCbipTQL2JiiCprBWx9frJ2eJlCYT00NmctrHxVAr70=
github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
@@ -196,9 +222,12 @@ go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029175232-7e6ffbd03851/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734 h1:p/H982KKEjUnLJkM3tt/LemDnOc1GiZL5FCVlORJ5zo=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 h1:HuIa8hRrWRSrqYzx1qI49NNxhdi2PrY7gxVSq1JjLDc=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -212,26 +241,36 @@ golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80 h1:Ao/3l156eZf2AW5wK8a7/smtodRU+gha3+BeqJ69lRk=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e h1:D5TXcfTk7xF7hvieo4QErS3qqCB4teTffacDWr7CI+0=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190729092621-ff9f1409240a/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -254,7 +293,14 @@ gopkg.in/gomail.v2 v2.0.0-20150902115704-41f357289737/go.mod h1:LRQQ+SO6ZHR7tOkp
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/russross/blackfriday.v2 v2.0.0 h1:+FlnIV8DSQnT7NZ43hcVKcdJdzZoeCmJj4Ql8gq5keA=
gopkg.in/russross/blackfriday.v2 v2.0.0/go.mod h1:6sSBNz/GtOm/pJTuh5UmBK2ZHfmnxGbl2NZg1UliSOI=
gopkg.in/src-d/go-billy.v4 v4.3.2 h1:0SQA1pRztfTFx2miS8sA97XvooFeNOmvUenF4o0EcVg=
gopkg.in/src-d/go-billy.v4 v4.3.2/go.mod h1:nDjArDMp+XMs1aFAESLRjfGSgfvoYN0hDfzEk0GjC98=
gopkg.in/src-d/go-git-fixtures.v3 v3.5.0/go.mod h1:dLBcvytrw/TYZsNTWCnkNF2DSIlzWYqTe3rJR56Ac7g=
gopkg.in/src-d/go-git.v4 v4.13.1 h1:SRtFyV8Kxc0UP7aCHcijOMQGPxHSmMOPrzulQWolkYE=
gopkg.in/src-d/go-git.v4 v4.13.1/go.mod h1:nx5NYcxdKxq5fpltdHnPa2Exj4Sx0EclMWZQbYDu2z8=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=


@@ -133,7 +133,8 @@ func main() {
anonymousGroup.PUT("/users", routes.PutUser) // 添加用户
anonymousGroup.GET("/setting", routes.GetSetting) // 获取配置信息
// release版本
anonymousGroup.GET("/version", routes.GetVersion) // 获取发布的版本
anonymousGroup.GET("/version", routes.GetVersion) // 获取发布的版本
anonymousGroup.GET("/releases/latest", routes.GetLatestRelease) // 获取最近发布的版本
}
authGroup := app.Group("/", middlewares.AuthorizationMiddleware())
{
@@ -154,25 +155,39 @@ func main() {
}
// 爬虫
{
authGroup.GET("/spiders", routes.GetSpiderList) // 爬虫列表
authGroup.GET("/spiders/:id", routes.GetSpider) // 爬虫详情
authGroup.PUT("/spiders", routes.PutSpider) // 添加爬虫
authGroup.POST("/spiders", routes.UploadSpider) // 上传爬虫
authGroup.POST("/spiders/:id", routes.PostSpider) // 修改爬虫
authGroup.POST("/spiders/:id/publish", routes.PublishSpider) // 发布爬虫
authGroup.POST("/spiders/:id/upload", routes.UploadSpiderFromId) // 上传爬虫ID
authGroup.DELETE("/spiders/:id", routes.DeleteSpider) // 删除爬虫
authGroup.GET("/spiders/:id/tasks", routes.GetSpiderTasks) // 爬虫任务列表
authGroup.GET("/spiders/:id/file/tree", routes.GetSpiderFileTree) // 爬虫文件目录树读取
authGroup.GET("/spiders/:id/file", routes.GetSpiderFile) // 爬虫文件读取
authGroup.POST("/spiders/:id/file", routes.PostSpiderFile) // 爬虫文件更改
authGroup.PUT("/spiders/:id/file", routes.PutSpiderFile) // 爬虫文件创建
authGroup.PUT("/spiders/:id/dir", routes.PutSpiderDir) // 爬虫目录创建
authGroup.DELETE("/spiders/:id/file", routes.DeleteSpiderFile) // 爬虫文件删除
authGroup.POST("/spiders/:id/file/rename", routes.RenameSpiderFile) // 爬虫文件重命名
authGroup.GET("/spiders/:id/dir", routes.GetSpiderDir) // 爬虫目录
authGroup.GET("/spiders/:id/stats", routes.GetSpiderStats) // 爬虫统计数据
authGroup.GET("/spiders/:id/schedules", routes.GetSpiderSchedules) // 爬虫定时任务
authGroup.GET("/spiders", routes.GetSpiderList) // 爬虫列表
authGroup.GET("/spiders/:id", routes.GetSpider) // 爬虫详情
authGroup.PUT("/spiders", routes.PutSpider) // 添加爬虫
authGroup.POST("/spiders", routes.UploadSpider) // 上传爬虫
authGroup.POST("/spiders/:id", routes.PostSpider) // 修改爬虫
authGroup.POST("/spiders/:id/publish", routes.PublishSpider) // 发布爬虫
authGroup.POST("/spiders/:id/upload", routes.UploadSpiderFromId) // 上传爬虫ID
authGroup.DELETE("/spiders", routes.DeleteSelectedSpider) // 删除选择的爬虫
authGroup.DELETE("/spiders/:id", routes.DeleteSpider) // 删除爬虫
authGroup.POST("/spiders/:id/copy", routes.CopySpider) // 拷贝爬虫
authGroup.GET("/spiders/:id/tasks", routes.GetSpiderTasks) // 爬虫任务列表
authGroup.GET("/spiders/:id/file/tree", routes.GetSpiderFileTree) // 爬虫文件目录树读取
authGroup.GET("/spiders/:id/file", routes.GetSpiderFile) // 爬虫文件读取
authGroup.POST("/spiders/:id/file", routes.PostSpiderFile) // 爬虫文件更改
authGroup.PUT("/spiders/:id/file", routes.PutSpiderFile) // 爬虫文件创建
authGroup.PUT("/spiders/:id/dir", routes.PutSpiderDir) // 爬虫目录创建
authGroup.DELETE("/spiders/:id/file", routes.DeleteSpiderFile) // 爬虫文件删除
authGroup.POST("/spiders/:id/file/rename", routes.RenameSpiderFile) // 爬虫文件重命名
authGroup.GET("/spiders/:id/dir", routes.GetSpiderDir) // 爬虫目录
authGroup.GET("/spiders/:id/stats", routes.GetSpiderStats) // 爬虫统计数据
authGroup.GET("/spiders/:id/schedules", routes.GetSpiderSchedules) // 爬虫定时任务
authGroup.GET("/spiders/:id/scrapy/spiders", routes.GetSpiderScrapySpiders) // Scrapy 爬虫名称列表
authGroup.PUT("/spiders/:id/scrapy/spiders", routes.PutSpiderScrapySpiders) // Scrapy 爬虫创建爬虫
authGroup.GET("/spiders/:id/scrapy/settings", routes.GetSpiderScrapySettings) // Scrapy 爬虫设置
authGroup.POST("/spiders/:id/scrapy/settings", routes.PostSpiderScrapySettings) // Scrapy 爬虫修改设置
authGroup.GET("/spiders/:id/scrapy/items", routes.GetSpiderScrapyItems) // Scrapy 爬虫 items
authGroup.POST("/spiders/:id/scrapy/items", routes.PostSpiderScrapyItems) // Scrapy 爬虫修改 items
authGroup.GET("/spiders/:id/scrapy/pipelines", routes.GetSpiderScrapyPipelines) // Scrapy 爬虫 pipelines
authGroup.GET("/spiders/:id/scrapy/spider/filepath", routes.GetSpiderScrapySpiderFilepath) // Scrapy 爬虫 pipelines
authGroup.POST("/spiders/:id/git/sync", routes.PostSpiderSyncGit) // 爬虫 Git 同步
authGroup.POST("/spiders/:id/git/reset", routes.PostSpiderResetGit) // 爬虫 Git 重置
authGroup.POST("/spiders-cancel", routes.CancelSelectedSpider) // 停止所选爬虫任务
authGroup.POST("/spiders-run", routes.RunSelectedSpider) // 运行所选爬虫
}
// 可配置爬虫
{
@@ -190,8 +205,8 @@ func main() {
authGroup.GET("/tasks/:id", routes.GetTask) // 任务详情
authGroup.PUT("/tasks", routes.PutTask) // 派发任务
authGroup.DELETE("/tasks/:id", routes.DeleteTask) // 删除任务
authGroup.DELETE("/tasks_multiple", routes.DeleteMultipleTask) // 删除多个任务
authGroup.DELETE("/tasks_by_status", routes.DeleteTaskByStatus) //删除指定状态的任务
authGroup.DELETE("/tasks", routes.DeleteSelectedTask) // 删除多个任务
authGroup.DELETE("/tasks_by_status", routes.DeleteTaskByStatus) // 删除指定状态的任务
authGroup.POST("/tasks/:id/cancel", routes.CancelTask) // 取消任务
authGroup.GET("/tasks/:id/log", routes.GetTaskLog) // 任务日志
authGroup.GET("/tasks/:id/results", routes.GetTaskResults) // 任务结果
@@ -240,6 +255,9 @@ func main() {
authGroup.GET("/stats/home", routes.GetHomeStats) // 首页统计数据
// 文件
authGroup.GET("/file", routes.GetFile) // 获取文件
// Git
authGroup.GET("/git/branches", routes.GetGitBranches) // 获取 Git 分支
authGroup.GET("/git/public-key", routes.GetGitSshPublicKey) // 获取 SSH 公钥
}
}
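
Illustration only (not part of the PR): a client call against the new batch-delete route registered above; the backend address, auth token, and spider id are placeholders.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body shape expected by routes.DeleteSelectedSpider (see backend/routes/spider.go below).
	payload, _ := json.Marshal(map[string][]string{
		"spider_ids": {"5e52c9a1f3f2a0006c8e7a10"}, // hypothetical id
	})
	// Base URL and token are assumptions for illustration.
	req, _ := http.NewRequest(http.MethodDelete, "http://localhost:8000/spiders", bytes.NewReader(payload))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "<jwt-token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}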


@@ -12,18 +12,20 @@ import (
)
type Schedule struct {
Id bson.ObjectId `json:"_id" bson:"_id"`
Name string `json:"name" bson:"name"`
Description string `json:"description" bson:"description"`
SpiderId bson.ObjectId `json:"spider_id" bson:"spider_id"`
Cron string `json:"cron" bson:"cron"`
EntryId cron.EntryID `json:"entry_id" bson:"entry_id"`
Param string `json:"param" bson:"param"`
RunType string `json:"run_type" bson:"run_type"`
NodeIds []bson.ObjectId `json:"node_ids" bson:"node_ids"`
Status string `json:"status" bson:"status"`
Enabled bool `json:"enabled" bson:"enabled"`
UserId bson.ObjectId `json:"user_id" bson:"user_id"`
Id bson.ObjectId `json:"_id" bson:"_id"`
Name string `json:"name" bson:"name"`
Description string `json:"description" bson:"description"`
SpiderId bson.ObjectId `json:"spider_id" bson:"spider_id"`
Cron string `json:"cron" bson:"cron"`
EntryId cron.EntryID `json:"entry_id" bson:"entry_id"`
Param string `json:"param" bson:"param"`
RunType string `json:"run_type" bson:"run_type"`
NodeIds []bson.ObjectId `json:"node_ids" bson:"node_ids"`
Status string `json:"status" bson:"status"`
Enabled bool `json:"enabled" bson:"enabled"`
UserId bson.ObjectId `json:"user_id" bson:"user_id"`
ScrapySpider string `json:"scrapy_spider" bson:"scrapy_spider"`
ScrapyLogLevel string `json:"scrapy_log_level" bson:"scrapy_log_level"`
// 前端展示
SpiderName string `json:"spider_name" bson:"spider_name"`


@@ -37,13 +37,32 @@ type Spider struct {
// 自定义爬虫
Cmd string `json:"cmd" bson:"cmd"` // 执行命令
// Scrapy 爬虫(属于自定义爬虫)
IsScrapy bool `json:"is_scrapy" bson:"is_scrapy"` // 是否为 Scrapy 爬虫
SpiderNames []string `json:"spider_names" bson:"spider_names"` // 爬虫名称列表
// 可配置爬虫
Template string `json:"template" bson:"template"` // Spiderfile模版
// Git 设置
IsGit bool `json:"is_git" bson:"is_git"` // 是否为 Git
GitUrl string `json:"git_url" bson:"git_url"` // Git URL
GitBranch string `json:"git_branch" bson:"git_branch"` // Git 分支
GitHasCredential bool `json:"git_has_credential" bson:"git_has_credential"` // Git 是否加密
GitUsername string `json:"git_username" bson:"git_username"` // Git 用户名
GitPassword string `json:"git_password" bson:"git_password"` // Git 密码
GitAutoSync bool `json:"git_auto_sync" bson:"git_auto_sync"` // Git 是否自动同步
GitSyncFrequency string `json:"git_sync_frequency" bson:"git_sync_frequency"` // Git 同步频率
GitSyncError string `json:"git_sync_error" bson:"git_sync_error"` // Git 同步错误
// 长任务
IsLongTask bool `json:"is_long_task" bson:"is_long_task"` // 是否为长任务
// 前端展示
LastRunTs time.Time `json:"last_run_ts"` // 最后一次执行时间
LastStatus string `json:"last_status"` // 最后执行状态
Config entity.ConfigSpiderData `json:"config"` // 可配置爬虫配置
LastRunTs time.Time `json:"last_run_ts"` // 最后一次执行时间
LastStatus string `json:"last_status"` // 最后执行状态
Config entity.ConfigSpiderData `json:"config"` // 可配置爬虫配置
LatestTasks []Task `json:"latest_tasks"` // 最近任务列表
// 时间
CreateTs time.Time `json:"create_ts" bson:"create_ts"`
@@ -109,6 +128,18 @@ func (spider *Spider) GetLastTask() (Task, error) {
return tasks[0], nil
}
// 爬虫正在运行的任务
func (spider *Spider) GetLatestTasks(latestN int) (tasks []Task, err error) {
tasks, err = GetTaskList(bson.M{"spider_id": spider.Id}, 0, latestN, "-create_ts")
if err != nil {
return tasks, err
}
if tasks == nil {
return tasks, err
}
return tasks, nil
}
// 删除爬虫
func (spider *Spider) Delete() error {
s, c := database.GetCol("spiders")
@@ -142,9 +173,18 @@ func GetSpiderList(filter interface{}, skip int, limit int, sortStr string) ([]S
continue
}
// 获取正在运行的爬虫
latestTasks, err := spider.GetLatestTasks(50) // TODO: latestN 暂时写死,后面加入数据库
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
continue
}
// 赋值
spiders[i].LastRunTs = task.CreateTs
spiders[i].LastStatus = task.Status
spiders[i].LatestTasks = latestTasks
}
count, _ := c.Find(filter).Count()
@@ -152,6 +192,15 @@ func GetSpiderList(filter interface{}, skip int, limit int, sortStr string) ([]S
return spiders, count, nil
}
// 获取所有爬虫列表
func GetSpiderAllList(filter interface{}) (spiders []Spider, err error) {
spiders, _, err = GetSpiderList(filter, 0, constants.Infinite, "_id")
if err != nil {
return spiders, err
}
return spiders, nil
}
// 获取爬虫(根据FileId)
func GetSpiderByFileId(fileId bson.ObjectId) *Spider {
s, c := database.GetCol("spiders")
@@ -230,6 +279,8 @@ func RemoveSpider(id bson.ObjectId) error {
var result Spider
if err := c.FindId(id).One(&result); err != nil {
log.Errorf("find spider error: %s, id:%s", err.Error(), id.Hex())
debug.PrintStack()
return err
}
@@ -242,12 +293,10 @@ func RemoveSpider(id bson.ObjectId) error {
// gf上的文件
s, gf := database.GetGridFs("files")
defer s.Close()
if result.FileId.Hex() != constants.ObjectIdNull {
if err := gf.RemoveId(result.FileId); err != nil {
log.Error("remove file error, id:" + result.FileId.Hex())
debug.PrintStack()
return err
}
}
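
As a sketch only (it assumes it lives inside the crawlab backend module, with the MongoDB session already initialized by the application), the new model helpers could be combined like this:

package main

import (
	"fmt"

	"crawlab/model"
)

func printSpiderSummaries() error {
	// GetSpiderAllList and GetLatestTasks are the helpers added in this diff.
	spiders, err := model.GetSpiderAllList(nil)
	if err != nil {
		return err
	}
	for _, s := range spiders {
		tasks, err := s.GetLatestTasks(5) // newest 5 tasks for each spider
		if err != nil {
			return err
		}
		fmt.Printf("%s: last status=%s, recent tasks=%d\n", s.Name, s.LastStatus, len(tasks))
	}
	return nil
}

func main() {
	if err := printSpiderSummaries(); err != nil {
		fmt.Println("error:", err)
	}
}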

backend/routes/git.go (new file, 29 lines)

@@ -0,0 +1,29 @@
package routes
import (
"crawlab/services"
"github.com/gin-gonic/gin"
"net/http"
)
func GetGitBranches(c *gin.Context) {
url := c.Query("url")
branches, err := services.GetGitBranches(url)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: branches,
})
}
func GetGitSshPublicKey(c *gin.Context) {
content := services.GetGitSshPublicKey()
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: content,
})
}
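
Illustration only: querying the new branch-listing endpoint registered in main.go as GET /git/branches; the backend address and token are assumptions.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	repo := url.QueryEscape("https://github.com/crawlab-team/crawlab.git")
	req, _ := http.NewRequest(http.MethodGet, "http://localhost:8000/git/branches?url="+repo, nil)
	req.Header.Set("Authorization", "<jwt-token>") // placeholder token
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	// Expected response shape: {"status":"ok","message":"success","data":["master","develop", ...]}
	fmt.Println(string(body))
}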


@@ -26,6 +26,8 @@ import (
"time"
)
// ======== 爬虫管理 ========
func GetSpiderList(c *gin.Context) {
pageNum, _ := c.GetQuery("page_num")
pageSize, _ := c.GetQuery("page_size")
@@ -35,13 +37,23 @@ func GetSpiderList(c *gin.Context) {
sortKey, _ := c.GetQuery("sort_key")
sortDirection, _ := c.GetQuery("sort_direction")
// 筛选
// 筛选-名称
filter := bson.M{
"name": bson.M{"$regex": bson.RegEx{Pattern: keyword, Options: "im"}},
}
// 筛选-类型
if t != "" && t != "all" {
filter["type"] = t
}
// 筛选-是否为长任务
if t == "long-task" {
delete(filter, "type")
filter["is_long_task"] = true
}
// 筛选-项目
if pid == "" {
// do nothing
} else if pid == constants.ObjectIdNull {
@@ -88,15 +100,16 @@ func GetSpider(c *gin.Context) {
HandleErrorF(http.StatusBadRequest, c, "invalid id")
}
result, err := model.GetSpider(bson.ObjectIdHex(id))
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: result,
Data: spider,
})
}
@@ -118,6 +131,12 @@ func PostSpider(c *gin.Context) {
return
}
// 更新 GitCron
if err := services.GitCron.Update(); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
@@ -170,19 +189,33 @@ func PutSpider(c *gin.Context) {
// 将FileId置空
spider.FileId = bson.ObjectIdHex(constants.ObjectIdNull)
// 创建爬虫目录
// 爬虫目录
spiderDir := filepath.Join(viper.GetString("spider.path"), spider.Name)
// 赋值到爬虫实例
spider.Src = spiderDir
// 移除已有爬虫目录
if utils.Exists(spiderDir) {
if err := os.RemoveAll(spiderDir); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
}
// 生成爬虫目录
if err := os.MkdirAll(spiderDir, 0777); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
spider.Src = spiderDir
// 如果为 Scrapy 项目,生成 Scrapy 项目
if spider.IsScrapy {
if err := services.CreateScrapyProject(spider); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
}
// 添加爬虫到数据库
if err := spider.Add(); err != nil {
@@ -196,6 +229,12 @@ func PutSpider(c *gin.Context) {
return
}
// 更新 GitCron
if err := services.GitCron.Update(); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
@@ -203,6 +242,50 @@ func PutSpider(c *gin.Context) {
})
}
func CopySpider(c *gin.Context) {
type ReqBody struct {
Name string `json:"name"`
}
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "invalid id")
}
var reqBody ReqBody
if err := c.ShouldBindJSON(&reqBody); err != nil {
HandleError(http.StatusBadRequest, c, err)
return
}
// 检查新爬虫名称是否存在
// 如果存在,则返回错误
s := model.GetSpiderByName(reqBody.Name)
if s.Name != "" {
HandleErrorF(http.StatusBadRequest, c, fmt.Sprintf("spider name '%s' already exists", reqBody.Name))
return
}
// 被复制爬虫
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
// 复制爬虫
if err := services.CopySpider(spider, reqBody.Name); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func UploadSpider(c *gin.Context) {
// 从body中获取文件
uploadFile, err := c.FormFile("file")
@@ -433,12 +516,161 @@ func DeleteSpider(c *gin.Context) {
return
}
// 更新 GitCron
if err := services.GitCron.Update(); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func DeleteSelectedSpider(c *gin.Context) {
type ReqBody struct {
SpiderIds []string `json:"spider_ids"`
}
var reqBody ReqBody
if err := c.ShouldBindJSON(&reqBody); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
for _, spiderId := range reqBody.SpiderIds {
if err := services.RemoveSpider(spiderId); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
}
// 更新 GitCron
if err := services.GitCron.Update(); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func CancelSelectedSpider(c *gin.Context) {
type ReqBody struct {
SpiderIds []string `json:"spider_ids"`
}
var reqBody ReqBody
if err := c.ShouldBindJSON(&reqBody); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
for _, spiderId := range reqBody.SpiderIds {
if err := services.CancelSpider(spiderId); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
}
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func RunSelectedSpider(c *gin.Context) {
type TaskParam struct {
SpiderId bson.ObjectId `json:"spider_id"`
Param string `json:"param"`
}
type ReqBody struct {
RunType string `json:"run_type"`
NodeIds []bson.ObjectId `json:"node_ids"`
TaskParams []TaskParam `json:"task_params"`
}
var reqBody ReqBody
if err := c.ShouldBindJSON(&reqBody); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
// 任务ID
var taskIds []string
// 遍历爬虫
// TODO: 优化此部分代码,与 routes.PutTask 有重合部分
for _, taskParam := range reqBody.TaskParams {
if reqBody.RunType == constants.RunTypeAllNodes {
// 所有节点
nodes, err := model.GetNodeList(nil)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
for _, node := range nodes {
t := model.Task{
SpiderId: taskParam.SpiderId,
NodeId: node.Id,
Param: taskParam.Param,
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
taskIds = append(taskIds, id)
}
} else if reqBody.RunType == constants.RunTypeRandom {
// 随机
t := model.Task{
SpiderId: taskParam.SpiderId,
Param: taskParam.Param,
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
taskIds = append(taskIds, id)
} else if reqBody.RunType == constants.RunTypeSelectedNodes {
// 指定节点
for _, nodeId := range reqBody.NodeIds {
t := model.Task{
SpiderId: taskParam.SpiderId,
NodeId: nodeId,
Param: taskParam.Param,
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
taskIds = append(taskIds, id)
}
} else {
HandleErrorF(http.StatusInternalServerError, c, "invalid run_type")
return
}
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: taskIds,
})
}
func GetSpiderTasks(c *gin.Context) {
id := c.Param("id")
@@ -461,7 +693,151 @@ func GetSpiderTasks(c *gin.Context) {
})
}
// 爬虫文件管理
func GetSpiderStats(c *gin.Context) {
type Overview struct {
TaskCount int `json:"task_count" bson:"task_count"`
ResultCount int `json:"result_count" bson:"result_count"`
SuccessCount int `json:"success_count" bson:"success_count"`
SuccessRate float64 `json:"success_rate"`
TotalWaitDuration float64 `json:"wait_duration" bson:"wait_duration"`
TotalRuntimeDuration float64 `json:"runtime_duration" bson:"runtime_duration"`
AvgWaitDuration float64 `json:"avg_wait_duration"`
AvgRuntimeDuration float64 `json:"avg_runtime_duration"`
}
type Data struct {
Overview Overview `json:"overview"`
Daily []model.TaskDailyItem `json:"daily"`
}
id := c.Param("id")
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
s, col := database.GetCol("tasks")
defer s.Close()
// 起始日期
startDate := time.Now().Add(-time.Hour * 24 * 30)
endDate := time.Now()
// match
op1 := bson.M{
"$match": bson.M{
"spider_id": spider.Id,
"create_ts": bson.M{
"$gte": startDate,
"$lt": endDate,
},
},
}
// project
op2 := bson.M{
"$project": bson.M{
"success_count": bson.M{
"$cond": []interface{}{
bson.M{
"$eq": []string{
"$status",
constants.StatusFinished,
},
},
1,
0,
},
},
"result_count": "$result_count",
"wait_duration": "$wait_duration",
"runtime_duration": "$runtime_duration",
},
}
// group
op3 := bson.M{
"$group": bson.M{
"_id": nil,
"task_count": bson.M{"$sum": 1},
"success_count": bson.M{"$sum": "$success_count"},
"result_count": bson.M{"$sum": "$result_count"},
"wait_duration": bson.M{"$sum": "$wait_duration"},
"runtime_duration": bson.M{"$sum": "$runtime_duration"},
},
}
// run aggregation pipeline
var overview Overview
if err := col.Pipe([]bson.M{op1, op2, op3}).One(&overview); err != nil {
if err == mgo.ErrNotFound {
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: Data{
Overview: overview,
Daily: []model.TaskDailyItem{},
},
})
return
}
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
// 后续处理
successCount, _ := strconv.ParseFloat(strconv.Itoa(overview.SuccessCount), 64)
taskCount, _ := strconv.ParseFloat(strconv.Itoa(overview.TaskCount), 64)
overview.SuccessRate = successCount / taskCount
overview.AvgWaitDuration = overview.TotalWaitDuration / taskCount
overview.AvgRuntimeDuration = overview.TotalRuntimeDuration / taskCount
items, err := model.GetDailyTaskStats(bson.M{"spider_id": spider.Id})
if err != nil {
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: Data{
Overview: overview,
Daily: items,
},
})
}
func GetSpiderSchedules(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
// 获取定时任务
list, err := model.GetScheduleList(bson.M{"spider_id": bson.ObjectIdHex(id)})
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: list,
})
}
// ======== ./爬虫管理 ========
// ======== 爬虫文件管理 ========
func GetSpiderDir(c *gin.Context) {
// 爬虫ID
@@ -760,127 +1136,11 @@ func RenameSpiderFile(c *gin.Context) {
})
}
func GetSpiderStats(c *gin.Context) {
type Overview struct {
TaskCount int `json:"task_count" bson:"task_count"`
ResultCount int `json:"result_count" bson:"result_count"`
SuccessCount int `json:"success_count" bson:"success_count"`
SuccessRate float64 `json:"success_rate"`
TotalWaitDuration float64 `json:"wait_duration" bson:"wait_duration"`
TotalRuntimeDuration float64 `json:"runtime_duration" bson:"runtime_duration"`
AvgWaitDuration float64 `json:"avg_wait_duration"`
AvgRuntimeDuration float64 `json:"avg_runtime_duration"`
}
// ======== 爬虫文件管理 ========
type Data struct {
Overview Overview `json:"overview"`
Daily []model.TaskDailyItem `json:"daily"`
}
// ======== Scrapy 部分 ========
id := c.Param("id")
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
s, col := database.GetCol("tasks")
defer s.Close()
// 起始日期
startDate := time.Now().Add(-time.Hour * 24 * 30)
endDate := time.Now()
// match
op1 := bson.M{
"$match": bson.M{
"spider_id": spider.Id,
"create_ts": bson.M{
"$gte": startDate,
"$lt": endDate,
},
},
}
// project
op2 := bson.M{
"$project": bson.M{
"success_count": bson.M{
"$cond": []interface{}{
bson.M{
"$eq": []string{
"$status",
constants.StatusFinished,
},
},
1,
0,
},
},
"result_count": "$result_count",
"wait_duration": "$wait_duration",
"runtime_duration": "$runtime_duration",
},
}
// group
op3 := bson.M{
"$group": bson.M{
"_id": nil,
"task_count": bson.M{"$sum": 1},
"success_count": bson.M{"$sum": "$success_count"},
"result_count": bson.M{"$sum": "$result_count"},
"wait_duration": bson.M{"$sum": "$wait_duration"},
"runtime_duration": bson.M{"$sum": "$runtime_duration"},
},
}
// run aggregation pipeline
var overview Overview
if err := col.Pipe([]bson.M{op1, op2, op3}).One(&overview); err != nil {
if err == mgo.ErrNotFound {
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: Data{
Overview: overview,
Daily: []model.TaskDailyItem{},
},
})
return
}
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
// 后续处理
successCount, _ := strconv.ParseFloat(strconv.Itoa(overview.SuccessCount), 64)
taskCount, _ := strconv.ParseFloat(strconv.Itoa(overview.TaskCount), 64)
overview.SuccessRate = successCount / taskCount
overview.AvgWaitDuration = overview.TotalWaitDuration / taskCount
overview.AvgRuntimeDuration = overview.TotalRuntimeDuration / taskCount
items, err := model.GetDailyTaskStats(bson.M{"spider_id": spider.Id})
if err != nil {
log.Errorf(err.Error())
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: Data{
Overview: overview,
Daily: items,
},
})
}
func GetSpiderSchedules(c *gin.Context) {
func GetSpiderScrapySpiders(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
@@ -888,8 +1148,13 @@ func GetSpiderSchedules(c *gin.Context) {
return
}
// 获取定时任务
list, err := model.GetScheduleList(bson.M{"spider_id": bson.ObjectIdHex(id)})
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
spiderNames, err := services.GetScrapySpiderNames(spider)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
@@ -898,6 +1163,275 @@ func GetSpiderSchedules(c *gin.Context) {
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: list,
Data: spiderNames,
})
}
func PutSpiderScrapySpiders(c *gin.Context) {
type ReqBody struct {
Name string `json:"name"`
Domain string `json:"domain"`
Template string `json:"template"`
}
id := c.Param("id")
var reqBody ReqBody
if err := c.ShouldBindJSON(&reqBody); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
if err := services.CreateScrapySpider(spider, reqBody.Name, reqBody.Domain, reqBody.Template); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func GetSpiderScrapySettings(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
data, err := services.GetScrapySettings(spider)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: data,
})
}
func PostSpiderScrapySettings(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
var reqData []entity.ScrapySettingParam
if err := c.ShouldBindJSON(&reqData); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
if err := services.SaveScrapySettings(spider, reqData); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func GetSpiderScrapyItems(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
data, err := services.GetScrapyItems(spider)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: data,
})
}
func PostSpiderScrapyItems(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
var reqData []entity.ScrapyItem
if err := c.ShouldBindJSON(&reqData); err != nil {
HandleErrorF(http.StatusBadRequest, c, "invalid request")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
if err := services.SaveScrapyItems(spider, reqData); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func GetSpiderScrapyPipelines(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
data, err := services.GetScrapyPipelines(spider)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: data,
})
}
func GetSpiderScrapySpiderFilepath(c *gin.Context) {
id := c.Param("id")
spiderName := c.Query("spider_name")
if spiderName == "" {
HandleErrorF(http.StatusBadRequest, c, "spider_name is empty")
return
}
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
data, err := services.GetScrapySpiderFilepath(spider, spiderName)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: data,
})
}
// ======== ./Scrapy 部分 ========
// ======== Git 部分 ========
func PostSpiderSyncGit(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
if err := services.SyncSpiderGit(spider); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
func PostSpiderResetGit(c *gin.Context) {
id := c.Param("id")
if !bson.IsObjectIdHex(id) {
HandleErrorF(http.StatusBadRequest, c, "spider_id is invalid")
return
}
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
if err := services.ResetSpiderGit(spider); err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
})
}
// ======== ./Git 部分 ========
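
For illustration (not part of the diff), the request body that RunSelectedSpider above binds could be built like this; the spider id is hypothetical and the run_type literal is assumed to match constants.RunTypeRandom:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Shape of the ReqBody / TaskParam structs bound by routes.RunSelectedSpider.
	payload := map[string]interface{}{
		"run_type": "random", // assumed value of constants.RunTypeRandom
		"node_ids": []string{},
		"task_params": []map[string]string{
			{"spider_id": "5e52c9a1f3f2a0006c8e7a10", "param": ""}, // hypothetical id
		},
	}
	b, _ := json.MarshalIndent(payload, "", "  ")
	fmt.Println(string(b)) // body for POST /spiders-run
}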


@@ -118,7 +118,7 @@ func PutTask(c *gin.Context) {
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t);
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
@@ -133,7 +133,7 @@ func PutTask(c *gin.Context) {
Param: reqBody.Param,
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t);
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
@@ -149,7 +149,7 @@ func PutTask(c *gin.Context) {
UserId: services.GetCurrentUser(c).Id,
}
id, err := services.AddTask(t);
id, err := services.AddTask(t)
if err != nil {
HandleError(http.StatusInternalServerError, c, err)
return
@@ -183,7 +183,7 @@ func DeleteTaskByStatus(c *gin.Context) {
}
// 删除多个任务
func DeleteMultipleTask(c *gin.Context) {
func DeleteSelectedTask(c *gin.Context) {
ids := make(map[string][]string)
if err := c.ShouldBindJSON(&ids); err != nil {
HandleError(http.StatusInternalServerError, c, err)

backend/routes/version.go (new file, 22 lines)

@@ -0,0 +1,22 @@
package routes
import (
"crawlab/services"
"github.com/apex/log"
"github.com/gin-gonic/gin"
"net/http"
"runtime/debug"
)
func GetLatestRelease(c *gin.Context) {
latestRelease, err := services.GetLatestRelease()
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
}
c.JSON(http.StatusOK, Response{
Status: "ok",
Message: "success",
Data: latestRelease,
})
}

324
backend/services/git.go Normal file
View File

@@ -0,0 +1,324 @@
package services
import (
"bytes"
"crawlab/lib/cron"
"crawlab/model"
"crawlab/services/spider_handler"
"crawlab/utils"
"fmt"
"github.com/apex/log"
"github.com/globalsign/mgo/bson"
"gopkg.in/src-d/go-git.v4"
"gopkg.in/src-d/go-git.v4/config"
"gopkg.in/src-d/go-git.v4/plumbing"
"gopkg.in/src-d/go-git.v4/plumbing/transport/ssh"
"io/ioutil"
"net/url"
"os"
"os/exec"
"path"
"regexp"
"runtime/debug"
"strings"
)
var GitCron *GitCronScheduler
type GitCronScheduler struct {
cron *cron.Cron
}
func SaveSpiderGitSyncError(s model.Spider, errMsg string) {
s, _ = model.GetSpider(s.Id)
s.GitSyncError = errMsg
if err := s.Save(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return
}
}
func GetGitBranches(url string) (branches []string, err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("git", "ls-remote", url)
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return branches, err
}
for _, line := range strings.Split(stdout.String(), "\n") {
regex := regexp.MustCompile("refs/heads/(.*)$")
res := regex.FindStringSubmatch(line)
if len(res) > 1 {
branches = append(branches, res[1])
}
}
return branches, nil
}
func ResetSpiderGit(s model.Spider) (err error) {
// Remove the spider directory
if err := os.RemoveAll(s.Src); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Create an empty directory
if err := os.MkdirAll(s.Src, os.ModePerm); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Sync to GridFS
if err := UploadSpiderToGridFsFromMaster(s); err != nil {
return err
}
return nil
}
func SyncSpiderGit(s model.Spider) (err error) {
// If .git does not exist, initialize a repository
if !utils.Exists(path.Join(s.Src, ".git")) {
_, err := git.PlainInit(s.Src, false)
if err != nil {
log.Error(err.Error())
debug.PrintStack()
return err
}
}
// Open the repo
repo, err := git.PlainOpen(s.Src)
if err != nil {
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Build the URL
gitUrl := s.GitUrl
if s.GitUsername != "" && s.GitPassword != "" {
u, err := url.Parse(s.GitUrl)
if err != nil {
SaveSpiderGitSyncError(s, err.Error())
return err
}
gitUrl = fmt.Sprintf(
"%s://%s:%s@%s%s",
u.Scheme,
s.GitUsername,
s.GitPassword,
u.Hostname(),
u.Path,
)
}
// Create the remote
_ = repo.DeleteRemote("origin")
_, err = repo.CreateRemote(&config.RemoteConfig{
Name: "origin",
URLs: []string{gitUrl},
})
if err != nil {
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Build authentication info
var auth ssh.AuthMethod
if !strings.HasPrefix(s.GitUrl, "http") {
// SSH URL
regex := regexp.MustCompile("^(?:ssh://?)?([0-9a-zA-Z_]+)@")
res := regex.FindStringSubmatch(s.GitUrl)
username := s.GitUsername
if username == "" {
if len(res) > 1 {
username = res[1]
} else {
username = "git"
}
}
auth, err = ssh.NewPublicKeysFromFile(username, path.Join(os.Getenv("HOME"), ".ssh", "id_rsa"), "")
if err != nil {
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
}
// Fetch the repo
_ = repo.Fetch(&git.FetchOptions{
RemoteName: "origin",
Force: true,
Auth: auth,
})
// Get the WorkTree
wt, err := repo.Worktree()
if err != nil {
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Pull the repo
if err := wt.Pull(&git.PullOptions{
RemoteName: "origin",
Auth: auth,
}); err != nil {
if err.Error() == "already up-to-date" {
// Check whether it is a Scrapy project
sync := spider_handler.SpiderSync{Spider: s}
sync.CheckIsScrapy()
// If there is no error, save an empty string
SaveSpiderGitSyncError(s, "")
return nil
}
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Check out the branch
if err := wt.Checkout(&git.CheckoutOptions{
Branch: plumbing.NewBranchReferenceName(s.GitBranch),
}); err != nil {
log.Error(err.Error())
debug.PrintStack()
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Sync to GridFS
if err := UploadSpiderToGridFsFromMaster(s); err != nil {
SaveSpiderGitSyncError(s, err.Error())
return err
}
// Check whether it is a Scrapy project
sync := spider_handler.SpiderSync{Spider: s}
sync.CheckIsScrapy()
// If there is no error, save an empty string
SaveSpiderGitSyncError(s, "")
return nil
}
func (g *GitCronScheduler) Start() error {
// Start the cron service
g.cron.Start()
// Update the job list
if err := g.Update(); err != nil {
log.Errorf("update scheduler error: %s", err.Error())
debug.PrintStack()
return err
}
// Update the job list every 30 seconds
spec := "*/30 * * * * *"
if _, err := g.cron.AddFunc(spec, UpdateGitCron); err != nil {
log.Errorf("add func update schedulers error: %s", err.Error())
debug.PrintStack()
return err
}
return nil
}
func (g *GitCronScheduler) RemoveAll() {
entries := g.cron.Entries()
for i := 0; i < len(entries); i++ {
g.cron.Remove(entries[i].ID)
}
}
func (g *GitCronScheduler) Update() error {
// Remove all scheduled jobs
g.RemoveAll()
// Get spiders with Git auto-sync enabled
spiders, err := model.GetSpiderAllList(bson.M{"git_auto_sync": true})
if err != nil {
log.Errorf("get spider list error: %s", err.Error())
debug.PrintStack()
return err
}
// Iterate over the spider list
for _, s := range spiders {
// Add to scheduled jobs
if err := g.AddJob(s); err != nil {
log.Errorf("add job error: %s, job: %s, cron: %s", err.Error(), s.Name, s.GitSyncFrequency)
debug.PrintStack()
return err
}
}
return nil
}
func (g *GitCronScheduler) AddJob(s model.Spider) error {
spec := s.GitSyncFrequency
// Add the scheduled job
_, err := g.cron.AddFunc(spec, AddGitCronJob(s))
if err != nil {
log.Errorf("add func task error: %s", err.Error())
debug.PrintStack()
return err
}
return nil
}
func AddGitCronJob(s model.Spider) func() {
return func() {
if err := SyncSpiderGit(s); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return
}
}
}
func UpdateGitCron() {
if err := GitCron.Update(); err != nil {
log.Errorf(err.Error())
return
}
}
func GetGitSshPublicKey() string {
if !utils.Exists(path.Join(os.Getenv("HOME"), ".ssh")) ||
!utils.Exists(path.Join(os.Getenv("HOME"), ".ssh", "id_rsa")) ||
!utils.Exists(path.Join(os.Getenv("HOME"), ".ssh", "id_rsa.pub")) {
log.Errorf("no ssh public key")
debug.PrintStack()
return ""
}
content, err := ioutil.ReadFile(path.Join(os.Getenv("HOME"), ".ssh", "id_rsa.pub"))
if err != nil {
return ""
}
return string(content)
}

View File

@@ -22,6 +22,27 @@ func AddScheduleTask(s model.Schedule) func() {
// Generate task ID
id := uuid.NewV4()
// Parameters
var param string
// Spider
spider, err := model.GetSpider(s.SpiderId)
if err != nil {
return
}
// Scrapy spider
if spider.IsScrapy {
if s.ScrapySpider == "" {
log.Errorf("scrapy spider is not set")
debug.PrintStack()
return
}
param = s.ScrapySpider + " -L " + s.ScrapyLogLevel + " " + s.Param
} else {
param = s.Param
}
if s.RunType == constants.RunTypeAllNodes {
// All nodes
nodes, err := model.GetNodeList(nil)
@@ -33,7 +54,7 @@ func AddScheduleTask(s model.Schedule) func() {
Id: id.String(),
SpiderId: s.SpiderId,
NodeId: node.Id,
Param: s.Param,
Param: param,
UserId: s.UserId,
}
@@ -46,7 +67,7 @@ func AddScheduleTask(s model.Schedule) func() {
t := model.Task{
Id: id.String(),
SpiderId: s.SpiderId,
Param: s.Param,
Param: param,
UserId: s.UserId,
}
if _, err := AddTask(t); err != nil {
@@ -61,7 +82,7 @@ func AddScheduleTask(s model.Schedule) func() {
Id: id.String(),
SpiderId: s.SpiderId,
NodeId: nodeId,
Param: s.Param,
Param: param,
UserId: s.UserId,
}

284
backend/services/scrapy.go Normal file
View File

@@ -0,0 +1,284 @@
package services
import (
"bytes"
"crawlab/constants"
"crawlab/entity"
"crawlab/model"
"encoding/json"
"fmt"
"github.com/Unknwon/goconfig"
"github.com/apex/log"
"io/ioutil"
"os"
"os/exec"
"path"
"runtime/debug"
"strconv"
"strings"
)
func GetScrapySpiderNames(s model.Spider) ([]string, error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("scrapy", "list")
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return []string{}, err
}
spiderNames := strings.Split(stdout.String(), "\n")
var res []string
for _, sn := range spiderNames {
if sn != "" {
res = append(res, sn)
}
}
return res, nil
}
func GetScrapySettings(s model.Spider) (res []map[string]interface{}, err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("crawlab", "settings")
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf(stderr.String())
debug.PrintStack()
return res, err
}
if err := json.Unmarshal([]byte(stdout.String()), &res); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return res, err
}
return res, nil
}
func SaveScrapySettings(s model.Spider, settingsData []entity.ScrapySettingParam) (err error) {
// Read scrapy.cfg
cfg, err := goconfig.LoadConfigFile(path.Join(s.Src, "scrapy.cfg"))
if err != nil {
return
}
modName, err := cfg.GetValue("settings", "default")
if err != nil {
return
}
// Locate the settings.py file
arr := strings.Split(modName, ".")
dirName := arr[0]
fileName := arr[1]
filePath := fmt.Sprintf("%s/%s/%s.py", s.Src, dirName, fileName)
// Generate file content
content := ""
for _, param := range settingsData {
var line string
switch param.Type {
case constants.String:
line = fmt.Sprintf("%s = '%s'", param.Key, param.Value)
case constants.Number:
n := int64(param.Value.(float64))
s := strconv.FormatInt(n, 10)
line = fmt.Sprintf("%s = %s", param.Key, s)
case constants.Boolean:
if param.Value.(bool) {
line = fmt.Sprintf("%s = %s", param.Key, "True")
} else {
line = fmt.Sprintf("%s = %s", param.Key, "False")
}
case constants.Array:
arr := param.Value.([]interface{})
var arrStr []string
for _, s := range arr {
arrStr = append(arrStr, s.(string))
}
line = fmt.Sprintf("%s = ['%s']", param.Key, strings.Join(arrStr, "','"))
case constants.Object:
value := param.Value.(map[string]interface{})
var arr []string
for k, v := range value {
n := int64(v.(float64))
s := strconv.FormatInt(n, 10)
arr = append(arr, fmt.Sprintf("'%s': %s", k, s))
}
line = fmt.Sprintf("%s = {%s}", param.Key, strings.Join(arr, ","))
}
content += line + "\n"
}
// Write to settings.py
if err := ioutil.WriteFile(filePath, []byte(content), os.ModePerm); err != nil {
return err
}
// Sync to GridFS
if err := UploadSpiderToGridFsFromMaster(s); err != nil {
return err
}
return
}
func GetScrapyItems(s model.Spider) (res []map[string]interface{}, err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("crawlab", "items")
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf(stderr.String())
debug.PrintStack()
return res, err
}
if err := json.Unmarshal([]byte(stdout.String()), &res); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return res, err
}
return res, nil
}
func SaveScrapyItems(s model.Spider, itemsData []entity.ScrapyItem) (err error) {
// Read scrapy.cfg
cfg, err := goconfig.LoadConfigFile(path.Join(s.Src, "scrapy.cfg"))
if err != nil {
return
}
modName, err := cfg.GetValue("settings", "default")
if err != nil {
return
}
// Locate the items.py file
arr := strings.Split(modName, ".")
dirName := arr[0]
fileName := "items"
filePath := fmt.Sprintf("%s/%s/%s.py", s.Src, dirName, fileName)
// Generate file content
content := ""
content += "import scrapy\n"
content += "\n\n"
for _, item := range itemsData {
content += fmt.Sprintf("class %s(scrapy.Item):\n", item.Name)
for _, field := range item.Fields {
content += fmt.Sprintf(" %s = scrapy.Field()\n", field)
}
content += "\n\n"
}
// Write to items.py
if err := ioutil.WriteFile(filePath, []byte(content), os.ModePerm); err != nil {
return err
}
// Sync to GridFS
if err := UploadSpiderToGridFsFromMaster(s); err != nil {
return err
}
return
}
func GetScrapyPipelines(s model.Spider) (res []string, err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("crawlab", "pipelines")
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf(stderr.String())
debug.PrintStack()
return res, err
}
if err := json.Unmarshal([]byte(stdout.String()), &res); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return res, err
}
return res, nil
}
func GetScrapySpiderFilepath(s model.Spider, spiderName string) (res string, err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("crawlab", "find_spider_filepath", "-n", spiderName)
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf(stderr.String())
debug.PrintStack()
return res, err
}
res = strings.Replace(stdout.String(), "\n", "", 1)
return res, nil
}
func CreateScrapySpider(s model.Spider, name string, domain string, template string) (err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("scrapy", "genspider", name, domain, "-t", template)
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf("stdout: " + stdout.String())
log.Errorf("stderr: " + stderr.String())
debug.PrintStack()
return err
}
return
}
func CreateScrapyProject(s model.Spider) (err error) {
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd := exec.Command("scrapy", "startproject", s.Name, s.Src)
cmd.Dir = s.Src
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
log.Errorf(err.Error())
log.Errorf("stdout: " + stdout.String())
log.Errorf("stderr: " + stderr.String())
debug.PrintStack()
return err
}
return
}

View File

@@ -15,11 +15,13 @@ import (
"github.com/satori/go.uuid"
"github.com/spf13/viper"
"gopkg.in/yaml.v2"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime/debug"
"time"
)
type SpiderFileData struct {
@@ -83,6 +85,9 @@ func UploadSpiderToGridFsFromMaster(spider model.Spider) error {
// Generate MD5
spiderSync.CreateMd5File(gfFile2.Md5)
// Check whether it is a Scrapy spider
spiderSync.CheckIsScrapy()
return nil
}
@@ -200,6 +205,7 @@ func PublishSpider(spider model.Spider) {
log.Infof("path not found: %s", path)
spiderSync.Download()
spiderSync.CreateMd5File(gfFile.Md5)
spiderSync.CheckIsScrapy()
return
}
// Download if the md5 file does not exist
@@ -257,15 +263,165 @@ func RemoveSpider(id string) error {
return nil
}
func CancelSpider(id string) error {
// Get the spider
spider, err := model.GetSpider(bson.ObjectIdHex(id))
if err != nil {
return err
}
// Get the spider's pending or running tasks
query := bson.M{
"spider_id": spider.Id,
"status": bson.M{
"$in": []string{
constants.StatusPending,
constants.StatusRunning,
},
},
}
tasks, err := model.GetTaskList(query, 0, constants.Infinite, "-create_ts")
if err != nil {
return err
}
// Iterate over the task list and cancel each task
for _, task := range tasks {
if err := CancelTask(task.Id); err != nil {
return err
}
}
return nil
}
func cloneGridFsFile(spider model.Spider, newName string) (err error) {
// Construct the new spider
newSpider := spider
newSpider.Id = bson.NewObjectId()
newSpider.Name = newName
newSpider.DisplayName = newName
newSpider.Src = path.Join(path.Dir(spider.Src), newName)
newSpider.CreateTs = time.Now()
newSpider.UpdateTs = time.Now()
// GridFS connection instance
s, gf := database.GetGridFs("files")
defer s.Close()
// GridFS file of the spider being cloned
f, err := gf.OpenId(spider.FileId)
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// GridFS file of the new spider
fNew, err := gf.Create(newSpider.Name + ".zip")
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Generate a unique ID
randomId := uuid.NewV4()
tmpPath := viper.GetString("other.tmppath")
if !utils.Exists(tmpPath) {
if err := os.MkdirAll(tmpPath, 0777); err != nil {
log.Errorf("mkdir other.tmppath error: %v", err.Error())
return err
}
}
// Create a temporary file
tmpFilePath := filepath.Join(tmpPath, randomId.String()+".zip")
tmpFile := utils.OpenFile(tmpFilePath)
// Copy to the temporary file
if _, err := io.Copy(tmpFile, f); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Close the temporary file
if err := tmpFile.Close(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Read the content
fContent, err := ioutil.ReadFile(tmpFilePath)
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Write to the GridFS file
if _, err := fNew.Write(fContent); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Close the cloned spider's GridFS file
if err = f.Close(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Assign the new GridFS file to the new spider
newSpider.FileId = fNew.Id().(bson.ObjectId)
// Save the new spider
if err := newSpider.Add(); err != nil {
return err
}
// Close the new spider's GridFS file
if err := fNew.Close(); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Remove the temporary file
if err := os.RemoveAll(tmpFilePath); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return err
}
// Publish the spider
PublishSpider(newSpider)
return nil
}
func CopySpider(spider model.Spider, newName string) error {
// Clone the GridFS file
if err := cloneGridFsFile(spider, newName); err != nil {
return err
}
return nil
}
// Start the spider service
func InitSpiderService() error {
// Construct the cron scheduler
c := cron.New(cron.WithSeconds())
if _, err := c.AddFunc("0 * * * * *", PublishAllSpiders); err != nil {
cPub := cron.New(cron.WithSeconds())
if _, err := cPub.AddFunc("0 * * * * *", PublishAllSpiders); err != nil {
return err
}
// Start the cron scheduler
c.Start()
cPub.Start()
if model.IsMaster() {
// Add demo spiders
@@ -370,6 +526,16 @@ func InitSpiderService() error {
// Publish all spiders
PublishAllSpiders()
// Construct the Git cron scheduler
GitCron = &GitCronScheduler{
cron: cron.New(cron.WithSeconds()),
}
// Start the Git cron scheduler
if err := GitCron.Start(); err != nil {
return err
}
}
return nil

View File

@@ -1,6 +1,7 @@
package spider_handler
import (
"crawlab/constants"
"crawlab/database"
"crawlab/model"
"crawlab/utils"
@@ -12,6 +13,7 @@ import (
"io"
"os"
"os/exec"
"path"
"path/filepath"
"runtime/debug"
)
@@ -39,10 +41,30 @@ func (s *SpiderSync) CreateMd5File(md5 string) {
}
}
func (s *SpiderSync) CheckIsScrapy() {
if s.Spider.Type == constants.Configurable {
return
}
s.Spider.IsScrapy = utils.Exists(path.Join(s.Spider.Src, "scrapy.cfg"))
// TODO: auto-detection of Scrapy projects is temporarily disabled
//if err := s.Spider.Save(); err != nil {
// log.Errorf(err.Error())
// debug.PrintStack()
// return
//}
}
func (s *SpiderSync) AfterRemoveDownCreate() {
if model.IsMaster() {
s.CheckIsScrapy()
}
}
func (s *SpiderSync) RemoveDownCreate(md5 string) {
s.RemoveSpiderFile()
s.Download()
s.CreateMd5File(md5)
s.AfterRemoveDownCreate()
}
// Get the key of the download lock

View File

@@ -243,7 +243,6 @@ func ExecuteShellCmd(cmdStr string, cwd string, t model.Task, s model.Spider) (e
if runtime.GOOS == constants.Windows {
cmd = exec.Command("cmd", "/C", cmdStr)
} else {
cmd = exec.Command("")
cmd = exec.Command("sh", "-c", cmdStr)
}
@@ -314,13 +313,13 @@ func MakeLogDir(t model.Task) (fileDir string, err error) {
}
// Get the log file path
func GetLogFilePaths(fileDir string) (filePath string) {
func GetLogFilePaths(fileDir string, t model.Task) (filePath string) {
// Timestamp
ts := time.Now()
tsStr := ts.Format("20060102150405")
// stdout log file
filePath = filepath.Join(fileDir, tsStr+".log")
filePath = filepath.Join(fileDir, t.Id+"_"+tsStr+".log")
return filePath
}
@@ -351,10 +350,9 @@ func SaveTaskResultCount(id string) func() {
func ExecuteTask(id int) {
if flag, ok := LockList.Load(id); ok {
if flag.(bool) {
log.Debugf(GetWorkerPrefix(id) + "正在执行任务...")
log.Debugf(GetWorkerPrefix(id) + "running tasks...")
return
}
}
// Lock
@@ -379,6 +377,7 @@ func ExecuteTask(id int) {
// Node queue
queueCur := "tasks:node:" + node.Id.Hex()
// Task from the node queue
var msg string
if msg, err = database.RedisClient.LPop(queueCur); err != nil {
@@ -388,6 +387,7 @@ func ExecuteTask(id int) {
}
}
// If no task was fetched, return
if msg == "" {
return
}
@@ -419,7 +419,7 @@ func ExecuteTask(id int) {
return
}
// Get the log file path
t.LogPath = GetLogFilePaths(fileDir)
t.LogPath = GetLogFilePaths(fileDir, t)
// Working directory
cwd := filepath.Join(
@@ -505,6 +505,8 @@ func ExecuteTask(id int) {
log.Errorf(GetWorkerPrefix(id) + err.Error())
return
}
// Statistics
t.Status = constants.StatusFinished // task status: finished
t.FinishTs = time.Now() // finish time
t.RuntimeDuration = t.FinishTs.Sub(t.StartTs).Seconds() // runtime duration
@@ -534,7 +536,7 @@ func SpiderFileCheck(t model.Task, spider model.Spider) error {
// Check whether the spider file exists
gfFile := model.GetGridFs(spider.FileId)
if gfFile == nil {
t.Error = "找不到爬虫文件,请重新上传"
t.Error = "cannot find spider files, please re-upload"
t.Status = constants.StatusError
t.FinishTs = time.Now() // finish time
t.RuntimeDuration = t.FinishTs.Sub(t.StartTs).Seconds() // runtime duration
@@ -569,7 +571,7 @@ func GetTaskLog(id string) (logStr string, err error) {
log.Errorf(err.Error())
}
fileP := GetLogFilePaths(fileDir)
fileP := GetLogFilePaths(fileDir, task)
// Get the log file path
fLog, err := os.Create(fileP)
@@ -850,6 +852,14 @@ func SendNotifications(u model.User, t model.Task, s model.Spider) {
}
}
func UnlockLongTask(s model.Spider, n model.Node) {
if s.IsLongTask {
colName := "long-tasks"
key := fmt.Sprintf("%s:%s", s.Id.Hex(), n.Id.Hex())
_ = database.RedisClient.HDel(colName, key)
}
}
func InitTaskExecutor() error {
c := cron.New(cron.WithSeconds())
Exec = &Executor{

View File

@@ -0,0 +1,29 @@
package services
import (
"crawlab/entity"
"github.com/apex/log"
"github.com/imroc/req"
"runtime/debug"
"sort"
)
func GetLatestRelease() (release entity.Release, err error) {
res, err := req.Get("https://api.github.com/repos/crawlab-team/crawlab/releases")
if err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return release, err
}
var releaseDataList entity.ReleaseSlices
if err := res.ToJSON(&releaseDataList); err != nil {
log.Errorf(err.Error())
debug.PrintStack()
return release, err
}
sort.Sort(releaseDataList)
return releaseDataList[len(releaseDataList)-1], nil
}

View File

@@ -1,29 +1,33 @@
package utils
import (
"sync"
)
var TaskExecChanMap = NewChanMap()
type ChanMap struct {
m map[string]chan string
m sync.Map
}
func NewChanMap() *ChanMap {
return &ChanMap{m: make(map[string]chan string)}
return &ChanMap{m: sync.Map{}}
}
func (cm *ChanMap) Chan(key string) chan string {
if ch, ok := cm.m[key]; ok {
return ch
if ch, ok := cm.m.Load(key); ok {
return ch.(chan string)
}
ch := make(chan string, 10)
cm.m[key] = ch
cm.m.Store(key, ch)
return ch
}
func (cm *ChanMap) ChanBlocked(key string) chan string {
if ch, ok := cm.m[key]; ok {
return ch
if ch, ok := cm.m.Load(key); ok {
return ch.(chan string)
}
ch := make(chan string)
cm.m[key] = ch
cm.m.Store(key, ch)
return ch
}

View File

@@ -0,0 +1,3 @@
.DS_Store
*.iml
.idea

191
backend/vendor/github.com/Unknwon/goconfig/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

68
backend/vendor/github.com/Unknwon/goconfig/README.md generated vendored Normal file
View File

@@ -0,0 +1,68 @@
goconfig [![Go Walker](http://gowalker.org/api/v1/badge)](http://gowalker.org/github.com/Unknwon/goconfig)
========
[中文文档](README_ZH.md)
**IMPORTANT**
- This library is under bug fix only mode, which means no more features will be added.
- I'm continuing working on better Go code with a different library: [ini](https://github.com/go-ini/ini).
## About
Package goconfig is an easy-to-use, comment-supporting configuration file parser for the Go programming language, which provides a structure similar to what you would find in Microsoft Windows INI files.
The configuration file consists of sections, led by a `[section]` header and followed by `name:value` or `name=value` entries. Note that leading whitespace is removed from values. The optional values can contain format strings which refer to other values in the same section, or values in a special DEFAULT section. Comments are indicated by ";" or "#"; comments may begin anywhere on a single line.
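For illustration, a small file in this format might look like the following (the section and key names are invented for this example):

```ini
; comment attached to the DEFAULT section
app_name = demo

[server]
# comment attached to the host key
host = 127.0.0.1
port = 8080

[paths]
; %(app_name)s is substituted from the DEFAULT section
data_dir = /var/lib/%(app_name)s
```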
## Features
- It simplifies operation processes and is easy to use and understand; therefore, there are fewer chances of errors.
- It uses exactly the same way to access a configuration file as the Windows APIs, so you don't need to change your code style.
- It supports reading recursive sections.
- It supports auto-increment of keys.
- It supports **READ** and **WRITE** of configuration files with comments on each section or key, which most other parsers don't support.
- It supports getting values as bool, float64, int, int64 and string; methods that start with "Must" ignore errors and return a zero value if an error occurs, or you can specify a default value.
- It's able to load multiple files to overwrite key values.
## Installation
go get github.com/unknwon/goconfig
## API Documentation
[Go Walker](http://gowalker.org/github.com/unknwon/goconfig).
## Example
Please see [conf.ini](testdata/conf.ini) as an example.
### Usage
- Function `LoadConfigFile` loads one or more files depending on your situation, and returns a variable of type `ConfigFile`.
- `GetValue` provides the basic functionality of getting the value of a given section and key.
- Methods like `Bool`, `Int`, `Int64` return values of the corresponding type.
- Methods starting with `Must` return a value of the corresponding type and return the zero value of that type if something goes wrong.
- `SetValue` sets the value of a given section and key, and inserts it if it does not exist.
- `DeleteKey` deletes by given section and key.
- Finally, `SaveConfigFile` saves your configuration to the local file system.
- Use the `Reload` method in case someone else modified your file(s).
- Methods containing `Comment` help you manipulate comments.
- `LoadFromReader` allows loading data without an intermediate file.
- `SaveConfigData` writes configuration to an arbitrary writer.
- `ReloadData` allows reloading data from memory.
Note that you cannot mix in-memory configuration with on-disk configuration. A minimal usage sketch is shown below.
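Putting the pieces above together, a minimal usage sketch might look like this; the file path, section names and keys are hypothetical, and error handling is kept short for brevity:

```go
package main

import (
	"fmt"

	"github.com/Unknwon/goconfig"
)

func main() {
	// Load the configuration file (path is hypothetical).
	cfg, err := goconfig.LoadConfigFile("conf.ini")
	if err != nil {
		panic(err)
	}

	// Basic read: returns an error if the section or key is missing.
	host, err := cfg.GetValue("server", "host")
	if err != nil {
		fmt.Println("key not found:", err)
	}
	fmt.Println("host =", host)

	// Must* methods ignore errors and fall back to the given default.
	port := cfg.MustInt("server", "port", 8080)
	fmt.Println("port =", port)

	// Set (or insert) a value, then persist the configuration back to disk.
	cfg.SetValue("server", "debug", "true")
	if err := goconfig.SaveConfigFile(cfg, "conf.ini"); err != nil {
		panic(err)
	}
}
```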
## More Information
- All characters are CASE SENSITIVE, BE CAREFUL!
## Credits
- [goconf](http://code.google.com/p/goconf/)
- [robfig/config](https://github.com/robfig/config)
- [Delete an item from a slice](https://groups.google.com/forum/?fromgroups=#!topic/golang-nuts/lYz8ftASMQ0)
## License
This project is under Apache v2 License. See the [LICENSE](LICENSE) file for the full license text.

View File

@@ -0,0 +1,64 @@
goconfig [![Build Status](https://drone.io/github.com/Unknwon/goconfig/status.png)](https://drone.io/github.com/Unknwon/goconfig/latest) [![Go Walker](http://gowalker.org/api/v1/badge)](http://gowalker.org/github.com/Unknwon/goconfig)
========
This library is covered in [Go名库讲解](https://github.com/Unknwon/go-rock-libraries-showcases/tree/master/lectures/01-goconfig); you are welcome to check it out to learn how to use it!
The coding style is based on [Go coding conventions](https://github.com/Unknwon/go-code-convention).
## About
goconfig is an easy-to-use, comment-supporting configuration file parser for Go; the file format is the same as INI files on Windows.
A configuration file consists of sections of the form `[section]`, which contain key-value pairs such as `name:value` or `name=value`. Leading and trailing whitespace on each line is ignored. If no section is specified, entries are placed by default in a section named `DEFAULT`. Comments start with `;` or `#` and may occupy any single line.
## Features
- Simplified workflow: easy to understand and less error-prone.
- Provides exactly the same style of operations as the Windows APIs.
- Supports reading recursive sections.
- Supports auto-increment key names.
- Supports **reading** and **writing** comments, which other parsers do not support.
- Can directly return values of type bool, float64, int, int64 and string; methods starting with `Must` always return a value of that type instead of an error, returning the zero value when an error occurs.
- Supports loading multiple files to overwrite values.
## Installation
go get github.com/Unknwon/goconfig
gopm get github.com/Unknwon/goconfig
## API Documentation
[Go Walker](http://gowalker.org/github.com/unknwon/goconfig).
## Example
Please see [conf.ini](testdata/conf.ini) as a usage example.
### Usage
- The function `LoadConfigFile` loads one or more files and returns a variable of type `ConfigFile`.
- `GetValue` simply retrieves a value.
- Methods such as `Bool`, `Int` and `Int64` return values of the corresponding type directly.
- Methods starting with `Must` do not return an error, but return the zero value when an error occurs.
- `SetValue` sets a value.
- `DeleteKey` deletes a key.
- Finally, `SaveConfigFile` saves your configuration to the local file system.
- Use the `Reload` method to reload your configuration file.
## More Information
- All characters are case sensitive!
## Credits
- [goconf](http://code.google.com/p/goconf/)
- [robfig/config](https://github.com/robfig/config)
- [Delete an item from a slice](https://groups.google.com/forum/?fromgroups=#!topic/golang-nuts/lYz8ftASMQ0)
## License
This project is under the Apache v2 License. See the [LICENSE](LICENSE) file for the full license text.

555
backend/vendor/github.com/Unknwon/goconfig/conf.go generated vendored Normal file
View File

@@ -0,0 +1,555 @@
// Copyright 2013 Unknwon
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
// Package goconfig is a fully functional and comments-support configuration file(.ini) parser.
package goconfig
import (
"fmt"
"regexp"
"runtime"
"strconv"
"strings"
"sync"
)
const (
// Default section name.
DEFAULT_SECTION = "DEFAULT"
// Maximum allowed depth when recursively substituting variable names.
_DEPTH_VALUES = 200
)
type ParseError int
const (
ERR_SECTION_NOT_FOUND ParseError = iota + 1
ERR_KEY_NOT_FOUND
ERR_BLANK_SECTION_NAME
ERR_COULD_NOT_PARSE
)
var LineBreak = "\n"
// Variable regexp pattern: %(variable)s
var varPattern = regexp.MustCompile(`%\(([^\)]+)\)s`)
func init() {
if runtime.GOOS == "windows" {
LineBreak = "\r\n"
}
}
// A ConfigFile represents an INI format configuration file.
type ConfigFile struct {
lock sync.RWMutex // Go map is not safe.
fileNames []string // Supports multiple files.
data map[string]map[string]string // Section -> key : value
// Lists can keep sections and keys in order.
sectionList []string // Section name list.
keyList map[string][]string // Section -> Key name list
sectionComments map[string]string // Sections comments.
keyComments map[string]map[string]string // Keys comments.
BlockMode bool // Indicates whether use lock or not.
}
// newConfigFile creates an empty configuration representation.
func newConfigFile(fileNames []string) *ConfigFile {
c := new(ConfigFile)
c.fileNames = fileNames
c.data = make(map[string]map[string]string)
c.keyList = make(map[string][]string)
c.sectionComments = make(map[string]string)
c.keyComments = make(map[string]map[string]string)
c.BlockMode = true
return c
}
// SetValue adds a new section-key-value to the configuration.
// It returns true if the key and value were inserted,
// or returns false if the value was overwritten.
// If the section does not exist in advance, it will be created.
func (c *ConfigFile) SetValue(section, key, value string) bool {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if len(key) == 0 {
return false
}
if c.BlockMode {
c.lock.Lock()
defer c.lock.Unlock()
}
// Check if section exists.
if _, ok := c.data[section]; !ok {
// Execute add operation.
c.data[section] = make(map[string]string)
// Append section to list.
c.sectionList = append(c.sectionList, section)
}
// Check if key exists.
_, ok := c.data[section][key]
c.data[section][key] = value
if !ok {
// If not exists, append to key list.
c.keyList[section] = append(c.keyList[section], key)
}
return !ok
}
// DeleteKey deletes the key in given section.
// It returns true if the key was deleted,
// or returns false if the section or key didn't exist.
func (c *ConfigFile) DeleteKey(section, key string) bool {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if c.BlockMode {
c.lock.Lock()
defer c.lock.Unlock()
}
// Check if section exists.
if _, ok := c.data[section]; !ok {
return false
}
// Check if key exists.
if _, ok := c.data[section][key]; ok {
delete(c.data[section], key)
// Remove comments of key.
c.SetKeyComments(section, key, "")
// Get index of key.
i := 0
for _, keyName := range c.keyList[section] {
if keyName == key {
break
}
i++
}
// Remove from key list.
c.keyList[section] = append(c.keyList[section][:i], c.keyList[section][i+1:]...)
return true
}
return false
}
// GetValue returns the value of key available in the given section.
// If the value needs to be unfolded
// (see e.g. %(google)s example in the GoConfig_test.go),
// then String does this unfolding automatically, up to
// _DEPTH_VALUES number of iterations.
// It returns an error and empty string value if the section does not exist,
// or key does not exist in DEFAULT and current sections.
func (c *ConfigFile) GetValue(section, key string) (string, error) {
if c.BlockMode {
c.lock.RLock()
defer c.lock.RUnlock()
}
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
// Check if section exists
if _, ok := c.data[section]; !ok {
// Section does not exist.
return "", getError{ERR_SECTION_NOT_FOUND, section}
}
// Section exists.
// Check if key exists or empty value.
value, ok := c.data[section][key]
if !ok {
// Check if it is a sub-section.
if i := strings.LastIndex(section, "."); i > -1 {
return c.GetValue(section[:i], key)
}
// Return empty value.
return "", getError{ERR_KEY_NOT_FOUND, key}
}
// Key exists.
var i int
for i = 0; i < _DEPTH_VALUES; i++ {
vr := varPattern.FindString(value)
if len(vr) == 0 {
break
}
// Take off leading '%(' and trailing ')s'.
noption := strings.TrimLeft(vr, "%(")
noption = strings.TrimRight(noption, ")s")
// Search variable in default section.
nvalue, err := c.GetValue(DEFAULT_SECTION, noption)
if err != nil && section != DEFAULT_SECTION {
// Search in the same section.
if _, ok := c.data[section][noption]; ok {
nvalue = c.data[section][noption]
}
}
// Substitute by new value and take off leading '%(' and trailing ')s'.
value = strings.Replace(value, vr, nvalue, -1)
}
return value, nil
}
// Bool returns bool type value.
func (c *ConfigFile) Bool(section, key string) (bool, error) {
value, err := c.GetValue(section, key)
if err != nil {
return false, err
}
return strconv.ParseBool(value)
}
// Float64 returns float64 type value.
func (c *ConfigFile) Float64(section, key string) (float64, error) {
value, err := c.GetValue(section, key)
if err != nil {
return 0.0, err
}
return strconv.ParseFloat(value, 64)
}
// Int returns int type value.
func (c *ConfigFile) Int(section, key string) (int, error) {
value, err := c.GetValue(section, key)
if err != nil {
return 0, err
}
return strconv.Atoi(value)
}
// Int64 returns int64 type value.
func (c *ConfigFile) Int64(section, key string) (int64, error) {
value, err := c.GetValue(section, key)
if err != nil {
return 0, err
}
return strconv.ParseInt(value, 10, 64)
}
// MustValue always returns value without error.
// It returns empty string if error occurs, or the default value if given.
func (c *ConfigFile) MustValue(section, key string, defaultVal ...string) string {
val, err := c.GetValue(section, key)
if len(defaultVal) > 0 && (err != nil || len(val) == 0) {
return defaultVal[0]
}
return val
}
// MustValueSet always returns value without error,
// It returns empty string if error occurs, or the default value if given,
// and a bool value indicates whether default value is returned.
func (c *ConfigFile) MustValueSet(section, key string, defaultVal ...string) (string, bool) {
val, err := c.GetValue(section, key)
if len(defaultVal) > 0 && (err != nil || len(val) == 0) {
c.SetValue(section, key, defaultVal[0])
return defaultVal[0], true
}
return val, false
}
// MustValueRange always returns value without error,
// it returns default value if error occurs or doesn't fit into range.
func (c *ConfigFile) MustValueRange(section, key, defaultVal string, candidates []string) string {
val, err := c.GetValue(section, key)
if err != nil || len(val) == 0 {
return defaultVal
}
for _, cand := range candidates {
if val == cand {
return val
}
}
return defaultVal
}
// MustValueArray always returns value array without error,
// it returns empty array if error occurs, split by delimiter otherwise.
func (c *ConfigFile) MustValueArray(section, key, delim string) []string {
val, err := c.GetValue(section, key)
if err != nil || len(val) == 0 {
return []string{}
}
vals := strings.Split(val, delim)
for i := range vals {
vals[i] = strings.TrimSpace(vals[i])
}
return vals
}
// MustBool always returns value without error,
// it returns false if error occurs.
func (c *ConfigFile) MustBool(section, key string, defaultVal ...bool) bool {
val, err := c.Bool(section, key)
if len(defaultVal) > 0 && err != nil {
return defaultVal[0]
}
return val
}
// MustFloat64 always returns value without error,
// it returns 0.0 if error occurs.
func (c *ConfigFile) MustFloat64(section, key string, defaultVal ...float64) float64 {
value, err := c.Float64(section, key)
if len(defaultVal) > 0 && err != nil {
return defaultVal[0]
}
return value
}
// MustInt always returns value without error,
// it returns 0 if error occurs.
func (c *ConfigFile) MustInt(section, key string, defaultVal ...int) int {
value, err := c.Int(section, key)
if len(defaultVal) > 0 && err != nil {
return defaultVal[0]
}
return value
}
// MustInt64 always returns value without error,
// it returns 0 if error occurs.
func (c *ConfigFile) MustInt64(section, key string, defaultVal ...int64) int64 {
value, err := c.Int64(section, key)
if len(defaultVal) > 0 && err != nil {
return defaultVal[0]
}
return value
}
// GetSectionList returns the list of all sections
// in the same order in the file.
func (c *ConfigFile) GetSectionList() []string {
list := make([]string, len(c.sectionList))
copy(list, c.sectionList)
return list
}
// GetKeyList returns the list of all keys in give section
// in the same order in the file.
// It returns nil if given section does not exist.
func (c *ConfigFile) GetKeyList(section string) []string {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if c.BlockMode {
c.lock.RLock()
defer c.lock.RUnlock()
}
// Check if section exists.
if _, ok := c.data[section]; !ok {
return nil
}
// Non-default section has a blank key as section keeper.
list := make([]string, 0, len(c.keyList[section]))
for _, key := range c.keyList[section] {
if key != " " {
list = append(list, key)
}
}
return list
}
// DeleteSection deletes the entire section by given name.
// It returns true if the section was deleted, and false if the section didn't exist.
func (c *ConfigFile) DeleteSection(section string) bool {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if c.BlockMode {
c.lock.Lock()
defer c.lock.Unlock()
}
// Check if section exists.
if _, ok := c.data[section]; !ok {
return false
}
delete(c.data, section)
// Remove comments of section.
c.SetSectionComments(section, "")
// Get index of section.
i := 0
for _, secName := range c.sectionList {
if secName == section {
break
}
i++
}
// Remove from section and key list.
c.sectionList = append(c.sectionList[:i], c.sectionList[i+1:]...)
delete(c.keyList, section)
return true
}
// GetSection returns key-value pairs in given section.
// If section does not exist, returns nil and error.
func (c *ConfigFile) GetSection(section string) (map[string]string, error) {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if c.BlockMode {
c.lock.Lock()
defer c.lock.Unlock()
}
// Check if section exists.
if _, ok := c.data[section]; !ok {
// Section does not exist.
return nil, getError{ERR_SECTION_NOT_FOUND, section}
}
// Remove pre-defined key.
secMap := c.data[section]
delete(c.data[section], " ")
// Section exists.
return secMap, nil
}
// SetSectionComments adds new section comments to the configuration.
// If comments are empty(0 length), it will remove its section comments!
// It returns true if the comments were inserted or removed,
// or returns false if the comments were overwritten.
func (c *ConfigFile) SetSectionComments(section, comments string) bool {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if len(comments) == 0 {
if _, ok := c.sectionComments[section]; ok {
delete(c.sectionComments, section)
}
// Not exists can be seen as remove.
return true
}
// Check if comments exists.
_, ok := c.sectionComments[section]
if comments[0] != '#' && comments[0] != ';' {
comments = "; " + comments
}
c.sectionComments[section] = comments
return !ok
}
// SetKeyComments adds new section-key comments to the configuration.
// If comments are empty(0 length), it will remove its section-key comments!
// It returns true if the comments were inserted or removed,
// or returns false if the comments were overwritten.
// If the section does not exist in advance, it is created.
func (c *ConfigFile) SetKeyComments(section, key, comments string) bool {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
// Check if section exists.
if _, ok := c.keyComments[section]; ok {
if len(comments) == 0 {
if _, ok := c.keyComments[section][key]; ok {
delete(c.keyComments[section], key)
}
// Not exists can be seen as remove.
return true
}
} else {
if len(comments) == 0 {
// Not exists can be seen as remove.
return true
} else {
// Execute add operation.
c.keyComments[section] = make(map[string]string)
}
}
// Check if key exists.
_, ok := c.keyComments[section][key]
if comments[0] != '#' && comments[0] != ';' {
comments = "; " + comments
}
c.keyComments[section][key] = comments
return !ok
}
// GetSectionComments returns the comments in the given section.
// It returns an empty string(0 length) if the comments do not exist.
func (c *ConfigFile) GetSectionComments(section string) (comments string) {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
return c.sectionComments[section]
}
// GetKeyComments returns the comments of key in the given section.
// It returns an empty string(0 length) if the comments do not exist.
func (c *ConfigFile) GetKeyComments(section, key string) (comments string) {
// Blank section name represents DEFAULT section.
if len(section) == 0 {
section = DEFAULT_SECTION
}
if _, ok := c.keyComments[section]; ok {
return c.keyComments[section][key]
}
return ""
}
// getError occurs when get value in configuration file with invalid parameter.
type getError struct {
Reason ParseError
Name string
}
// Error implements Error interface.
func (err getError) Error() string {
switch err.Reason {
case ERR_SECTION_NOT_FOUND:
return fmt.Sprintf("section '%s' not found", err.Name)
case ERR_KEY_NOT_FOUND:
return fmt.Sprintf("key '%s' not found", err.Name)
}
return "invalid get error"
}

294
backend/vendor/github.com/Unknwon/goconfig/read.go generated vendored Normal file
View File

@@ -0,0 +1,294 @@
// Copyright 2013 Unknwon
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package goconfig
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"time"
)
// Read reads an io.Reader and returns a configuration representation.
// This representation can be queried with GetValue.
func (c *ConfigFile) read(reader io.Reader) (err error) {
buf := bufio.NewReader(reader)
// Handle BOM-UTF8.
// http://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding
mask, err := buf.Peek(3)
if err == nil && len(mask) >= 3 &&
mask[0] == 239 && mask[1] == 187 && mask[2] == 191 {
buf.Read(mask)
}
count := 1 // Counter for auto increment.
// Current section name.
section := DEFAULT_SECTION
var comments string
// Parse line-by-line
for {
line, err := buf.ReadString('\n')
line = strings.TrimSpace(line)
lineLengh := len(line) //[SWH|+]
if err != nil {
if err != io.EOF {
return err
}
// Reached end of file, if nothing to read then break,
// otherwise handle the last line.
if lineLengh == 0 {
break
}
}
// switch written for readability (not performance)
switch {
case lineLengh == 0: // Empty line
continue
case line[0] == '#' || line[0] == ';': // Comment
// Append comments
if len(comments) == 0 {
comments = line
} else {
comments += LineBreak + line
}
continue
case line[0] == '[' && line[lineLengh-1] == ']': // New section.
// Get section name.
section = strings.TrimSpace(line[1 : lineLengh-1])
// Set section comments and empty if it has comments.
if len(comments) > 0 {
c.SetSectionComments(section, comments)
comments = ""
}
// Make section exist even though it does not have any key.
c.SetValue(section, " ", " ")
// Reset counter.
count = 1
continue
case section == "": // No section defined so far
return readError{ERR_BLANK_SECTION_NAME, line}
default: // Other alternatives
var (
i int
keyQuote string
key string
valQuote string
value string
)
//[SWH|+]: support strings wrapped in quotes
if line[0] == '"' {
if lineLengh >= 6 && line[0:3] == `"""` {
keyQuote = `"""`
} else {
keyQuote = `"`
}
} else if line[0] == '`' {
keyQuote = "`"
}
if keyQuote != "" {
qLen := len(keyQuote)
pos := strings.Index(line[qLen:], keyQuote)
if pos == -1 {
return readError{ERR_COULD_NOT_PARSE, line}
}
pos = pos + qLen
i = strings.IndexAny(line[pos:], "=:")
if i <= 0 {
return readError{ERR_COULD_NOT_PARSE, line}
}
i = i + pos
key = line[qLen:pos] // keep spaces at both ends inside the quotes
} else {
i = strings.IndexAny(line, "=:")
if i <= 0 {
return readError{ERR_COULD_NOT_PARSE, line}
}
key = strings.TrimSpace(line[0:i])
}
//[SWH|+];
// Check if it needs auto increment.
if key == "-" {
key = "#" + fmt.Sprint(count)
count++
}
//[SWH|+]: support strings wrapped in quotes
lineRight := strings.TrimSpace(line[i+1:])
lineRightLength := len(lineRight)
firstChar := ""
if lineRightLength >= 2 {
firstChar = lineRight[0:1]
}
if firstChar == "`" {
valQuote = "`"
} else if lineRightLength >= 6 && lineRight[0:3] == `"""` {
valQuote = `"""`
}
if valQuote != "" {
qLen := len(valQuote)
pos := strings.LastIndex(lineRight[qLen:], valQuote)
if pos == -1 {
return readError{ERR_COULD_NOT_PARSE, line}
}
pos = pos + qLen
value = lineRight[qLen:pos]
} else {
value = strings.TrimSpace(lineRight[0:])
}
//[SWH|+];
c.SetValue(section, key, value)
// Set key comments and empty if it has comments.
if len(comments) > 0 {
c.SetKeyComments(section, key, comments)
comments = ""
}
}
// Reached end of file.
if err == io.EOF {
break
}
}
return nil
}
// LoadFromData accepts raw data directly from memory
// and returns a new configuration representation.
// Note that the configuration is written to the system
// temporary folder, so your file should not contain
// sensitive information.
func LoadFromData(data []byte) (c *ConfigFile, err error) {
// Save memory data to temporary file to support further operations.
tmpName := path.Join(os.TempDir(), "goconfig", fmt.Sprintf("%d", time.Now().Nanosecond()))
if err = os.MkdirAll(path.Dir(tmpName), os.ModePerm); err != nil {
return nil, err
}
if err = ioutil.WriteFile(tmpName, data, 0655); err != nil {
return nil, err
}
c = newConfigFile([]string{tmpName})
err = c.read(bytes.NewBuffer(data))
return c, err
}
// LoadFromReader accepts raw data directly from a reader
// and returns a new configuration representation.
// You must use ReloadData to reload.
// You cannot append files to a ConfigFile read this way.
func LoadFromReader(in io.Reader) (c *ConfigFile, err error) {
c = newConfigFile([]string{""})
err = c.read(in)
return c, err
}
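For orientation, a minimal usage sketch of the in-memory loaders above, exercising the features handled by the read loop earlier in this hunk: comments, sections, a backtick-quoted value and the auto-increment "-" key. GetValue is assumed from the wider goconfig API (it is referenced below but not defined in this hunk).

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/Unknwon/goconfig"
)

func main() {
	data := []byte("; global comment\n" +
		"[server]\n" +
		"host = `127.0.0.1:8000`\n" +
		"- = first\n" +
		"- = second\n")

	cfg, err := goconfig.LoadFromData(data)
	if err != nil {
		panic(err)
	}
	// Auto-increment keys become "#1", "#2", ...; GetValue(section, key)
	// is assumed to return (string, error).
	host, _ := cfg.GetValue("server", "host")
	fmt.Println(host) // 127.0.0.1:8000
}
```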
func (c *ConfigFile) loadFile(fileName string) (err error) {
f, err := os.Open(fileName)
if err != nil {
return err
}
defer f.Close()
return c.read(f)
}
// LoadConfigFile reads a file and returns a new configuration representation.
// This representation can be queried with GetValue.
func LoadConfigFile(fileName string, moreFiles ...string) (c *ConfigFile, err error) {
// Append files' name together.
fileNames := make([]string, 1, len(moreFiles)+1)
fileNames[0] = fileName
if len(moreFiles) > 0 {
fileNames = append(fileNames, moreFiles...)
}
c = newConfigFile(fileNames)
for _, name := range fileNames {
if err = c.loadFile(name); err != nil {
return nil, err
}
}
return c, nil
}
// Reload reloads configuration file in case it has changes.
func (c *ConfigFile) Reload() (err error) {
var cfg *ConfigFile
if len(c.fileNames) == 1 {
if c.fileNames[0] == "" {
return fmt.Errorf("file opened from in-memory data, use ReloadData to reload")
}
cfg, err = LoadConfigFile(c.fileNames[0])
} else {
cfg, err = LoadConfigFile(c.fileNames[0], c.fileNames[1:]...)
}
if err == nil {
*c = *cfg
}
return err
}
// ReloadData reloads configuration file from memory
func (c *ConfigFile) ReloadData(in io.Reader) (err error) {
var cfg *ConfigFile
if len(c.fileNames) != 1 {
return fmt.Errorf("Multiple files loaded, unable to mix in-memory and file data")
}
cfg, err = LoadFromReader(in)
if err == nil {
*c = *cfg
}
return err
}
// AppendFiles appends more files to ConfigFile and reload automatically.
func (c *ConfigFile) AppendFiles(files ...string) error {
if len(c.fileNames) == 1 && c.fileNames[0] == "" {
return fmt.Errorf("Cannot append file data to in-memory data")
}
c.fileNames = append(c.fileNames, files...)
return c.Reload()
}
// readError occurs when a configuration file is read with the wrong format.
type readError struct {
Reason ParseError
Content string // Line content
}
// Error implements the error interface.
func (err readError) Error() string {
switch err.Reason {
case ERR_BLANK_SECTION_NAME:
return "empty section name not allowed"
case ERR_COULD_NOT_PARSE:
return fmt.Sprintf("could not parse line: %s", string(err.Content))
}
return "invalid read error"
}

117
backend/vendor/github.com/Unknwon/goconfig/write.go generated vendored Normal file
View File

@@ -0,0 +1,117 @@
// Copyright 2013 Unknwon
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package goconfig
import (
"bytes"
"io"
"os"
"strings"
)
// Write spaces around "=" to look better.
var PrettyFormat = true
// SaveConfigData writes configuration to a writer
func SaveConfigData(c *ConfigFile, out io.Writer) (err error) {
equalSign := "="
if PrettyFormat {
equalSign = " = "
}
buf := bytes.NewBuffer(nil)
for _, section := range c.sectionList {
// Write section comments.
if len(c.GetSectionComments(section)) > 0 {
if _, err = buf.WriteString(c.GetSectionComments(section) + LineBreak); err != nil {
return err
}
}
if section != DEFAULT_SECTION {
// Write section name.
if _, err = buf.WriteString("[" + section + "]" + LineBreak); err != nil {
return err
}
}
for _, key := range c.keyList[section] {
if key != " " {
// Write key comments.
if len(c.GetKeyComments(section, key)) > 0 {
if _, err = buf.WriteString(c.GetKeyComments(section, key) + LineBreak); err != nil {
return err
}
}
keyName := key
// Check if it's auto increment.
if keyName[0] == '#' {
keyName = "-"
}
//[SWH|+]: support key names containing "=" or ":"
if strings.Contains(keyName, `=`) || strings.Contains(keyName, `:`) {
if strings.Contains(keyName, "`") {
if strings.Contains(keyName, `"`) {
keyName = `"""` + keyName + `"""`
} else {
keyName = `"` + keyName + `"`
}
} else {
keyName = "`" + keyName + "`"
}
}
value := c.data[section][key]
// In case key value contains "`" or "\"".
if strings.Contains(value, "`") {
if strings.Contains(value, `"`) {
value = `"""` + value + `"""`
} else {
value = `"` + value + `"`
}
}
// Write key and value.
if _, err = buf.WriteString(keyName + equalSign + value + LineBreak); err != nil {
return err
}
}
}
// Put a line between sections.
if _, err = buf.WriteString(LineBreak); err != nil {
return err
}
}
if _, err := buf.WriteTo(out); err != nil {
return err
}
return nil
}
// SaveConfigFile writes configuration file to local file system
func SaveConfigFile(c *ConfigFile, filename string) (err error) {
// Write configuration file by filename.
var f *os.File
if f, err = os.Create(filename); err != nil {
return err
}
if err := SaveConfigData(c, f); err != nil {
return err
}
return f.Close()
}
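A minimal sketch of writing a configuration back out with the functions above; SetValue is the setter already used by the reader earlier in this diff, and the output file name is arbitrary.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import "github.com/Unknwon/goconfig"

func main() {
	cfg, err := goconfig.LoadFromData([]byte("[app]\nname = crawlab\n"))
	if err != nil {
		panic(err)
	}
	// SetValue(section, key, value) is the setter used by the reader above.
	cfg.SetValue("app", "mode", "release")
	// With PrettyFormat left at its default (true), keys are written as "key = value".
	if err := goconfig.SaveConfigFile(cfg, "app.ini"); err != nil {
		panic(err)
	}
}
```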

41
backend/vendor/github.com/emirpasic/gods/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,41 @@
Copyright (c) 2015, Emir Pasic
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-------------------------------------------------------------------------------
AVL Tree:
Copyright (c) 2017 Benjamin Scher Purcell <benjapurcell@gmail.com>
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,35 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package containers provides core interfaces and functions for data structures.
//
// Container is the base interface for all data structures to implement.
//
// Iterators provide stateful iterators.
//
// Enumerable provides Ruby inspired (each, select, map, find, any?, etc.) container functions.
//
// Serialization provides serializers (marshalers) and deserializers (unmarshalers).
package containers
import "github.com/emirpasic/gods/utils"
// Container is base interface that all data structures implement.
type Container interface {
Empty() bool
Size() int
Clear()
Values() []interface{}
}
// GetSortedValues returns the container's elements sorted with respect to the passed comparator.
// It does not affect the ordering of elements within the container.
func GetSortedValues(container Container, comparator utils.Comparator) []interface{} {
values := container.Values()
if len(values) < 2 {
return values
}
utils.Sort(values, comparator)
return values
}

View File

@@ -0,0 +1,61 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package containers
// EnumerableWithIndex provides functions for ordered containers whose values can be fetched by an index.
type EnumerableWithIndex interface {
// Each calls the given function once for each element, passing that element's index and value.
Each(func(index int, value interface{}))
// Map invokes the given function once for each element and returns a
// container containing the values returned by the given function.
// TODO would appreciate help on how to enforce this in containers (don't want to type assert when chaining)
// Map(func(index int, value interface{}) interface{}) Container
// Select returns a new container containing all elements for which the given function returns a true value.
// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
// Select(func(index int, value interface{}) bool) Container
// Any passes each element of the container to the given function and
// returns true if the function ever returns true for any element.
Any(func(index int, value interface{}) bool) bool
// All passes each element of the container to the given function and
// returns true if the function returns true for all elements.
All(func(index int, value interface{}) bool) bool
// Find passes each element of the container to the given function and returns
// the first (index,value) for which the function is true or -1,nil otherwise
// if no element matches the criteria.
Find(func(index int, value interface{}) bool) (int, interface{})
}
// EnumerableWithKey provides functions for ordered containers whose elements are key/value pairs.
type EnumerableWithKey interface {
// Each calls the given function once for each element, passing that element's key and value.
Each(func(key interface{}, value interface{}))
// Map invokes the given function once for each element and returns a container
// containing the values returned by the given function as key/value pairs.
// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
// Map(func(key interface{}, value interface{}) (interface{}, interface{})) Container
// Select returns a new container containing all elements for which the given function returns a true value.
// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
// Select(func(key interface{}, value interface{}) bool) Container
// Any passes each element of the container to the given function and
// returns true if the function ever returns true for any element.
Any(func(key interface{}, value interface{}) bool) bool
// All passes each element of the container to the given function and
// returns true if the function returns true for all elements.
All(func(key interface{}, value interface{}) bool) bool
// Find passes each element of the container to the given function and returns
// the first (key,value) for which the function is true or nil,nil otherwise if no element
// matches the criteria.
Find(func(key interface{}, value interface{}) bool) (interface{}, interface{})
}

View File

@@ -0,0 +1,109 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package containers
// IteratorWithIndex is a stateful iterator for ordered containers whose values can be fetched by an index.
type IteratorWithIndex interface {
// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
Next() bool
// Value returns the current element's value.
// Does not modify the state of the iterator.
Value() interface{}
// Index returns the current element's index.
// Does not modify the state of the iterator.
Index() int
// Begin resets the iterator to its initial state (one-before-first)
// Call Next() to fetch the first element if any.
Begin()
// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
First() bool
}
// IteratorWithKey is a stateful iterator for ordered containers whose elements are key value pairs.
type IteratorWithKey interface {
// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's key and value can be retrieved by Key() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
Next() bool
// Value returns the current element's value.
// Does not modify the state of the iterator.
Value() interface{}
// Key returns the current element's key.
// Does not modify the state of the iterator.
Key() interface{}
// Begin resets the iterator to its initial state (one-before-first)
// Call Next() to fetch the first element if any.
Begin()
// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's key and value can be retrieved by Key() and Value().
// Modifies the state of the iterator.
First() bool
}
// ReverseIteratorWithIndex is a stateful iterator for ordered containers whose values can be fetched by an index.
//
// Essentially it is the same as IteratorWithIndex, but provides additional:
//
// Prev() function to enable traversal in reverse
//
// Last() function to move the iterator to the last element.
//
// End() function to move the iterator past the last element (one-past-the-end).
type ReverseIteratorWithIndex interface {
// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
Prev() bool
// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
End()
// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
Last() bool
IteratorWithIndex
}
// ReverseIteratorWithKey is a stateful iterator for ordered containers whose elements are key value pairs.
//
// Essentially it is the same as IteratorWithKey, but provides additional:
//
// Prev() function to enable traversal in reverse
//
// Last() function to move the iterator to the last element.
type ReverseIteratorWithKey interface {
// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's key and value can be retrieved by Key() and Value().
// Modifies the state of the iterator.
Prev() bool
// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
End()
// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's key and value can be retrieved by Key() and Value().
// Modifies the state of the iterator.
Last() bool
IteratorWithKey
}

View File

@@ -0,0 +1,17 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package containers
// JSONSerializer provides JSON serialization
type JSONSerializer interface {
// ToJSON outputs the JSON representation of containers's elements.
ToJSON() ([]byte, error)
}
// JSONDeserializer provides JSON deserialization
type JSONDeserializer interface {
// FromJSON populates containers's elements from the input JSON representation.
FromJSON([]byte) error
}

View File

@@ -0,0 +1,228 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package arraylist implements the array list.
//
// Structure is not thread safe.
//
// Reference: https://en.wikipedia.org/wiki/List_%28abstract_data_type%29
package arraylist
import (
"fmt"
"strings"
"github.com/emirpasic/gods/lists"
"github.com/emirpasic/gods/utils"
)
func assertListImplementation() {
var _ lists.List = (*List)(nil)
}
// List holds the elements in a slice
type List struct {
elements []interface{}
size int
}
const (
growthFactor = float32(2.0) // growth by 100%
shrinkFactor = float32(0.25) // shrink when size is 25% of capacity (0 means never shrink)
)
// New instantiates a new list and adds the passed values, if any, to the list
func New(values ...interface{}) *List {
list := &List{}
if len(values) > 0 {
list.Add(values...)
}
return list
}
// Add appends a value at the end of the list
func (list *List) Add(values ...interface{}) {
list.growBy(len(values))
for _, value := range values {
list.elements[list.size] = value
list.size++
}
}
// Get returns the element at index.
// Second return parameter is true if index is within bounds of the array and array is not empty, otherwise false.
func (list *List) Get(index int) (interface{}, bool) {
if !list.withinRange(index) {
return nil, false
}
return list.elements[index], true
}
// Remove removes the element at the given index from the list.
func (list *List) Remove(index int) {
if !list.withinRange(index) {
return
}
list.elements[index] = nil // cleanup reference
copy(list.elements[index:], list.elements[index+1:list.size]) // shift to the left by one (slow operation, need ways to optimize this)
list.size--
list.shrink()
}
// Contains checks if elements (one or more) are present in the set.
// All elements have to be present in the set for the method to return true.
// Time complexity is O(n^2).
// Returns true if no arguments are passed at all, i.e. set is always super-set of empty set.
func (list *List) Contains(values ...interface{}) bool {
for _, searchValue := range values {
found := false
for _, element := range list.elements {
if element == searchValue {
found = true
break
}
}
if !found {
return false
}
}
return true
}
// Values returns all elements in the list.
func (list *List) Values() []interface{} {
newElements := make([]interface{}, list.size, list.size)
copy(newElements, list.elements[:list.size])
return newElements
}
// IndexOf returns the index of the provided element, or -1 if it is not found.
func (list *List) IndexOf(value interface{}) int {
if list.size == 0 {
return -1
}
for index, element := range list.elements {
if element == value {
return index
}
}
return -1
}
// Empty returns true if list does not contain any elements.
func (list *List) Empty() bool {
return list.size == 0
}
// Size returns number of elements within the list.
func (list *List) Size() int {
return list.size
}
// Clear removes all elements from the list.
func (list *List) Clear() {
list.size = 0
list.elements = []interface{}{}
}
// Sort sorts values (in-place) using the given comparator.
func (list *List) Sort(comparator utils.Comparator) {
if len(list.elements) < 2 {
return
}
utils.Sort(list.elements[:list.size], comparator)
}
// Swap swaps the two values at the specified positions.
func (list *List) Swap(i, j int) {
if list.withinRange(i) && list.withinRange(j) {
list.elements[i], list.elements[j] = list.elements[j], list.elements[i]
}
}
// Insert inserts values at specified index position shifting the value at that position (if any) and any subsequent elements to the right.
// Does not do anything if position is negative or bigger than list's size
// Note: position equal to list's size is valid, i.e. append.
func (list *List) Insert(index int, values ...interface{}) {
if !list.withinRange(index) {
// Append
if index == list.size {
list.Add(values...)
}
return
}
l := len(values)
list.growBy(l)
list.size += l
copy(list.elements[index+l:], list.elements[index:list.size-l])
copy(list.elements[index:], values)
}
// Set the value at specified index
// Does not do anything if position is negative or bigger than list's size
// Note: position equal to list's size is valid, i.e. append.
func (list *List) Set(index int, value interface{}) {
if !list.withinRange(index) {
// Append
if index == list.size {
list.Add(value)
}
return
}
list.elements[index] = value
}
// String returns a string representation of container
func (list *List) String() string {
str := "ArrayList\n"
values := []string{}
for _, value := range list.elements[:list.size] {
values = append(values, fmt.Sprintf("%v", value))
}
str += strings.Join(values, ", ")
return str
}
// Check that the index is within bounds of the list
func (list *List) withinRange(index int) bool {
return index >= 0 && index < list.size
}
func (list *List) resize(cap int) {
newElements := make([]interface{}, cap, cap)
copy(newElements, list.elements)
list.elements = newElements
}
// Expand the array if necessary, i.e. capacity will be reached if we add n elements
func (list *List) growBy(n int) {
// When capacity is reached, grow by a factor of growthFactor and add number of elements
currentCapacity := cap(list.elements)
if list.size+n >= currentCapacity {
newCapacity := int(growthFactor * float32(currentCapacity+n))
list.resize(newCapacity)
}
}
// Shrink the array if necessary, i.e. when size is shrinkFactor percent of current capacity
func (list *List) shrink() {
if shrinkFactor == 0.0 {
return
}
// Shrink when size is at shrinkFactor * capacity
currentCapacity := cap(list.elements)
if list.size <= int(float32(currentCapacity)*shrinkFactor) {
list.resize(list.size)
}
}
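A minimal sketch of the List API defined above, assuming the comparators from the utils package that appears later in this diff.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/lists/arraylist"
	"github.com/emirpasic/gods/utils"
)

func main() {
	list := arraylist.New("c", "a", "b")
	list.Add("d")
	list.Sort(utils.StringComparator) // a, b, c, d
	if v, ok := list.Get(0); ok {
		fmt.Println(v) // a
	}
	list.Remove(3)                                    // drop "d"
	fmt.Println(list.Contains("a", "b"), list.Size()) // true 3
}
```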

View File

@@ -0,0 +1,79 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package arraylist
import "github.com/emirpasic/gods/containers"
func assertEnumerableImplementation() {
var _ containers.EnumerableWithIndex = (*List)(nil)
}
// Each calls the given function once for each element, passing that element's index and value.
func (list *List) Each(f func(index int, value interface{})) {
iterator := list.Iterator()
for iterator.Next() {
f(iterator.Index(), iterator.Value())
}
}
// Map invokes the given function once for each element and returns a
// container containing the values returned by the given function.
func (list *List) Map(f func(index int, value interface{}) interface{}) *List {
newList := &List{}
iterator := list.Iterator()
for iterator.Next() {
newList.Add(f(iterator.Index(), iterator.Value()))
}
return newList
}
// Select returns a new container containing all elements for which the given function returns a true value.
func (list *List) Select(f func(index int, value interface{}) bool) *List {
newList := &List{}
iterator := list.Iterator()
for iterator.Next() {
if f(iterator.Index(), iterator.Value()) {
newList.Add(iterator.Value())
}
}
return newList
}
// Any passes each element of the collection to the given function and
// returns true if the function ever returns true for any element.
func (list *List) Any(f func(index int, value interface{}) bool) bool {
iterator := list.Iterator()
for iterator.Next() {
if f(iterator.Index(), iterator.Value()) {
return true
}
}
return false
}
// All passes each element of the collection to the given function and
// returns true if the function returns true for all elements.
func (list *List) All(f func(index int, value interface{}) bool) bool {
iterator := list.Iterator()
for iterator.Next() {
if !f(iterator.Index(), iterator.Value()) {
return false
}
}
return true
}
// Find passes each element of the container to the given function and returns
// the first (index,value) for which the function is true or -1,nil otherwise
// if no element matches the criteria.
func (list *List) Find(f func(index int, value interface{}) bool) (int, interface{}) {
iterator := list.Iterator()
for iterator.Next() {
if f(iterator.Index(), iterator.Value()) {
return iterator.Index(), iterator.Value()
}
}
return -1, nil
}
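A minimal sketch of the enumerable helpers above (Select, Map, Find) on an ArrayList.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/lists/arraylist"
)

func main() {
	list := arraylist.New(1, 2, 3, 4)

	evens := list.Select(func(_ int, value interface{}) bool {
		return value.(int)%2 == 0
	})
	doubled := list.Map(func(_ int, value interface{}) interface{} {
		return value.(int) * 2
	})
	index, first := list.Find(func(_ int, value interface{}) bool {
		return value.(int) > 2
	})

	fmt.Println(evens.Values())   // [2 4]
	fmt.Println(doubled.Values()) // [2 4 6 8]
	fmt.Println(index, first)     // 2 3
}
```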

View File

@@ -0,0 +1,83 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package arraylist
import "github.com/emirpasic/gods/containers"
func assertIteratorImplementation() {
var _ containers.ReverseIteratorWithIndex = (*Iterator)(nil)
}
// Iterator holding the iterator's state
type Iterator struct {
list *List
index int
}
// Iterator returns a stateful iterator whose values can be fetched by an index.
func (list *List) Iterator() Iterator {
return Iterator{list: list, index: -1}
}
// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
func (iterator *Iterator) Next() bool {
if iterator.index < iterator.list.size {
iterator.index++
}
return iterator.list.withinRange(iterator.index)
}
// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Prev() bool {
if iterator.index >= 0 {
iterator.index--
}
return iterator.list.withinRange(iterator.index)
}
// Value returns the current element's value.
// Does not modify the state of the iterator.
func (iterator *Iterator) Value() interface{} {
return iterator.list.elements[iterator.index]
}
// Index returns the current element's index.
// Does not modify the state of the iterator.
func (iterator *Iterator) Index() int {
return iterator.index
}
// Begin resets the iterator to its initial state (one-before-first)
// Call Next() to fetch the first element if any.
func (iterator *Iterator) Begin() {
iterator.index = -1
}
// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
func (iterator *Iterator) End() {
iterator.index = iterator.list.size
}
// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) First() bool {
iterator.Begin()
return iterator.Next()
}
// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Last() bool {
iterator.End()
return iterator.Prev()
}
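A minimal sketch of forward and reverse traversal with the iterator above.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/lists/arraylist"
)

func main() {
	list := arraylist.New("a", "b", "c")

	it := list.Iterator()
	for it.Next() { // forward: one-before-first -> last
		fmt.Println(it.Index(), it.Value())
	}

	it.End()
	for it.Prev() { // reverse: one-past-the-end -> first
		fmt.Println(it.Index(), it.Value())
	}
}
```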

View File

@@ -0,0 +1,29 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package arraylist
import (
"encoding/json"
"github.com/emirpasic/gods/containers"
)
func assertSerializationImplementation() {
var _ containers.JSONSerializer = (*List)(nil)
var _ containers.JSONDeserializer = (*List)(nil)
}
// ToJSON outputs the JSON representation of list's elements.
func (list *List) ToJSON() ([]byte, error) {
return json.Marshal(list.elements[:list.size])
}
// FromJSON populates list's elements from the input JSON representation.
func (list *List) FromJSON(data []byte) error {
err := json.Unmarshal(data, &list.elements)
if err == nil {
list.size = len(list.elements)
}
return err
}
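A minimal sketch of the JSON round trip provided by ToJSON and FromJSON above.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/lists/arraylist"
)

func main() {
	src := arraylist.New("a", "b", "c")
	data, err := src.ToJSON()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // ["a","b","c"]

	dst := arraylist.New()
	if err := dst.FromJSON(data); err != nil {
		panic(err)
	}
	fmt.Println(dst.Size()) // 3
}
```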

View File

@@ -0,0 +1,33 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package lists provides an abstract List interface.
//
// In computer science, a list or sequence is an abstract data type that represents an ordered sequence of values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence; the (potentially) infinite analog of a list is a stream. Lists are a basic example of containers, as they contain other values. If the same value occurs multiple times, each occurrence is considered a distinct item.
//
// Reference: https://en.wikipedia.org/wiki/List_%28abstract_data_type%29
package lists
import (
"github.com/emirpasic/gods/containers"
"github.com/emirpasic/gods/utils"
)
// List interface that all lists implement
type List interface {
Get(index int) (interface{}, bool)
Remove(index int)
Add(values ...interface{})
Contains(values ...interface{}) bool
Sort(comparator utils.Comparator)
Swap(index1, index2 int)
Insert(index int, values ...interface{})
Set(index int, value interface{})
containers.Container
// Empty() bool
// Size() int
// Clear()
// Values() []interface{}
}

View File

@@ -0,0 +1,163 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package binaryheap implements a binary heap backed by array list.
//
// Comparator defines this heap as either min or max heap.
//
// Structure is not thread safe.
//
// References: http://en.wikipedia.org/wiki/Binary_heap
package binaryheap
import (
"fmt"
"github.com/emirpasic/gods/lists/arraylist"
"github.com/emirpasic/gods/trees"
"github.com/emirpasic/gods/utils"
"strings"
)
func assertTreeImplementation() {
var _ trees.Tree = (*Heap)(nil)
}
// Heap holds elements in an array-list
type Heap struct {
list *arraylist.List
Comparator utils.Comparator
}
// NewWith instantiates a new empty heap tree with the custom comparator.
func NewWith(comparator utils.Comparator) *Heap {
return &Heap{list: arraylist.New(), Comparator: comparator}
}
// NewWithIntComparator instantiates a new empty heap with the IntComparator, i.e. elements are of type int.
func NewWithIntComparator() *Heap {
return &Heap{list: arraylist.New(), Comparator: utils.IntComparator}
}
// NewWithStringComparator instantiates a new empty heap with the StringComparator, i.e. elements are of type string.
func NewWithStringComparator() *Heap {
return &Heap{list: arraylist.New(), Comparator: utils.StringComparator}
}
// Push adds a value onto the heap and bubbles it up accordingly.
func (heap *Heap) Push(values ...interface{}) {
if len(values) == 1 {
heap.list.Add(values[0])
heap.bubbleUp()
} else {
// Reference: https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
for _, value := range values {
heap.list.Add(value)
}
size := heap.list.Size()/2 + 1
for i := size; i >= 0; i-- {
heap.bubbleDownIndex(i)
}
}
}
// Pop removes top element on heap and returns it, or nil if heap is empty.
// Second return parameter is true, unless the heap was empty and there was nothing to pop.
func (heap *Heap) Pop() (value interface{}, ok bool) {
value, ok = heap.list.Get(0)
if !ok {
return
}
lastIndex := heap.list.Size() - 1
heap.list.Swap(0, lastIndex)
heap.list.Remove(lastIndex)
heap.bubbleDown()
return
}
// Peek returns top element on the heap without removing it, or nil if heap is empty.
// Second return parameter is true, unless the heap was empty and there was nothing to peek.
func (heap *Heap) Peek() (value interface{}, ok bool) {
return heap.list.Get(0)
}
// Empty returns true if heap does not contain any elements.
func (heap *Heap) Empty() bool {
return heap.list.Empty()
}
// Size returns number of elements within the heap.
func (heap *Heap) Size() int {
return heap.list.Size()
}
// Clear removes all elements from the heap.
func (heap *Heap) Clear() {
heap.list.Clear()
}
// Values returns all elements in the heap.
func (heap *Heap) Values() []interface{} {
return heap.list.Values()
}
// String returns a string representation of container
func (heap *Heap) String() string {
str := "BinaryHeap\n"
values := []string{}
for _, value := range heap.list.Values() {
values = append(values, fmt.Sprintf("%v", value))
}
str += strings.Join(values, ", ")
return str
}
// Performs the "bubble down" operation. This is to place the element that is at the root
// of the heap in its correct place so that the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleDown() {
heap.bubbleDownIndex(0)
}
// Performs the "bubble down" operation. This is to place the element that is at the index
// of the heap in its correct place so that the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleDownIndex(index int) {
size := heap.list.Size()
for leftIndex := index<<1 + 1; leftIndex < size; leftIndex = index<<1 + 1 {
rightIndex := index<<1 + 2
smallerIndex := leftIndex
leftValue, _ := heap.list.Get(leftIndex)
rightValue, _ := heap.list.Get(rightIndex)
if rightIndex < size && heap.Comparator(leftValue, rightValue) > 0 {
smallerIndex = rightIndex
}
indexValue, _ := heap.list.Get(index)
smallerValue, _ := heap.list.Get(smallerIndex)
if heap.Comparator(indexValue, smallerValue) > 0 {
heap.list.Swap(index, smallerIndex)
} else {
break
}
index = smallerIndex
}
}
// Performs the "bubble up" operation. This is to place a newly inserted
// element (i.e. last element in the list) in its correct place so that
// the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleUp() {
index := heap.list.Size() - 1
for parentIndex := (index - 1) >> 1; index > 0; parentIndex = (index - 1) >> 1 {
indexValue, _ := heap.list.Get(index)
parentValue, _ := heap.list.Get(parentIndex)
if heap.Comparator(parentValue, indexValue) <= 0 {
break
}
heap.list.Swap(index, parentIndex)
index = parentIndex
}
}
// Check that the index is within bounds of the list
func (heap *Heap) withinRange(index int) bool {
return index >= 0 && index < heap.list.Size()
}
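A minimal sketch of the heap above used as a min-heap of ints; pushing several values at once takes the bottom-up build path shown in Push.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/trees/binaryheap"
)

func main() {
	heap := binaryheap.NewWithIntComparator() // min-heap for ints
	heap.Push(3, 1, 2)                        // bulk insert builds the heap bottom-up

	if top, ok := heap.Peek(); ok {
		fmt.Println(top) // 1
	}
	for !heap.Empty() {
		v, _ := heap.Pop()
		fmt.Println(v) // 1, 2, 3
	}
}
```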

View File

@@ -0,0 +1,84 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package binaryheap
import "github.com/emirpasic/gods/containers"
func assertIteratorImplementation() {
var _ containers.ReverseIteratorWithIndex = (*Iterator)(nil)
}
// Iterator returns a stateful iterator whose values can be fetched by an index.
type Iterator struct {
heap *Heap
index int
}
// Iterator returns a stateful iterator whose values can be fetched by an index.
func (heap *Heap) Iterator() Iterator {
return Iterator{heap: heap, index: -1}
}
// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
func (iterator *Iterator) Next() bool {
if iterator.index < iterator.heap.Size() {
iterator.index++
}
return iterator.heap.withinRange(iterator.index)
}
// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Prev() bool {
if iterator.index >= 0 {
iterator.index--
}
return iterator.heap.withinRange(iterator.index)
}
// Value returns the current element's value.
// Does not modify the state of the iterator.
func (iterator *Iterator) Value() interface{} {
value, _ := iterator.heap.list.Get(iterator.index)
return value
}
// Index returns the current element's index.
// Does not modify the state of the iterator.
func (iterator *Iterator) Index() int {
return iterator.index
}
// Begin resets the iterator to its initial state (one-before-first)
// Call Next() to fetch the first element if any.
func (iterator *Iterator) Begin() {
iterator.index = -1
}
// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
func (iterator *Iterator) End() {
iterator.index = iterator.heap.Size()
}
// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) First() bool {
iterator.Begin()
return iterator.Next()
}
// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Last() bool {
iterator.End()
return iterator.Prev()
}

View File

@@ -0,0 +1,22 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package binaryheap
import "github.com/emirpasic/gods/containers"
func assertSerializationImplementation() {
var _ containers.JSONSerializer = (*Heap)(nil)
var _ containers.JSONDeserializer = (*Heap)(nil)
}
// ToJSON outputs the JSON representation of the heap.
func (heap *Heap) ToJSON() ([]byte, error) {
return heap.list.ToJSON()
}
// FromJSON populates the heap from the input JSON representation.
func (heap *Heap) FromJSON(data []byte) error {
return heap.list.FromJSON(data)
}

View File

@@ -0,0 +1,21 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package trees provides an abstract Tree interface.
//
// In computer science, a tree is a widely used abstract data type (ADT) or data structure implementing this ADT that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes.
//
// Reference: https://en.wikipedia.org/wiki/Tree_%28data_structure%29
package trees
import "github.com/emirpasic/gods/containers"
// Tree interface that all trees implement
type Tree interface {
containers.Container
// Empty() bool
// Size() int
// Clear()
// Values() []interface{}
}

View File

@@ -0,0 +1,251 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utils
import "time"
// Comparator will make type assertion (see IntComparator for example),
// which will panic if a or b are not of the asserted type.
//
// Should return a number:
// negative , if a < b
// zero , if a == b
// positive , if a > b
type Comparator func(a, b interface{}) int
// StringComparator provides a fast comparison on strings
func StringComparator(a, b interface{}) int {
s1 := a.(string)
s2 := b.(string)
min := len(s2)
if len(s1) < len(s2) {
min = len(s1)
}
diff := 0
for i := 0; i < min && diff == 0; i++ {
diff = int(s1[i]) - int(s2[i])
}
if diff == 0 {
diff = len(s1) - len(s2)
}
if diff < 0 {
return -1
}
if diff > 0 {
return 1
}
return 0
}
// IntComparator provides a basic comparison on int
func IntComparator(a, b interface{}) int {
aAsserted := a.(int)
bAsserted := b.(int)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Int8Comparator provides a basic comparison on int8
func Int8Comparator(a, b interface{}) int {
aAsserted := a.(int8)
bAsserted := b.(int8)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Int16Comparator provides a basic comparison on int16
func Int16Comparator(a, b interface{}) int {
aAsserted := a.(int16)
bAsserted := b.(int16)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Int32Comparator provides a basic comparison on int32
func Int32Comparator(a, b interface{}) int {
aAsserted := a.(int32)
bAsserted := b.(int32)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Int64Comparator provides a basic comparison on int64
func Int64Comparator(a, b interface{}) int {
aAsserted := a.(int64)
bAsserted := b.(int64)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// UIntComparator provides a basic comparison on uint
func UIntComparator(a, b interface{}) int {
aAsserted := a.(uint)
bAsserted := b.(uint)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// UInt8Comparator provides a basic comparison on uint8
func UInt8Comparator(a, b interface{}) int {
aAsserted := a.(uint8)
bAsserted := b.(uint8)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// UInt16Comparator provides a basic comparison on uint16
func UInt16Comparator(a, b interface{}) int {
aAsserted := a.(uint16)
bAsserted := b.(uint16)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// UInt32Comparator provides a basic comparison on uint32
func UInt32Comparator(a, b interface{}) int {
aAsserted := a.(uint32)
bAsserted := b.(uint32)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// UInt64Comparator provides a basic comparison on uint64
func UInt64Comparator(a, b interface{}) int {
aAsserted := a.(uint64)
bAsserted := b.(uint64)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Float32Comparator provides a basic comparison on float32
func Float32Comparator(a, b interface{}) int {
aAsserted := a.(float32)
bAsserted := b.(float32)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// Float64Comparator provides a basic comparison on float64
func Float64Comparator(a, b interface{}) int {
aAsserted := a.(float64)
bAsserted := b.(float64)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// ByteComparator provides a basic comparison on byte
func ByteComparator(a, b interface{}) int {
aAsserted := a.(byte)
bAsserted := b.(byte)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// RuneComparator provides a basic comparison on rune
func RuneComparator(a, b interface{}) int {
aAsserted := a.(rune)
bAsserted := b.(rune)
switch {
case aAsserted > bAsserted:
return 1
case aAsserted < bAsserted:
return -1
default:
return 0
}
}
// TimeComparator provides a basic comparison on time.Time
func TimeComparator(a, b interface{}) int {
aAsserted := a.(time.Time)
bAsserted := b.(time.Time)
switch {
case aAsserted.After(bAsserted):
return 1
case aAsserted.Before(bAsserted):
return -1
default:
return 0
}
}

29
backend/vendor/github.com/emirpasic/gods/utils/sort.go generated vendored Normal file
View File

@@ -0,0 +1,29 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package utils
import "sort"
// Sort sorts values (in-place) with respect to the given comparator.
//
// Uses Go's sort (hybrid of quicksort for large and then insertion sort for smaller slices).
func Sort(values []interface{}, comparator Comparator) {
sort.Sort(sortable{values, comparator})
}
type sortable struct {
values []interface{}
comparator Comparator
}
func (s sortable) Len() int {
return len(s.values)
}
func (s sortable) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
}
func (s sortable) Less(i, j int) bool {
return s.comparator(s.values[i], s.values[j]) < 0
}
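A minimal sketch of utils.Sort with a custom comparator (ordering strings by length); any func(a, b interface{}) int satisfies the Comparator type defined above.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/utils"
)

func main() {
	// A custom comparator: order strings by length, shorter first.
	byLength := func(a, b interface{}) int {
		return len(a.(string)) - len(b.(string))
	}

	values := []interface{}{"kiwi", "fig", "banana"}
	utils.Sort(values, byLength)
	fmt.Println(values) // [fig kiwi banana]
}
```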

View File

@@ -0,0 +1,47 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package utils provides common utility functions.
//
// Provided functionalities:
// - sorting
// - comparators
package utils
import (
"fmt"
"strconv"
)
// ToString converts a value to string.
func ToString(value interface{}) string {
switch value.(type) {
case string:
return value.(string)
case int8:
return strconv.FormatInt(int64(value.(int8)), 10)
case int16:
return strconv.FormatInt(int64(value.(int16)), 10)
case int32:
return strconv.FormatInt(int64(value.(int32)), 10)
case int64:
return strconv.FormatInt(int64(value.(int64)), 10)
case uint8:
return strconv.FormatUint(uint64(value.(uint8)), 10)
case uint16:
return strconv.FormatUint(uint64(value.(uint16)), 10)
case uint32:
return strconv.FormatUint(uint64(value.(uint32)), 10)
case uint64:
return strconv.FormatUint(uint64(value.(uint64)), 10)
case float32:
return strconv.FormatFloat(float64(value.(float32)), 'g', -1, 64)
case float64:
return strconv.FormatFloat(float64(value.(float64)), 'g', -1, 64)
case bool:
return strconv.FormatBool(value.(bool))
default:
return fmt.Sprintf("%+v", value)
}
}
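A minimal sketch of ToString; note that plain int and composite values fall through to the fmt-based default branch.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"

	"github.com/emirpasic/gods/utils"
)

func main() {
	fmt.Println(utils.ToString(int64(42)))   // "42" via the int64 branch
	fmt.Println(utils.ToString(3.14))        // "3.14" via the float64 branch
	fmt.Println(utils.ToString(true))        // "true"
	fmt.Println(utils.ToString([]int{1, 2})) // "[1 2]" via the fmt-based default
}
```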

View File

@@ -1,6 +1,6 @@
MIT License
The MIT License (MIT)
Copyright (c) 2018 Royeo
Copyright (c) 2014 Juan Batiz-Benet
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

120
backend/vendor/github.com/jbenet/go-context/io/ctxio.go generated vendored Normal file
View File

@@ -0,0 +1,120 @@
// Package ctxio provides io.Reader and io.Writer wrappers that
// respect context.Contexts. Use these at the interface between
// your context code and your io.
//
// WARNING: read the code. see how writes and reads will continue
// until you cancel the io. Maybe this package should provide
// versions of io.ReadCloser and io.WriteCloser that automatically
// call .Close when the context expires. But for now -- since in my
// use cases I have long-lived connections with ephemeral io wrappers
// -- this has yet to be a need.
package ctxio
import (
"io"
context "golang.org/x/net/context"
)
type ioret struct {
n int
err error
}
type Writer interface {
io.Writer
}
type ctxWriter struct {
w io.Writer
ctx context.Context
}
// NewWriter wraps a writer to make it respect given Context.
// If there is a blocking write, the returned Writer will return
// whenever the context is cancelled (the return values are n=0
// and err=ctx.Err().)
//
// Note well: this wrapper DOES NOT ACTUALLY cancel the underlying
// write-- there is no way to do that with the standard go io
// interface. So the read and write _will_ happen or hang. So, use
// this sparingly, make sure to cancel the read or write as necessary
// (e.g. closing a connection whose context is up, etc.)
//
// Furthermore, in order to protect your memory from being read
// _after_ you've cancelled the context, this io.Writer will
// first make a **copy** of the buffer.
func NewWriter(ctx context.Context, w io.Writer) *ctxWriter {
if ctx == nil {
ctx = context.Background()
}
return &ctxWriter{ctx: ctx, w: w}
}
func (w *ctxWriter) Write(buf []byte) (int, error) {
buf2 := make([]byte, len(buf))
copy(buf2, buf)
c := make(chan ioret, 1)
go func() {
n, err := w.w.Write(buf2)
c <- ioret{n, err}
close(c)
}()
select {
case r := <-c:
return r.n, r.err
case <-w.ctx.Done():
return 0, w.ctx.Err()
}
}
type Reader interface {
io.Reader
}
type ctxReader struct {
r io.Reader
ctx context.Context
}
// NewReader wraps a reader to make it respect given Context.
// If there is a blocking read, the returned Reader will return
// whenever the context is cancelled (the return values are n=0
// and err=ctx.Err().)
//
// Note well: this wrapper DOES NOT ACTUALLY cancel the underlying
// write-- there is no way to do that with the standard go io
// interface. So the read and write _will_ happen or hang. So, use
// this sparingly, make sure to cancel the read or write as necessary
// (e.g. closing a connection whose context is up, etc.)
//
// Furthermore, in order to protect your memory from being read
// _before_ you've cancelled the context, this io.Reader will
// allocate a buffer of the same size, and **copy** into the client's
// if the read succeeds in time.
func NewReader(ctx context.Context, r io.Reader) *ctxReader {
return &ctxReader{ctx: ctx, r: r}
}
func (r *ctxReader) Read(buf []byte) (int, error) {
buf2 := make([]byte, len(buf))
c := make(chan ioret, 1)
go func() {
n, err := r.r.Read(buf2)
c <- ioret{n, err}
close(c)
}()
select {
case ret := <-c:
copy(buf, buf2)
return ret.n, ret.err
case <-r.ctx.Done():
return 0, r.ctx.Err()
}
}
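A minimal sketch of the context-aware reader above: a read that would block forever returns ctx.Err() once an arbitrary timeout fires, while the underlying read keeps blocking in its own goroutine, as the package warning notes.

```go
// Hypothetical usage sketch (not part of the vendored file).
package main

import (
	"fmt"
	"io"
	"time"

	ctxio "github.com/jbenet/go-context/io"
	context "golang.org/x/net/context"
)

func main() {
	// A pipe with no writer blocks forever on Read.
	pr, _ := io.Pipe()

	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	r := ctxio.NewReader(ctx, pr)
	buf := make([]byte, 16)
	_, err := r.Read(buf)
	fmt.Println(err) // context deadline exceeded
	// The underlying pr.Read keeps blocking in its own goroutine;
	// only the caller is unblocked (see the warning at the top of the package).
}
```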

View File

@@ -0,0 +1 @@
testdata/dos-lines eol=crlf

View File

View File

@@ -0,0 +1 @@
Kevin Burke <kevin@burke.dev> Kevin Burke <kev@inburke.com>

View File

@@ -0,0 +1,14 @@
go_import_path: github.com/kevinburke/ssh_config
language: go
go:
- 1.11.x
- 1.12.x
- master
before_script:
- go get -u ./...
script:
- make race-test

View File

@@ -0,0 +1,5 @@
Eugene Terentev <eugene@terentev.net>
Kevin Burke <kevin@burke.dev>
Mark Nevill <nev@improbable.io>
Sergey Lukjanov <me@slukjanov.name>
Wayne Ashley Berry <wayneashleyberry@gmail.com>

View File

@@ -0,0 +1,49 @@
Copyright (c) 2017 Kevin Burke.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
===================
The lexer and parser borrow heavily from github.com/pelletier/go-toml. The
license for that project is copied below.
The MIT License (MIT)
Copyright (c) 2013 - 2017 Thomas Pelletier, Eric Anderton
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,30 @@
BUMP_VERSION := $(GOPATH)/bin/bump_version
STATICCHECK := $(GOPATH)/bin/staticcheck
WRITE_MAILMAP := $(GOPATH)/bin/write_mailmap
$(STATICCHECK):
go get honnef.co/go/tools/cmd/staticcheck
lint: $(STATICCHECK)
go vet ./...
$(STATICCHECK)
test: lint
@# the timeout helps guard against infinite recursion
go test -timeout=250ms ./...
race-test: lint
go test -timeout=500ms -race ./...
$(BUMP_VERSION):
go get -u github.com/kevinburke/bump_version
release: test | $(BUMP_VERSION)
$(BUMP_VERSION) minor config.go
force: ;
AUTHORS.txt: force | $(WRITE_MAILMAP)
$(WRITE_MAILMAP) > AUTHORS.txt
authors: AUTHORS.txt

View File

@@ -0,0 +1,81 @@
# ssh_config
This is a Go parser for `ssh_config` files. Importantly, this parser attempts
to preserve comments in a given file, so you can manipulate a `ssh_config` file
from a program, if your heart desires.
It's designed to be used with the excellent
[x/crypto/ssh](https://golang.org/x/crypto/ssh) package, which handles SSH
negotiation but isn't very easy to configure.
The `ssh_config` `Get()` and `GetStrict()` functions will attempt to read values
from `$HOME/.ssh/config` and fall back to `/etc/ssh/ssh_config`. The first
argument is the host name to match on, and the second argument is the key you
want to retrieve.
```go
port := ssh_config.Get("myhost", "Port")
```
You can also load a config file and read values from it.
```go
var config = `
Host *.test
Compression yes
`
cfg, err := ssh_config.Decode(strings.NewReader(config))
fmt.Println(cfg.Get("example.test", "Port"))
```
Some SSH arguments have default values - for example, the default value for
`KeyboardAuthentication` is `"yes"`. If you call Get(), and no value for the
given Host/keyword pair exists in the config, we'll return a default for the
keyword if one exists.
### Manipulating SSH config files
Here's how you can manipulate an SSH config file, and then write it back to
disk.
```go
f, _ := os.Open(filepath.Join(os.Getenv("HOME"), ".ssh", "config"))
cfg, _ := ssh_config.Decode(f)
for _, host := range cfg.Hosts {
fmt.Println("patterns:", host.Patterns)
for _, node := range host.Nodes {
// Manipulate the nodes as you see fit, or use a type switch to
// distinguish between Empty, KV, and Include nodes.
fmt.Println(node.String())
}
}
// Print the config to stdout:
fmt.Println(cfg.String())
```
## Spec compliance
Wherever possible we try to implement the specification as documented in
the `ssh_config` manpage. Unimplemented features should be present in the
[issues][issues] list.
Notably, the `Match` directive is currently unsupported.
[issues]: https://github.com/kevinburke/ssh_config/issues
## Errata
This is the second [comment-preserving configuration parser][blog] I've written, after
[an /etc/hosts parser][hostsfile]. Eventually, I will write one for every Linux
file format.
[blog]: https://kev.inburke.com/kevin/more-comment-preserving-configuration-parsers/
[hostsfile]: https://github.com/kevinburke/hostsfile
## Donating
Donations free up time to make improvements to the library, and respond to
bug reports. You can send donations via Paypal's "Send Money" feature to
kev@inburke.com. Donations are not tax deductible in the USA.

View File

@@ -0,0 +1,649 @@
// Package ssh_config provides tools for manipulating SSH config files.
//
// Importantly, this parser attempts to preserve comments in a given file, so
// you can manipulate a `ssh_config` file from a program, if your heart desires.
//
// The Get() and GetStrict() functions will attempt to read values from
// $HOME/.ssh/config, falling back to /etc/ssh/ssh_config. The first argument is
// the host name to match on ("example.com"), and the second argument is the key
// you want to retrieve ("Port"). The keywords are case insensitive.
//
// port := ssh_config.Get("myhost", "Port")
//
// You can also manipulate an SSH config file and then print it or write it back
// to disk.
//
// f, _ := os.Open(filepath.Join(os.Getenv("HOME"), ".ssh", "config"))
// cfg, _ := ssh_config.Decode(f)
// for _, host := range cfg.Hosts {
// fmt.Println("patterns:", host.Patterns)
// for _, node := range host.Nodes {
// fmt.Println(node.String())
// }
// }
//
// // Write the cfg back to disk:
// fmt.Println(cfg.String())
//
// BUG: the Match directive is currently unsupported; parsing a config with
// a Match directive will trigger an error.
package ssh_config
import (
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
osuser "os/user"
"path/filepath"
"regexp"
"runtime"
"strings"
"sync"
)
const version = "1.0"
var _ = version
type configFinder func() string
// UserSettings checks ~/.ssh and /etc/ssh for configuration files. The config
// files are parsed and cached the first time Get() or GetStrict() is called.
type UserSettings struct {
IgnoreErrors bool
systemConfig *Config
systemConfigFinder configFinder
userConfig *Config
userConfigFinder configFinder
loadConfigs sync.Once
onceErr error
}
func homedir() string {
user, err := osuser.Current()
if err == nil {
return user.HomeDir
} else {
return os.Getenv("HOME")
}
}
func userConfigFinder() string {
return filepath.Join(homedir(), ".ssh", "config")
}
// DefaultUserSettings is the default UserSettings and is used by Get and
// GetStrict. It checks both $HOME/.ssh/config and /etc/ssh/ssh_config for keys,
// and it will return parse errors (if any) instead of swallowing them.
var DefaultUserSettings = &UserSettings{
IgnoreErrors: false,
systemConfigFinder: systemConfigFinder,
userConfigFinder: userConfigFinder,
}
func systemConfigFinder() string {
return filepath.Join("/", "etc", "ssh", "ssh_config")
}
func findVal(c *Config, alias, key string) (string, error) {
if c == nil {
return "", nil
}
val, err := c.Get(alias, key)
if err != nil || val == "" {
return "", err
}
if err := validate(key, val); err != nil {
return "", err
}
return val, nil
}
// Get finds the first value for key within a declaration that matches the
// alias. Get returns the empty string if no value was found, or if IgnoreErrors
// is false and we could not parse the configuration file. Use GetStrict to
// disambiguate the latter cases.
//
// The match for key is case insensitive.
//
// Get is a wrapper around DefaultUserSettings.Get.
func Get(alias, key string) string {
return DefaultUserSettings.Get(alias, key)
}
// GetStrict finds the first value for key within a declaration that matches the
// alias. If key has a default value and no matching configuration is found, the
// default will be returned. For more information on default values and the way
// patterns are matched, see the manpage for ssh_config.
//
// error will be non-nil if and only if a user's configuration file or the
// system configuration file could not be parsed, and u.IgnoreErrors is false.
//
// GetStrict is a wrapper around DefaultUserSettings.GetStrict.
func GetStrict(alias, key string) (string, error) {
return DefaultUserSettings.GetStrict(alias, key)
}
// Get finds the first value for key within a declaration that matches the
// alias. Get returns the empty string if no value was found, or if IgnoreErrors
// is false and we could not parse the configuration file. Use GetStrict to
// disambiguate the latter cases.
//
// The match for key is case insensitive.
func (u *UserSettings) Get(alias, key string) string {
val, err := u.GetStrict(alias, key)
if err != nil {
return ""
}
return val
}
// GetStrict finds the first value for key within a declaration that matches the
// alias. If key has a default value and no matching configuration is found, the
// default will be returned. For more information on default values and the way
// patterns are matched, see the manpage for ssh_config.
//
// error will be non-nil if and only if a user's configuration file or the
// system configuration file could not be parsed, and u.IgnoreErrors is false.
func (u *UserSettings) GetStrict(alias, key string) (string, error) {
u.loadConfigs.Do(func() {
// can't parse user file, that's ok.
var filename string
if u.userConfigFinder == nil {
filename = userConfigFinder()
} else {
filename = u.userConfigFinder()
}
var err error
u.userConfig, err = parseFile(filename)
//lint:ignore S1002 I prefer it this way
if err != nil && os.IsNotExist(err) == false {
u.onceErr = err
return
}
if u.systemConfigFinder == nil {
filename = systemConfigFinder()
} else {
filename = u.systemConfigFinder()
}
u.systemConfig, err = parseFile(filename)
//lint:ignore S1002 I prefer it this way
if err != nil && os.IsNotExist(err) == false {
u.onceErr = err
return
}
})
//lint:ignore S1002 I prefer it this way
if u.onceErr != nil && u.IgnoreErrors == false {
return "", u.onceErr
}
val, err := findVal(u.userConfig, alias, key)
if err != nil || val != "" {
return val, err
}
val2, err2 := findVal(u.systemConfig, alias, key)
if err2 != nil || val2 != "" {
return val2, err2
}
return Default(key), nil
}
func parseFile(filename string) (*Config, error) {
return parseWithDepth(filename, 0)
}
func parseWithDepth(filename string, depth uint8) (*Config, error) {
b, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
return decodeBytes(b, isSystem(filename), depth)
}
func isSystem(filename string) bool {
// TODO: not sure this is the best way to detect a system repo
return strings.HasPrefix(filepath.Clean(filename), "/etc/ssh")
}
// Decode reads r into a Config, or returns an error if r could not be parsed as
// an SSH config file.
func Decode(r io.Reader) (*Config, error) {
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, err
}
return decodeBytes(b, false, 0)
}
func decodeBytes(b []byte, system bool, depth uint8) (c *Config, err error) {
defer func() {
if r := recover(); r != nil {
if _, ok := r.(runtime.Error); ok {
panic(r)
}
if e, ok := r.(error); ok && e == ErrDepthExceeded {
err = e
return
}
err = errors.New(r.(string))
}
}()
c = parseSSH(lexSSH(b), system, depth)
return c, err
}
// Config represents an SSH config file.
type Config struct {
// A list of hosts to match against. The file begins with an implicit
// "Host *" declaration matching all hosts.
Hosts []*Host
depth uint8
position Position
}
// Get finds the first value in the configuration that matches the alias and
// contains key. Get returns the empty string if no value was found, or if the
// Config contains an invalid conditional Include value.
//
// The match for key is case insensitive.
func (c *Config) Get(alias, key string) (string, error) {
lowerKey := strings.ToLower(key)
for _, host := range c.Hosts {
if !host.Matches(alias) {
continue
}
for _, node := range host.Nodes {
switch t := node.(type) {
case *Empty:
continue
case *KV:
// "keys are case insensitive" per the spec
lkey := strings.ToLower(t.Key)
if lkey == "match" {
panic("can't handle Match directives")
}
if lkey == lowerKey {
return t.Value, nil
}
case *Include:
val := t.Get(alias, key)
if val != "" {
return val, nil
}
default:
return "", fmt.Errorf("unknown Node type %v", t)
}
}
}
return "", nil
}
// String returns a string representation of the Config file.
func (c Config) String() string {
return marshal(c).String()
}
func (c Config) MarshalText() ([]byte, error) {
return marshal(c).Bytes(), nil
}
func marshal(c Config) *bytes.Buffer {
var buf bytes.Buffer
for i := range c.Hosts {
buf.WriteString(c.Hosts[i].String())
}
return &buf
}
// Pattern is a pattern in a Host declaration. Patterns are read-only values;
// create a new one with NewPattern().
type Pattern struct {
str string // Its appearance in the file, not the value that gets compiled.
regex *regexp.Regexp
not bool // True if this is a negated match
}
// String prints the string representation of the pattern.
func (p Pattern) String() string {
return p.str
}
// Copied from regexp.go with * and ? removed.
var specialBytes = []byte(`\.+()|[]{}^$`)
func special(b byte) bool {
return bytes.IndexByte(specialBytes, b) >= 0
}
// NewPattern creates a new Pattern for matching hosts. NewPattern("*") creates
// a Pattern that matches all hosts.
//
// From the manpage, a pattern consists of zero or more non-whitespace
// characters, `*' (a wildcard that matches zero or more characters), or `?' (a
// wildcard that matches exactly one character). For example, to specify a set
// of declarations for any host in the ".co.uk" set of domains, the following
// pattern could be used:
//
// Host *.co.uk
//
// The following pattern would match any host in the 192.168.0.[0-9] network range:
//
// Host 192.168.0.?
func NewPattern(s string) (*Pattern, error) {
if s == "" {
return nil, errors.New("ssh_config: empty pattern")
}
negated := false
if s[0] == '!' {
negated = true
s = s[1:]
}
var buf bytes.Buffer
buf.WriteByte('^')
for i := 0; i < len(s); i++ {
// A byte loop is correct because all metacharacters are ASCII.
switch b := s[i]; b {
case '*':
buf.WriteString(".*")
case '?':
buf.WriteString(".?")
default:
// borrowing from QuoteMeta here.
if special(b) {
buf.WriteByte('\\')
}
buf.WriteByte(b)
}
}
buf.WriteByte('$')
r, err := regexp.Compile(buf.String())
if err != nil {
return nil, err
}
return &Pattern{str: s, regex: r, not: negated}, nil
}
// Host describes a Host directive and the keywords that follow it.
type Host struct {
// A list of host patterns that should match this host.
Patterns []*Pattern
// A Node is either a key/value pair or a comment line.
Nodes []Node
// EOLComment is the comment (if any) terminating the Host line.
EOLComment string
hasEquals bool
leadingSpace int // TODO: handle spaces vs tabs here.
// The file starts with an implicit "Host *" declaration.
implicit bool
}
// Matches returns true if the Host matches for the given alias. For
// a description of the rules that provide a match, see the manpage for
// ssh_config.
func (h *Host) Matches(alias string) bool {
found := false
for i := range h.Patterns {
if h.Patterns[i].regex.MatchString(alias) {
if h.Patterns[i].not {
// Negated match. "A pattern entry may be negated by prefixing
// it with an exclamation mark (`!'). If a negated entry is
// matched, then the Host entry is ignored, regardless of
// whether any other patterns on the line match. Negated matches
// are therefore useful to provide exceptions for wildcard
// matches."
return false
}
found = true
}
}
return found
}
// String prints h as it would appear in a config file. Minor tweaks may be
// present in the whitespace in the printed file.
func (h *Host) String() string {
var buf bytes.Buffer
//lint:ignore S1002 I prefer to write it this way
if h.implicit == false {
buf.WriteString(strings.Repeat(" ", int(h.leadingSpace)))
buf.WriteString("Host")
if h.hasEquals {
buf.WriteString(" = ")
} else {
buf.WriteString(" ")
}
for i, pat := range h.Patterns {
buf.WriteString(pat.String())
if i < len(h.Patterns)-1 {
buf.WriteString(" ")
}
}
if h.EOLComment != "" {
buf.WriteString(" #")
buf.WriteString(h.EOLComment)
}
buf.WriteByte('\n')
}
for i := range h.Nodes {
buf.WriteString(h.Nodes[i].String())
buf.WriteByte('\n')
}
return buf.String()
}
// Node represents a line in a Config.
type Node interface {
Pos() Position
String() string
}
// KV is a line in the config file that contains a key, a value, and possibly
// a comment.
type KV struct {
Key string
Value string
Comment string
hasEquals bool
leadingSpace int // Space before the key. TODO handle spaces vs tabs.
position Position
}
// Pos returns k's Position.
func (k *KV) Pos() Position {
return k.position
}
// String prints k as it was parsed in the config file. There may be slight
// changes to the whitespace between values.
func (k *KV) String() string {
if k == nil {
return ""
}
equals := " "
if k.hasEquals {
equals = " = "
}
line := fmt.Sprintf("%s%s%s%s", strings.Repeat(" ", int(k.leadingSpace)), k.Key, equals, k.Value)
if k.Comment != "" {
line += " #" + k.Comment
}
return line
}
// Empty is a line in the config file that contains only whitespace or comments.
type Empty struct {
Comment string
leadingSpace int // TODO handle spaces vs tabs.
position Position
}
// Pos returns e's Position.
func (e *Empty) Pos() Position {
return e.position
}
// String prints e as it was parsed in the config file.
func (e *Empty) String() string {
if e == nil {
return ""
}
if e.Comment == "" {
return ""
}
return fmt.Sprintf("%s#%s", strings.Repeat(" ", int(e.leadingSpace)), e.Comment)
}
// Include holds the result of an Include directive, including the config files
// that have been parsed as part of that directive. At most 5 levels of Include
// statements will be parsed.
type Include struct {
// Comment is the contents of any comment at the end of the Include
// statement.
Comment string
// an include directive can include several different files, and wildcards
directives []string
mu sync.Mutex
// 1:1 mapping between matches and keys in files array; matches preserves
// ordering
matches []string
// actual filenames are listed here
files map[string]*Config
leadingSpace int
position Position
depth uint8
hasEquals bool
}
const maxRecurseDepth = 5
// ErrDepthExceeded is returned if too many Include directives are parsed.
// Usually this indicates a recursive loop (an Include directive pointing to the
// file it contains).
var ErrDepthExceeded = errors.New("ssh_config: max recurse depth exceeded")
func removeDups(arr []string) []string {
// Use map to record duplicates as we find them.
encountered := make(map[string]bool, len(arr))
result := make([]string, 0)
for v := range arr {
//lint:ignore S1002 I prefer it this way
if encountered[arr[v]] == false {
encountered[arr[v]] = true
result = append(result, arr[v])
}
}
return result
}
// NewInclude creates a new Include with a list of file globs to include.
// Configuration files are parsed greedily (i.e. as soon as this function runs).
// Any error encountered while parsing nested configuration files will be
// returned.
func NewInclude(directives []string, hasEquals bool, pos Position, comment string, system bool, depth uint8) (*Include, error) {
if depth > maxRecurseDepth {
return nil, ErrDepthExceeded
}
inc := &Include{
Comment: comment,
directives: directives,
files: make(map[string]*Config),
position: pos,
leadingSpace: pos.Col - 1,
depth: depth,
hasEquals: hasEquals,
}
// no need for inc.mu.Lock() since nothing else can access this inc
matches := make([]string, 0)
for i := range directives {
var path string
if filepath.IsAbs(directives[i]) {
path = directives[i]
} else if system {
path = filepath.Join("/etc/ssh", directives[i])
} else {
path = filepath.Join(homedir(), ".ssh", directives[i])
}
theseMatches, err := filepath.Glob(path)
if err != nil {
return nil, err
}
matches = append(matches, theseMatches...)
}
matches = removeDups(matches)
inc.matches = matches
for i := range matches {
config, err := parseWithDepth(matches[i], depth)
if err != nil {
return nil, err
}
inc.files[matches[i]] = config
}
return inc, nil
}
// Pos returns the position of the Include directive in the larger file.
func (i *Include) Pos() Position {
return i.position
}
// Get finds the first value in the Include statement matching the alias and the
// given key.
func (inc *Include) Get(alias, key string) string {
inc.mu.Lock()
defer inc.mu.Unlock()
// TODO: we search files in any order which is not correct
for i := range inc.matches {
cfg := inc.files[inc.matches[i]]
if cfg == nil {
panic("nil cfg")
}
val, err := cfg.Get(alias, key)
if err == nil && val != "" {
return val
}
}
return ""
}
// String prints out a string representation of this Include directive. Note
// included Config files are not printed as part of this representation.
func (inc *Include) String() string {
equals := " "
if inc.hasEquals {
equals = " = "
}
line := fmt.Sprintf("%sInclude%s%s", strings.Repeat(" ", int(inc.leadingSpace)), equals, strings.Join(inc.directives, " "))
if inc.Comment != "" {
line += " #" + inc.Comment
}
return line
}
var matchAll *Pattern
func init() {
var err error
matchAll, err = NewPattern("*")
if err != nil {
panic(err)
}
}
func newConfig() *Config {
return &Config{
Hosts: []*Host{
&Host{
implicit: true,
Patterns: []*Pattern{matchAll},
Nodes: make([]Node, 0),
},
},
depth: 0,
}
}

View File

@@ -0,0 +1,240 @@
package ssh_config
import (
"bytes"
)
// Define state functions
type sshLexStateFn func() sshLexStateFn
type sshLexer struct {
inputIdx int
input []rune // Textual source
buffer []rune // Runes composing the current token
tokens chan token
line int
col int
endbufferLine int
endbufferCol int
}
func (s *sshLexer) lexComment(previousState sshLexStateFn) sshLexStateFn {
return func() sshLexStateFn {
growingString := ""
for next := s.peek(); next != '\n' && next != eof; next = s.peek() {
if next == '\r' && s.follow("\r\n") {
break
}
growingString += string(next)
s.next()
}
s.emitWithValue(tokenComment, growingString)
s.skip()
return previousState
}
}
// lex the space after an equals sign in a function
func (s *sshLexer) lexRspace() sshLexStateFn {
for {
next := s.peek()
if !isSpace(next) {
break
}
s.skip()
}
return s.lexRvalue
}
func (s *sshLexer) lexEquals() sshLexStateFn {
for {
next := s.peek()
if next == '=' {
s.emit(tokenEquals)
s.skip()
return s.lexRspace
}
// TODO error handling here; newline eof etc.
if !isSpace(next) {
break
}
s.skip()
}
return s.lexRvalue
}
func (s *sshLexer) lexKey() sshLexStateFn {
growingString := ""
for r := s.peek(); isKeyChar(r); r = s.peek() {
// simplified a lot here
if isSpace(r) || r == '=' {
s.emitWithValue(tokenKey, growingString)
s.skip()
return s.lexEquals
}
growingString += string(r)
s.next()
}
s.emitWithValue(tokenKey, growingString)
return s.lexEquals
}
func (s *sshLexer) lexRvalue() sshLexStateFn {
growingString := ""
for {
next := s.peek()
switch next {
case '\r':
if s.follow("\r\n") {
s.emitWithValue(tokenString, growingString)
s.skip()
return s.lexVoid
}
case '\n':
s.emitWithValue(tokenString, growingString)
s.skip()
return s.lexVoid
case '#':
s.emitWithValue(tokenString, growingString)
s.skip()
return s.lexComment(s.lexVoid)
case eof:
s.next()
}
if next == eof {
break
}
growingString += string(next)
s.next()
}
s.emit(tokenEOF)
return nil
}
func (s *sshLexer) read() rune {
r := s.peek()
if r == '\n' {
s.endbufferLine++
s.endbufferCol = 1
} else {
s.endbufferCol++
}
s.inputIdx++
return r
}
func (s *sshLexer) next() rune {
r := s.read()
if r != eof {
s.buffer = append(s.buffer, r)
}
return r
}
func (s *sshLexer) lexVoid() sshLexStateFn {
for {
next := s.peek()
switch next {
case '#':
s.skip()
return s.lexComment(s.lexVoid)
case '\r':
fallthrough
case '\n':
s.emit(tokenEmptyLine)
s.skip()
continue
}
if isSpace(next) {
s.skip()
}
if isKeyStartChar(next) {
return s.lexKey
}
// removed isKeyStartChar and lexKey; probably will need to re-add
if next == eof {
s.next()
break
}
}
s.emit(tokenEOF)
return nil
}
func (s *sshLexer) ignore() {
s.buffer = make([]rune, 0)
s.line = s.endbufferLine
s.col = s.endbufferCol
}
func (s *sshLexer) skip() {
s.next()
s.ignore()
}
func (s *sshLexer) emit(t tokenType) {
s.emitWithValue(t, string(s.buffer))
}
func (s *sshLexer) emitWithValue(t tokenType, value string) {
tok := token{
Position: Position{s.line, s.col},
typ: t,
val: value,
}
s.tokens <- tok
s.ignore()
}
func (s *sshLexer) peek() rune {
if s.inputIdx >= len(s.input) {
return eof
}
r := s.input[s.inputIdx]
return r
}
func (s *sshLexer) follow(next string) bool {
inputIdx := s.inputIdx
for _, expectedRune := range next {
if inputIdx >= len(s.input) {
return false
}
r := s.input[inputIdx]
inputIdx++
if expectedRune != r {
return false
}
}
return true
}
func (s *sshLexer) run() {
for state := s.lexVoid; state != nil; {
state = state()
}
close(s.tokens)
}
func lexSSH(input []byte) chan token {
runes := bytes.Runes(input)
l := &sshLexer{
input: runes,
tokens: make(chan token),
line: 1,
col: 1,
endbufferLine: 1,
endbufferCol: 1,
}
go l.run()
return l.tokens
}

View File

@@ -0,0 +1,191 @@
package ssh_config
import (
"fmt"
"strings"
)
type sshParser struct {
flow chan token
config *Config
tokensBuffer []token
currentTable []string
seenTableKeys []string
// /etc/ssh parser or local parser - used to find the default for relative
// filepaths in the Include directive
system bool
depth uint8
}
type sshParserStateFn func() sshParserStateFn
// raiseErrorf formats an error message based on a token's position and panics.
func (p *sshParser) raiseErrorf(tok *token, msg string, args ...interface{}) {
// TODO this format is ugly
panic(tok.Position.String() + ": " + fmt.Sprintf(msg, args...))
}
func (p *sshParser) raiseError(tok *token, err error) {
if err == ErrDepthExceeded {
panic(err)
}
// TODO this format is ugly
panic(tok.Position.String() + ": " + err.Error())
}
func (p *sshParser) run() {
for state := p.parseStart; state != nil; {
state = state()
}
}
func (p *sshParser) peek() *token {
if len(p.tokensBuffer) != 0 {
return &(p.tokensBuffer[0])
}
tok, ok := <-p.flow
if !ok {
return nil
}
p.tokensBuffer = append(p.tokensBuffer, tok)
return &tok
}
func (p *sshParser) getToken() *token {
if len(p.tokensBuffer) != 0 {
tok := p.tokensBuffer[0]
p.tokensBuffer = p.tokensBuffer[1:]
return &tok
}
tok, ok := <-p.flow
if !ok {
return nil
}
return &tok
}
func (p *sshParser) parseStart() sshParserStateFn {
tok := p.peek()
// end of stream, parsing is finished
if tok == nil {
return nil
}
switch tok.typ {
case tokenComment, tokenEmptyLine:
return p.parseComment
case tokenKey:
return p.parseKV
case tokenEOF:
return nil
default:
p.raiseErrorf(tok, fmt.Sprintf("unexpected token %q\n", tok))
}
return nil
}
func (p *sshParser) parseKV() sshParserStateFn {
key := p.getToken()
hasEquals := false
val := p.getToken()
if val.typ == tokenEquals {
hasEquals = true
val = p.getToken()
}
comment := ""
tok := p.peek()
if tok == nil {
tok = &token{typ: tokenEOF}
}
if tok.typ == tokenComment && tok.Position.Line == val.Position.Line {
tok = p.getToken()
comment = tok.val
}
if strings.ToLower(key.val) == "match" {
// https://github.com/kevinburke/ssh_config/issues/6
p.raiseErrorf(val, "ssh_config: Match directive parsing is unsupported")
return nil
}
if strings.ToLower(key.val) == "host" {
strPatterns := strings.Split(val.val, " ")
patterns := make([]*Pattern, 0)
for i := range strPatterns {
if strPatterns[i] == "" {
continue
}
pat, err := NewPattern(strPatterns[i])
if err != nil {
p.raiseErrorf(val, "Invalid host pattern: %v", err)
return nil
}
patterns = append(patterns, pat)
}
p.config.Hosts = append(p.config.Hosts, &Host{
Patterns: patterns,
Nodes: make([]Node, 0),
EOLComment: comment,
hasEquals: hasEquals,
})
return p.parseStart
}
lastHost := p.config.Hosts[len(p.config.Hosts)-1]
if strings.ToLower(key.val) == "include" {
inc, err := NewInclude(strings.Split(val.val, " "), hasEquals, key.Position, comment, p.system, p.depth+1)
if err == ErrDepthExceeded {
p.raiseError(val, err)
return nil
}
if err != nil {
p.raiseErrorf(val, "Error parsing Include directive: %v", err)
return nil
}
lastHost.Nodes = append(lastHost.Nodes, inc)
return p.parseStart
}
kv := &KV{
Key: key.val,
Value: val.val,
Comment: comment,
hasEquals: hasEquals,
leadingSpace: key.Position.Col - 1,
position: key.Position,
}
lastHost.Nodes = append(lastHost.Nodes, kv)
return p.parseStart
}
func (p *sshParser) parseComment() sshParserStateFn {
comment := p.getToken()
lastHost := p.config.Hosts[len(p.config.Hosts)-1]
lastHost.Nodes = append(lastHost.Nodes, &Empty{
Comment: comment.val,
// account for the "#" as well
leadingSpace: comment.Position.Col - 2,
position: comment.Position,
})
return p.parseStart
}
func parseSSH(flow chan token, system bool, depth uint8) *Config {
// Ensure we consume tokens to completion even if parser exits early
defer func() {
for range flow {
}
}()
result := newConfig()
result.position = Position{1, 1}
parser := &sshParser{
flow: flow,
config: result,
tokensBuffer: make([]token, 0),
currentTable: make([]string, 0),
seenTableKeys: make([]string, 0),
system: system,
depth: depth,
}
parser.run()
return result
}

View File

@@ -0,0 +1,25 @@
package ssh_config
import "fmt"
// Position of a document element within an SSH document.
//
// Line and Col are both 1-indexed positions for the element's line number and
// column number, respectively. Values of zero or less will cause Invalid() to
// return true.
type Position struct {
Line int // line within the document
Col int // column within the line
}
// String representation of the position.
// Displays 1-indexed line and column numbers.
func (p Position) String() string {
return fmt.Sprintf("(%d, %d)", p.Line, p.Col)
}
// Invalid reports whether the position is invalid (i.e. has zero or negative
// values).
func (p Position) Invalid() bool {
return p.Line <= 0 || p.Col <= 0
}

View File

@@ -0,0 +1,49 @@
package ssh_config
import "fmt"
type token struct {
Position
typ tokenType
val string
}
func (t token) String() string {
switch t.typ {
case tokenEOF:
return "EOF"
}
return fmt.Sprintf("%q", t.val)
}
type tokenType int
const (
eof = -(iota + 1)
)
const (
tokenError tokenType = iota
tokenEOF
tokenEmptyLine
tokenComment
tokenKey
tokenEquals
tokenString
)
func isSpace(r rune) bool {
return r == ' ' || r == '\t'
}
func isKeyStartChar(r rune) bool {
return !(isSpace(r) || r == '\r' || r == '\n' || r == eof)
}
// I'm not sure that this is correct
func isKeyChar(r rune) bool {
// Keys start with the first character that isn't whitespace or [ and end
// with the last non-whitespace character before the equals sign. Keys
// cannot contain a # character."
return !(r == '\r' || r == '\n' || r == eof || r == '=')
}

View File

@@ -0,0 +1,162 @@
package ssh_config
import (
"fmt"
"strconv"
"strings"
)
// Default returns the default value for the given keyword, for example "22" if
// the keyword is "Port". Default returns the empty string if the keyword has no
// default, or if the keyword is unknown. Keyword matching is case-insensitive.
//
// Default values are provided by OpenSSH_7.4p1 on a Mac.
func Default(keyword string) string {
return defaults[strings.ToLower(keyword)]
}
// Arguments where the value must be "yes" or "no" and *only* yes or no.
var yesnos = map[string]bool{
strings.ToLower("BatchMode"): true,
strings.ToLower("CanonicalizeFallbackLocal"): true,
strings.ToLower("ChallengeResponseAuthentication"): true,
strings.ToLower("CheckHostIP"): true,
strings.ToLower("ClearAllForwardings"): true,
strings.ToLower("Compression"): true,
strings.ToLower("EnableSSHKeysign"): true,
strings.ToLower("ExitOnForwardFailure"): true,
strings.ToLower("ForwardAgent"): true,
strings.ToLower("ForwardX11"): true,
strings.ToLower("ForwardX11Trusted"): true,
strings.ToLower("GatewayPorts"): true,
strings.ToLower("GSSAPIAuthentication"): true,
strings.ToLower("GSSAPIDelegateCredentials"): true,
strings.ToLower("HostbasedAuthentication"): true,
strings.ToLower("IdentitiesOnly"): true,
strings.ToLower("KbdInteractiveAuthentication"): true,
strings.ToLower("NoHostAuthenticationForLocalhost"): true,
strings.ToLower("PasswordAuthentication"): true,
strings.ToLower("PermitLocalCommand"): true,
strings.ToLower("PubkeyAuthentication"): true,
strings.ToLower("RhostsRSAAuthentication"): true,
strings.ToLower("RSAAuthentication"): true,
strings.ToLower("StreamLocalBindUnlink"): true,
strings.ToLower("TCPKeepAlive"): true,
strings.ToLower("UseKeychain"): true,
strings.ToLower("UsePrivilegedPort"): true,
strings.ToLower("VisualHostKey"): true,
}
var uints = map[string]bool{
strings.ToLower("CanonicalizeMaxDots"): true,
strings.ToLower("CompressionLevel"): true, // 1 to 9
strings.ToLower("ConnectionAttempts"): true,
strings.ToLower("ConnectTimeout"): true,
strings.ToLower("NumberOfPasswordPrompts"): true,
strings.ToLower("Port"): true,
strings.ToLower("ServerAliveCountMax"): true,
strings.ToLower("ServerAliveInterval"): true,
}
func mustBeYesOrNo(lkey string) bool {
return yesnos[lkey]
}
func mustBeUint(lkey string) bool {
return uints[lkey]
}
func validate(key, val string) error {
lkey := strings.ToLower(key)
if mustBeYesOrNo(lkey) && (val != "yes" && val != "no") {
return fmt.Errorf("ssh_config: value for key %q must be 'yes' or 'no', got %q", key, val)
}
if mustBeUint(lkey) {
_, err := strconv.ParseUint(val, 10, 64)
if err != nil {
return fmt.Errorf("ssh_config: %v", err)
}
}
return nil
}
var defaults = map[string]string{
strings.ToLower("AddKeysToAgent"): "no",
strings.ToLower("AddressFamily"): "any",
strings.ToLower("BatchMode"): "no",
strings.ToLower("CanonicalizeFallbackLocal"): "yes",
strings.ToLower("CanonicalizeHostname"): "no",
strings.ToLower("CanonicalizeMaxDots"): "1",
strings.ToLower("ChallengeResponseAuthentication"): "yes",
strings.ToLower("CheckHostIP"): "yes",
// TODO is this still the correct cipher
strings.ToLower("Cipher"): "3des",
strings.ToLower("Ciphers"): "chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc",
strings.ToLower("ClearAllForwardings"): "no",
strings.ToLower("Compression"): "no",
strings.ToLower("CompressionLevel"): "6",
strings.ToLower("ConnectionAttempts"): "1",
strings.ToLower("ControlMaster"): "no",
strings.ToLower("EnableSSHKeysign"): "no",
strings.ToLower("EscapeChar"): "~",
strings.ToLower("ExitOnForwardFailure"): "no",
strings.ToLower("FingerprintHash"): "sha256",
strings.ToLower("ForwardAgent"): "no",
strings.ToLower("ForwardX11"): "no",
strings.ToLower("ForwardX11Timeout"): "20m",
strings.ToLower("ForwardX11Trusted"): "no",
strings.ToLower("GatewayPorts"): "no",
strings.ToLower("GlobalKnownHostsFile"): "/etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts2",
strings.ToLower("GSSAPIAuthentication"): "no",
strings.ToLower("GSSAPIDelegateCredentials"): "no",
strings.ToLower("HashKnownHosts"): "no",
strings.ToLower("HostbasedAuthentication"): "no",
strings.ToLower("HostbasedKeyTypes"): "ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-rsa",
strings.ToLower("HostKeyAlgorithms"): "ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-rsa",
// HostName has a dynamic default (the value passed at the command line).
strings.ToLower("IdentitiesOnly"): "no",
strings.ToLower("IdentityFile"): "~/.ssh/identity",
// IPQoS has a dynamic default based on interactive or non-interactive
// sessions.
strings.ToLower("KbdInteractiveAuthentication"): "yes",
strings.ToLower("KexAlgorithms"): "curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1",
strings.ToLower("LogLevel"): "INFO",
strings.ToLower("MACs"): "umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1",
strings.ToLower("NoHostAuthenticationForLocalhost"): "no",
strings.ToLower("NumberOfPasswordPrompts"): "3",
strings.ToLower("PasswordAuthentication"): "yes",
strings.ToLower("PermitLocalCommand"): "no",
strings.ToLower("Port"): "22",
strings.ToLower("PreferredAuthentications"): "gssapi-with-mic,hostbased,publickey,keyboard-interactive,password",
strings.ToLower("Protocol"): "2",
strings.ToLower("ProxyUseFdpass"): "no",
strings.ToLower("PubkeyAcceptedKeyTypes"): "ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-rsa",
strings.ToLower("PubkeyAuthentication"): "yes",
strings.ToLower("RekeyLimit"): "default none",
strings.ToLower("RhostsRSAAuthentication"): "no",
strings.ToLower("RSAAuthentication"): "yes",
strings.ToLower("ServerAliveCountMax"): "3",
strings.ToLower("ServerAliveInterval"): "0",
strings.ToLower("StreamLocalBindMask"): "0177",
strings.ToLower("StreamLocalBindUnlink"): "no",
strings.ToLower("StrictHostKeyChecking"): "ask",
strings.ToLower("TCPKeepAlive"): "yes",
strings.ToLower("Tunnel"): "no",
strings.ToLower("TunnelDevice"): "any:any",
strings.ToLower("UpdateHostKeys"): "no",
strings.ToLower("UseKeychain"): "no",
strings.ToLower("UsePrivilegedPort"): "no",
strings.ToLower("UserKnownHostsFile"): "~/.ssh/known_hosts ~/.ssh/known_hosts2",
strings.ToLower("VerifyHostKeyDNS"): "no",
strings.ToLower("VisualHostKey"): "no",
strings.ToLower("XAuthLocation"): "/usr/X11R6/bin/xauth",
}

21
backend/vendor/github.com/mitchellh/go-homedir/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 Mitchell Hashimoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -0,0 +1,14 @@
# go-homedir
This is a Go library for detecting the user's home directory without
the use of cgo, so the library can be used in cross-compilation environments.
Usage is incredibly simple, just call `homedir.Dir()` to get the home directory
for a user, and `homedir.Expand()` to expand the `~` in a path to the home
directory.
**Why not just use `os/user`?** The built-in `os/user` package requires
cgo on Darwin systems. This means that any Go code that uses that package
cannot cross compile. But 99% of the time the use for `os/user` is just to
retrieve the home directory, which we can do for the current user without
cgo. This library does that, enabling cross-compilation.
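A minimal usage sketch (error handling kept simple for brevity):
```go
package main

import (
	"fmt"
	"log"

	homedir "github.com/mitchellh/go-homedir"
)

func main() {
	// Dir detects the current user's home directory without cgo.
	home, err := homedir.Dir()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("home:", home)

	// Expand replaces a leading "~" with the home directory.
	cfg, err := homedir.Expand("~/.ssh/config")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("config:", cfg)
}
```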

View File

@@ -0,0 +1 @@
module github.com/mitchellh/go-homedir

View File

@@ -0,0 +1,167 @@
package homedir
import (
"bytes"
"errors"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
)
// DisableCache will disable caching of the home directory. Caching is enabled
// by default.
var DisableCache bool
var homedirCache string
var cacheLock sync.RWMutex
// Dir returns the home directory for the executing user.
//
// This uses an OS-specific method for discovering the home directory.
// An error is returned if a home directory cannot be detected.
func Dir() (string, error) {
if !DisableCache {
cacheLock.RLock()
cached := homedirCache
cacheLock.RUnlock()
if cached != "" {
return cached, nil
}
}
cacheLock.Lock()
defer cacheLock.Unlock()
var result string
var err error
if runtime.GOOS == "windows" {
result, err = dirWindows()
} else {
// Unix-like system, so just assume Unix
result, err = dirUnix()
}
if err != nil {
return "", err
}
homedirCache = result
return result, nil
}
// Expand expands the path to include the home directory if the path
// is prefixed with `~`. If it isn't prefixed with `~`, the path is
// returned as-is.
func Expand(path string) (string, error) {
if len(path) == 0 {
return path, nil
}
if path[0] != '~' {
return path, nil
}
if len(path) > 1 && path[1] != '/' && path[1] != '\\' {
return "", errors.New("cannot expand user-specific home dir")
}
dir, err := Dir()
if err != nil {
return "", err
}
return filepath.Join(dir, path[1:]), nil
}
// Reset clears the cache, forcing the next call to Dir to re-detect
// the home directory. This generally never has to be called, but can be
// useful in tests if you're modifying the home directory via the HOME
// env var or something.
func Reset() {
cacheLock.Lock()
defer cacheLock.Unlock()
homedirCache = ""
}
func dirUnix() (string, error) {
homeEnv := "HOME"
if runtime.GOOS == "plan9" {
// On plan9, env vars are lowercase.
homeEnv = "home"
}
// First prefer the HOME environmental variable
if home := os.Getenv(homeEnv); home != "" {
return home, nil
}
var stdout bytes.Buffer
// If that fails, try OS specific commands
if runtime.GOOS == "darwin" {
cmd := exec.Command("sh", "-c", `dscl -q . -read /Users/"$(whoami)" NFSHomeDirectory | sed 's/^[^ ]*: //'`)
cmd.Stdout = &stdout
if err := cmd.Run(); err == nil {
result := strings.TrimSpace(stdout.String())
if result != "" {
return result, nil
}
}
} else {
cmd := exec.Command("getent", "passwd", strconv.Itoa(os.Getuid()))
cmd.Stdout = &stdout
if err := cmd.Run(); err != nil {
// If the error is ErrNotFound, we ignore it. Otherwise, return it.
if err != exec.ErrNotFound {
return "", err
}
} else {
if passwd := strings.TrimSpace(stdout.String()); passwd != "" {
// username:password:uid:gid:gecos:home:shell
passwdParts := strings.SplitN(passwd, ":", 7)
if len(passwdParts) > 5 {
return passwdParts[5], nil
}
}
}
}
// If all else fails, try the shell
stdout.Reset()
cmd := exec.Command("sh", "-c", "cd && pwd")
cmd.Stdout = &stdout
if err := cmd.Run(); err != nil {
return "", err
}
result := strings.TrimSpace(stdout.String())
if result == "" {
return "", errors.New("blank output when reading home directory")
}
return result, nil
}
func dirWindows() (string, error) {
// First prefer the HOME environmental variable
if home := os.Getenv("HOME"); home != "" {
return home, nil
}
// Prefer standard environment variable USERPROFILE
if home := os.Getenv("USERPROFILE"); home != "" {
return home, nil
}
drive := os.Getenv("HOMEDRIVE")
path := os.Getenv("HOMEPATH")
home := drive + path
if drive == "" || path == "" {
return "", errors.New("HOMEDRIVE, HOMEPATH, or USERPROFILE are blank")
}
return home, nil
}

View File

@@ -1,110 +0,0 @@
# Dingrobot
A Golang API for DingTalk (钉钉) custom group robots.
Supported message types:
- Text
- Link
- Markdown
- ActionCard (whole-card jump)
## Installation
Install:
```sh
go get -u github.com/royeo/dingrobot
```
Import:
```go
import "github.com/royeo/dingrobot"
```
## Quick start
Send a text message
```go
func main() {
// You should replace the webhook here with your own.
webhook := "https://oapi.dingtalk.com/robot/send?access_token=xxx"
robot := dingrobot.NewRobot(webhook)
content := "我就是我, @1825718XXXX 是不一样的烟火"
atMobiles := []string{"1825718XXXX"}
isAtAll := false
err := robot.SendText(content, atMobiles, isAtAll)
if err != nil {
log.Fatal(err)
}
}
```
Send a link message
```go
func main() {
// You should replace the webhook here with your own.
webhook := "https://oapi.dingtalk.com/robot/send?access_token=xxx"
robot := dingrobot.NewRobot(webhook)
title := "自定义机器人协议"
text := "群机器人是钉钉群的高级扩展功能。群机器人可以将第三方服务的信息聚合到群聊中实现自动化的信息同步。例如通过聚合GitHubGitLab等源码管理服务实现源码更新同步通过聚合TrelloJIRA等项目协调服务实现项目信息同步。不仅如此群机器人支持Webhook协议的自定义接入支持更多可能性例如你可将运维报警提醒通过自定义机器人聚合到钉钉群。"
messageUrl := "https://open-doc.dingtalk.com/docs/doc.htm?spm=a219a.7629140.0.0.Rqyvqo&treeId=257&articleId=105735&docType=1"
picUrl := ""
err := robot.SendLink(title, text, messageUrl, picUrl)
if err != nil {
log.Fatal(err)
}
}
```
Send a markdown message
```go
func main() {
// You should replace the webhook here with your own.
webhook := "https://oapi.dingtalk.com/robot/send?access_token=xxx"
robot := dingrobot.NewRobot(webhook)
title := "杭州天气"
text := "#### 杭州天气  \n > 9度@1825718XXXX 西北风1级空气良89相对温度73%\n\n > ![screenshot](http://i01.lw.aliimg.com/media/lALPBbCc1ZhJGIvNAkzNBLA_1200_588.png)\n  > ###### 10点20分发布 [天气](http://www.thinkpage.cn/) "
atMobiles := []string{"1825718XXXX"}
isAtAll := false
err := robot.SendMarkdown(title, text, atMobiles, isAtAll)
if err != nil {
log.Fatal(err)
}
}
```
Send a whole-card-jump ActionCard message
```go
func main() {
// You should replace the webhook here with your own.
webhook := "https://oapi.dingtalk.com/robot/send?access_token=xxx"
robot := dingrobot.NewRobot(webhook)
title := "乔布斯 20 年前想打造一间苹果咖啡厅,而它正是 Apple Store 的前身"
text := "![screenshot](@lADOpwk3K80C0M0FoA) \n #### 乔布斯 20 年前想打造的苹果咖啡厅 \n\n Apple Store 的设计正从原来满满的科技感走向生活化,而其生活化的走向其实可以追溯到 20 年前苹果一个建立咖啡馆的计划"
singleTitle := "阅读全文"
singleURL := "https://www.dingtalk.com/"
btnOrientation := "0"
hideAvatar := "0"
err := robot.SendActionCard(title, text, singleTitle, singleURL, btnOrientation, hideAvatar)
if err != nil {
log.Fatal(err)
}
}
```
## License
MIT Copyright (c) 2018 Royeo

View File

@@ -1,118 +0,0 @@
package dingrobot
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
)
// Roboter is the interface implemented by Robot that can send multiple types of messages.
type Roboter interface {
SendText(content string, atMobiles []string, isAtAll bool) error
SendLink(title, text, messageURL, picURL string) error
SendMarkdown(title, text string, atMobiles []string, isAtAll bool) error
SendActionCard(title, text, singleTitle, singleURL, btnOrientation, hideAvatar string) error
}
// Robot represents a dingtalk custom robot that can send messages to groups.
type Robot struct {
Webhook string
}
// NewRobot returns a roboter that can send messages.
func NewRobot(webhook string) Roboter {
return Robot{Webhook: webhook}
}
// SendText sends a text type message.
func (r Robot) SendText(content string, atMobiles []string, isAtAll bool) error {
return r.send(&textMessage{
MsgType: msgTypeText,
Text: textParams{
Content: content,
},
At: atParams{
AtMobiles: atMobiles,
IsAtAll: isAtAll,
},
})
}
// SendLink sends a link type message.
func (r Robot) SendLink(title, text, messageURL, picURL string) error {
return r.send(&linkMessage{
MsgType: msgTypeLink,
Link: linkParams{
Title: title,
Text: text,
MessageURL: messageURL,
PicURL: picURL,
},
})
}
// SendMarkdown sends a markdown type message.
func (r Robot) SendMarkdown(title, text string, atMobiles []string, isAtAll bool) error {
return r.send(&markdownMessage{
MsgType: msgTypeMarkdown,
Markdown: markdownParams{
Title: title,
Text: text,
},
At: atParams{
AtMobiles: atMobiles,
IsAtAll: isAtAll,
},
})
}
// SendActionCard sends an action card type message.
func (r Robot) SendActionCard(title, text, singleTitle, singleURL, btnOrientation, hideAvatar string) error {
return r.send(&actionCardMessage{
MsgType: msgTypeActionCard,
ActionCard: actionCardParams{
Title: title,
Text: text,
SingleTitle: singleTitle,
SingleURL: singleURL,
BtnOrientation: btnOrientation,
HideAvatar: hideAvatar,
},
})
}
type dingResponse struct {
Errcode int
Errmsg string
}
func (r Robot) send(msg interface{}) error {
m, err := json.Marshal(msg)
if err != nil {
return err
}
resp, err := http.Post(r.Webhook, "application/json", bytes.NewReader(m))
if err != nil {
return err
}
defer resp.Body.Close()
data, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
var dr dingResponse
err = json.Unmarshal(data, &dr)
if err != nil {
return err
}
if dr.Errcode != 0 {
return fmt.Errorf("dingrobot send failed: %v", dr.Errmsg)
}
return nil
}

View File

@@ -1,60 +0,0 @@
package dingrobot
const (
msgTypeText = "text"
msgTypeLink = "link"
msgTypeMarkdown = "markdown"
msgTypeActionCard = "actionCard"
)
type textMessage struct {
MsgType string `json:"msgtype"`
Text textParams `json:"text"`
At atParams `json:"at"`
}
type textParams struct {
Content string `json:"content"`
}
type atParams struct {
AtMobiles []string `json:"atMobiles,omitempty"`
IsAtAll bool `json:"isAtAll,omitempty"`
}
type linkMessage struct {
MsgType string `json:"msgtype"`
Link linkParams `json:"link"`
}
type linkParams struct {
Title string `json:"title"`
Text string `json:"text"`
MessageURL string `json:"messageUrl"`
PicURL string `json:"picUrl,omitempty"`
}
type markdownMessage struct {
MsgType string `json:"msgtype"`
Markdown markdownParams `json:"markdown"`
At atParams `json:"at"`
}
type markdownParams struct {
Title string `json:"title"`
Text string `json:"text"`
}
type actionCardMessage struct {
MsgType string `json:"msgtype"`
ActionCard actionCardParams `json:"actionCard"`
}
type actionCardParams struct {
Title string `json:"title"`
Text string `json:"text"`
SingleTitle string `json:"singleTitle"`
SingleURL string `json:"singleURL"`
BtnOrientation string `json:"btnOrientation,omitempty"`
HideAvatar string `json:"hideAvatar,omitempty"`
}

25
backend/vendor/github.com/sergi/go-diff/AUTHORS generated vendored Normal file
View File

@@ -0,0 +1,25 @@
# This is the official list of go-diff authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Danny Yoo <dannyyoo@google.com>
James Kolb <jkolb@google.com>
Jonathan Amsterdam <jba@google.com>
Markus Zimmermann <markus.zimmermann@nethead.at> <markus.zimmermann@symflower.com> <zimmski@gmail.com>
Matt Kovars <akaskik@gmail.com>
Örjan Persson <orjan@spotify.com>
Osman Masood <oamasood@gmail.com>
Robert Carlsen <rwcarlsen@gmail.com>
Rory Flynn <roryflynn@users.noreply.github.com>
Sergi Mansilla <sergi.mansilla@gmail.com>
Shatrugna Sadhu <ssadhu@apcera.com>
Shawn Smith <shawnpsmith@gmail.com>
Stas Maksimov <maksimov@gmail.com>
Tor Arvid Lund <torarvid@gmail.com>
Zac Bergquist <zbergquist99@gmail.com>

32
backend/vendor/github.com/sergi/go-diff/CONTRIBUTORS generated vendored Normal file
View File

@@ -0,0 +1,32 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the go-diff
# repository.
#
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, ACME Inc. employees would be listed here
# but not in AUTHORS, because ACME Inc. would hold the copyright.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file.
#
# Names should be added to this file like so:
# Name <email address>
#
# Please keep the list sorted.
Danny Yoo <dannyyoo@google.com>
James Kolb <jkolb@google.com>
Jonathan Amsterdam <jba@google.com>
Markus Zimmermann <markus.zimmermann@nethead.at> <markus.zimmermann@symflower.com> <zimmski@gmail.com>
Matt Kovars <akaskik@gmail.com>
Örjan Persson <orjan@spotify.com>
Osman Masood <oamasood@gmail.com>
Robert Carlsen <rwcarlsen@gmail.com>
Rory Flynn <roryflynn@users.noreply.github.com>
Sergi Mansilla <sergi.mansilla@gmail.com>
Shatrugna Sadhu <ssadhu@apcera.com>
Shawn Smith <shawnpsmith@gmail.com>
Stas Maksimov <maksimov@gmail.com>
Tor Arvid Lund <torarvid@gmail.com>
Zac Bergquist <zbergquist99@gmail.com>

20
backend/vendor/github.com/sergi/go-diff/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,20 @@
Copyright (c) 2012-2016 The go-diff Authors. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,46 @@
// Copyright (c) 2012-2016 The go-diff authors. All rights reserved.
// https://github.com/sergi/go-diff
// See the included LICENSE file for license details.
//
// go-diff is a Go implementation of Google's Diff, Match, and Patch library
// Original library is Copyright (c) 2006 Google Inc.
// http://code.google.com/p/google-diff-match-patch/
// Package diffmatchpatch offers robust algorithms to perform the operations required for synchronizing plain text.
package diffmatchpatch
import (
"time"
)
// DiffMatchPatch holds the configuration for diff-match-patch operations.
type DiffMatchPatch struct {
// Number of seconds to map a diff before giving up (0 for infinity).
DiffTimeout time.Duration
// Cost of an empty edit operation in terms of edit characters.
DiffEditCost int
// How far to search for a match (0 = exact location, 1000+ = broad match). A match this many characters away from the expected location will add 1.0 to the score (0.0 is a perfect match).
MatchDistance int
// When deleting a large block of text (over ~64 characters), how close do the contents have to be to match the expected contents. (0.0 = perfection, 1.0 = very loose). Note that MatchThreshold controls how closely the end points of a delete need to match.
PatchDeleteThreshold float64
// Chunk size for context length.
PatchMargin int
// The number of bits in an int.
MatchMaxBits int
// At what point is no match declared (0.0 = perfection, 1.0 = very loose).
MatchThreshold float64
}
// New creates a new DiffMatchPatch object with default parameters.
func New() *DiffMatchPatch {
// Defaults.
return &DiffMatchPatch{
DiffTimeout: time.Second,
DiffEditCost: 4,
MatchThreshold: 0.5,
MatchDistance: 1000,
PatchDeleteThreshold: 0.5,
PatchMargin: 4,
MatchMaxBits: 32,
}
}
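As a rough usage sketch of this configuration (illustrative only; `DiffMain` lives in diff.go, whose diff is suppressed above, and is the same entry point patch.go calls further below):
```go
// Compute a diff with the defaults returned by New, tweaking one knob first.
dmp := diffmatchpatch.New()
dmp.DiffTimeout = 5 * time.Second
diffs := dmp.DiffMain("good dog", "bad dog", false)
fmt.Println(diffs)
```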

View File

@@ -0,0 +1,160 @@
// Copyright (c) 2012-2016 The go-diff authors. All rights reserved.
// https://github.com/sergi/go-diff
// See the included LICENSE file for license details.
//
// go-diff is a Go implementation of Google's Diff, Match, and Patch library
// Original library is Copyright (c) 2006 Google Inc.
// http://code.google.com/p/google-diff-match-patch/
package diffmatchpatch
import (
"math"
)
// MatchMain locates the best instance of 'pattern' in 'text' near 'loc'.
// Returns -1 if no match found.
func (dmp *DiffMatchPatch) MatchMain(text, pattern string, loc int) int {
// Check for null inputs not needed since null can't be passed in C#.
loc = int(math.Max(0, math.Min(float64(loc), float64(len(text)))))
if text == pattern {
// Shortcut (potentially not guaranteed by the algorithm)
return 0
} else if len(text) == 0 {
// Nothing to match.
return -1
} else if loc+len(pattern) <= len(text) && text[loc:loc+len(pattern)] == pattern {
// Perfect match at the perfect spot! (Includes case of null pattern)
return loc
}
// Do a fuzzy compare.
return dmp.MatchBitap(text, pattern, loc)
}
// MatchBitap locates the best instance of 'pattern' in 'text' near 'loc' using the Bitap algorithm.
// Returns -1 if no match was found.
func (dmp *DiffMatchPatch) MatchBitap(text, pattern string, loc int) int {
// Initialise the alphabet.
s := dmp.MatchAlphabet(pattern)
// Highest score beyond which we give up.
scoreThreshold := dmp.MatchThreshold
// Is there a nearby exact match? (speedup)
bestLoc := indexOf(text, pattern, loc)
if bestLoc != -1 {
scoreThreshold = math.Min(dmp.matchBitapScore(0, bestLoc, loc,
pattern), scoreThreshold)
// What about in the other direction? (speedup)
bestLoc = lastIndexOf(text, pattern, loc+len(pattern))
if bestLoc != -1 {
scoreThreshold = math.Min(dmp.matchBitapScore(0, bestLoc, loc,
pattern), scoreThreshold)
}
}
// Initialise the bit arrays.
matchmask := 1 << uint((len(pattern) - 1))
bestLoc = -1
var binMin, binMid int
binMax := len(pattern) + len(text)
lastRd := []int{}
for d := 0; d < len(pattern); d++ {
// Scan for the best match; each iteration allows for one more error. Run a binary search to determine how far from 'loc' we can stray at this error level.
binMin = 0
binMid = binMax
for binMin < binMid {
if dmp.matchBitapScore(d, loc+binMid, loc, pattern) <= scoreThreshold {
binMin = binMid
} else {
binMax = binMid
}
binMid = (binMax-binMin)/2 + binMin
}
// Use the result from this iteration as the maximum for the next.
binMax = binMid
start := int(math.Max(1, float64(loc-binMid+1)))
finish := int(math.Min(float64(loc+binMid), float64(len(text))) + float64(len(pattern)))
rd := make([]int, finish+2)
rd[finish+1] = (1 << uint(d)) - 1
for j := finish; j >= start; j-- {
var charMatch int
if len(text) <= j-1 {
// Out of range.
charMatch = 0
} else if _, ok := s[text[j-1]]; !ok {
charMatch = 0
} else {
charMatch = s[text[j-1]]
}
if d == 0 {
// First pass: exact match.
rd[j] = ((rd[j+1] << 1) | 1) & charMatch
} else {
// Subsequent passes: fuzzy match.
rd[j] = ((rd[j+1]<<1)|1)&charMatch | (((lastRd[j+1] | lastRd[j]) << 1) | 1) | lastRd[j+1]
}
if (rd[j] & matchmask) != 0 {
score := dmp.matchBitapScore(d, j-1, loc, pattern)
// This match will almost certainly be better than any existing match. But check anyway.
if score <= scoreThreshold {
// Told you so.
scoreThreshold = score
bestLoc = j - 1
if bestLoc > loc {
// When passing loc, don't exceed our current distance from loc.
start = int(math.Max(1, float64(2*loc-bestLoc)))
} else {
// Already passed loc, downhill from here on in.
break
}
}
}
}
if dmp.matchBitapScore(d+1, loc, loc, pattern) > scoreThreshold {
// No hope for a (better) match at greater error levels.
break
}
lastRd = rd
}
return bestLoc
}
// matchBitapScore computes and returns the score for a match with e errors and x location.
func (dmp *DiffMatchPatch) matchBitapScore(e, x, loc int, pattern string) float64 {
accuracy := float64(e) / float64(len(pattern))
proximity := math.Abs(float64(loc - x))
if dmp.MatchDistance == 0 {
// Dodge divide by zero error.
if proximity == 0 {
return accuracy
}
return 1.0
}
return accuracy + (proximity / float64(dmp.MatchDistance))
}
// MatchAlphabet initialises the alphabet for the Bitap algorithm.
func (dmp *DiffMatchPatch) MatchAlphabet(pattern string) map[byte]int {
s := map[byte]int{}
charPattern := []byte(pattern)
for _, c := range charPattern {
_, ok := s[c]
if !ok {
s[c] = 0
}
}
i := 0
for _, c := range charPattern {
value := s[c] | int(uint(1)<<uint((len(pattern)-i-1)))
s[c] = value
i++
}
return s
}

View File

@@ -0,0 +1,23 @@
// Copyright (c) 2012-2016 The go-diff authors. All rights reserved.
// https://github.com/sergi/go-diff
// See the included LICENSE file for license details.
//
// go-diff is a Go implementation of Google's Diff, Match, and Patch library
// Original library is Copyright (c) 2006 Google Inc.
// http://code.google.com/p/google-diff-match-patch/
package diffmatchpatch
func min(x, y int) int {
if x < y {
return x
}
return y
}
func max(x, y int) int {
if x > y {
return x
}
return y
}

View File

@@ -0,0 +1,556 @@
// Copyright (c) 2012-2016 The go-diff authors. All rights reserved.
// https://github.com/sergi/go-diff
// See the included LICENSE file for license details.
//
// go-diff is a Go implementation of Google's Diff, Match, and Patch library
// Original library is Copyright (c) 2006 Google Inc.
// http://code.google.com/p/google-diff-match-patch/
package diffmatchpatch
import (
"bytes"
"errors"
"math"
"net/url"
"regexp"
"strconv"
"strings"
)
// Patch represents one patch operation.
type Patch struct {
diffs []Diff
Start1 int
Start2 int
Length1 int
Length2 int
}
// String emulates GNU diff's format.
// Header: @@ -382,8 +481,9 @@
// Indices are printed as 1-based, not 0-based.
func (p *Patch) String() string {
var coords1, coords2 string
if p.Length1 == 0 {
coords1 = strconv.Itoa(p.Start1) + ",0"
} else if p.Length1 == 1 {
coords1 = strconv.Itoa(p.Start1 + 1)
} else {
coords1 = strconv.Itoa(p.Start1+1) + "," + strconv.Itoa(p.Length1)
}
if p.Length2 == 0 {
coords2 = strconv.Itoa(p.Start2) + ",0"
} else if p.Length2 == 1 {
coords2 = strconv.Itoa(p.Start2 + 1)
} else {
coords2 = strconv.Itoa(p.Start2+1) + "," + strconv.Itoa(p.Length2)
}
var text bytes.Buffer
_, _ = text.WriteString("@@ -" + coords1 + " +" + coords2 + " @@\n")
// Escape the body of the patch with %xx notation.
for _, aDiff := range p.diffs {
switch aDiff.Type {
case DiffInsert:
_, _ = text.WriteString("+")
case DiffDelete:
_, _ = text.WriteString("-")
case DiffEqual:
_, _ = text.WriteString(" ")
}
_, _ = text.WriteString(strings.Replace(url.QueryEscape(aDiff.Text), "+", " ", -1))
_, _ = text.WriteString("\n")
}
return unescaper.Replace(text.String())
}
// PatchAddContext increases the context until it is unique, but doesn't let the pattern expand beyond MatchMaxBits.
func (dmp *DiffMatchPatch) PatchAddContext(patch Patch, text string) Patch {
if len(text) == 0 {
return patch
}
pattern := text[patch.Start2 : patch.Start2+patch.Length1]
padding := 0
// Look for the first and last matches of pattern in text. If two different matches are found, increase the pattern length.
for strings.Index(text, pattern) != strings.LastIndex(text, pattern) &&
len(pattern) < dmp.MatchMaxBits-2*dmp.PatchMargin {
padding += dmp.PatchMargin
maxStart := max(0, patch.Start2-padding)
minEnd := min(len(text), patch.Start2+patch.Length1+padding)
pattern = text[maxStart:minEnd]
}
// Add one chunk for good luck.
padding += dmp.PatchMargin
// Add the prefix.
prefix := text[max(0, patch.Start2-padding):patch.Start2]
if len(prefix) != 0 {
patch.diffs = append([]Diff{Diff{DiffEqual, prefix}}, patch.diffs...)
}
// Add the suffix.
suffix := text[patch.Start2+patch.Length1 : min(len(text), patch.Start2+patch.Length1+padding)]
if len(suffix) != 0 {
patch.diffs = append(patch.diffs, Diff{DiffEqual, suffix})
}
// Roll back the start points.
patch.Start1 -= len(prefix)
patch.Start2 -= len(prefix)
// Extend the lengths.
patch.Length1 += len(prefix) + len(suffix)
patch.Length2 += len(prefix) + len(suffix)
return patch
}
// PatchMake computes a list of patches.
func (dmp *DiffMatchPatch) PatchMake(opt ...interface{}) []Patch {
if len(opt) == 1 {
diffs, _ := opt[0].([]Diff)
text1 := dmp.DiffText1(diffs)
return dmp.PatchMake(text1, diffs)
} else if len(opt) == 2 {
text1 := opt[0].(string)
switch t := opt[1].(type) {
case string:
diffs := dmp.DiffMain(text1, t, true)
if len(diffs) > 2 {
diffs = dmp.DiffCleanupSemantic(diffs)
diffs = dmp.DiffCleanupEfficiency(diffs)
}
return dmp.PatchMake(text1, diffs)
case []Diff:
return dmp.patchMake2(text1, t)
}
} else if len(opt) == 3 {
return dmp.PatchMake(opt[0], opt[2])
}
return []Patch{}
}
// patchMake2 computes a list of patches to turn text1 into text2.
// text2 is not provided; diffs are the delta between text1 and text2.
func (dmp *DiffMatchPatch) patchMake2(text1 string, diffs []Diff) []Patch {
// Check for null inputs not needed since null can't be passed in C#.
patches := []Patch{}
if len(diffs) == 0 {
return patches // Get rid of the null case.
}
patch := Patch{}
charCount1 := 0 // Number of characters into the text1 string.
charCount2 := 0 // Number of characters into the text2 string.
// Start with text1 (prepatchText) and apply the diffs until we arrive at text2 (postpatchText). We recreate the patches one by one to determine context info.
prepatchText := text1
postpatchText := text1
for i, aDiff := range diffs {
if len(patch.diffs) == 0 && aDiff.Type != DiffEqual {
// A new patch starts here.
patch.Start1 = charCount1
patch.Start2 = charCount2
}
switch aDiff.Type {
case DiffInsert:
patch.diffs = append(patch.diffs, aDiff)
patch.Length2 += len(aDiff.Text)
postpatchText = postpatchText[:charCount2] +
aDiff.Text + postpatchText[charCount2:]
case DiffDelete:
patch.Length1 += len(aDiff.Text)
patch.diffs = append(patch.diffs, aDiff)
postpatchText = postpatchText[:charCount2] + postpatchText[charCount2+len(aDiff.Text):]
case DiffEqual:
if len(aDiff.Text) <= 2*dmp.PatchMargin &&
len(patch.diffs) != 0 && i != len(diffs)-1 {
// Small equality inside a patch.
patch.diffs = append(patch.diffs, aDiff)
patch.Length1 += len(aDiff.Text)
patch.Length2 += len(aDiff.Text)
}
if len(aDiff.Text) >= 2*dmp.PatchMargin {
// Time for a new patch.
if len(patch.diffs) != 0 {
patch = dmp.PatchAddContext(patch, prepatchText)
patches = append(patches, patch)
patch = Patch{}
// Unlike Unidiff, our patch lists have a rolling context (see http://code.google.com/p/google-diff-match-patch/wiki/Unidiff). Update prepatch text & pos to reflect the application of the just completed patch.
prepatchText = postpatchText
charCount1 = charCount2
}
}
}
// Update the current character count.
if aDiff.Type != DiffInsert {
charCount1 += len(aDiff.Text)
}
if aDiff.Type != DiffDelete {
charCount2 += len(aDiff.Text)
}
}
// Pick up the leftover patch if not empty.
if len(patch.diffs) != 0 {
patch = dmp.PatchAddContext(patch, prepatchText)
patches = append(patches, patch)
}
return patches
}
// PatchDeepCopy returns an array that is identical to a given array of patches.
func (dmp *DiffMatchPatch) PatchDeepCopy(patches []Patch) []Patch {
patchesCopy := []Patch{}
for _, aPatch := range patches {
patchCopy := Patch{}
for _, aDiff := range aPatch.diffs {
patchCopy.diffs = append(patchCopy.diffs, Diff{
aDiff.Type,
aDiff.Text,
})
}
patchCopy.Start1 = aPatch.Start1
patchCopy.Start2 = aPatch.Start2
patchCopy.Length1 = aPatch.Length1
patchCopy.Length2 = aPatch.Length2
patchesCopy = append(patchesCopy, patchCopy)
}
return patchesCopy
}
// PatchApply merges a set of patches onto the text. Returns a patched text, as well as an array of true/false values indicating which patches were applied.
func (dmp *DiffMatchPatch) PatchApply(patches []Patch, text string) (string, []bool) {
if len(patches) == 0 {
return text, []bool{}
}
// Deep copy the patches so that no changes are made to originals.
patches = dmp.PatchDeepCopy(patches)
nullPadding := dmp.PatchAddPadding(patches)
text = nullPadding + text + nullPadding
patches = dmp.PatchSplitMax(patches)
x := 0
// delta keeps track of the offset between the expected and actual location of the previous patch. If there are patches expected at positions 10 and 20, but the first patch was found at 12, delta is 2 and the second patch has an effective expected position of 22.
delta := 0
results := make([]bool, len(patches))
for _, aPatch := range patches {
expectedLoc := aPatch.Start2 + delta
text1 := dmp.DiffText1(aPatch.diffs)
var startLoc int
endLoc := -1
if len(text1) > dmp.MatchMaxBits {
// PatchSplitMax will only provide an oversized pattern in the case of a monster delete.
startLoc = dmp.MatchMain(text, text1[:dmp.MatchMaxBits], expectedLoc)
if startLoc != -1 {
endLoc = dmp.MatchMain(text,
text1[len(text1)-dmp.MatchMaxBits:], expectedLoc+len(text1)-dmp.MatchMaxBits)
if endLoc == -1 || startLoc >= endLoc {
// Can't find valid trailing context. Drop this patch.
startLoc = -1
}
}
} else {
startLoc = dmp.MatchMain(text, text1, expectedLoc)
}
if startLoc == -1 {
// No match found. :(
results[x] = false
// Subtract the delta for this failed patch from subsequent patches.
delta -= aPatch.Length2 - aPatch.Length1
} else {
// Found a match. :)
results[x] = true
delta = startLoc - expectedLoc
var text2 string
if endLoc == -1 {
text2 = text[startLoc:int(math.Min(float64(startLoc+len(text1)), float64(len(text))))]
} else {
text2 = text[startLoc:int(math.Min(float64(endLoc+dmp.MatchMaxBits), float64(len(text))))]
}
if text1 == text2 {
// Perfect match, just shove the Replacement text in.
text = text[:startLoc] + dmp.DiffText2(aPatch.diffs) + text[startLoc+len(text1):]
} else {
// Imperfect match. Run a diff to get a framework of equivalent indices.
diffs := dmp.DiffMain(text1, text2, false)
if len(text1) > dmp.MatchMaxBits && float64(dmp.DiffLevenshtein(diffs))/float64(len(text1)) > dmp.PatchDeleteThreshold {
// The end points match, but the content is unacceptably bad.
results[x] = false
} else {
diffs = dmp.DiffCleanupSemanticLossless(diffs)
index1 := 0
for _, aDiff := range aPatch.diffs {
if aDiff.Type != DiffEqual {
index2 := dmp.DiffXIndex(diffs, index1)
if aDiff.Type == DiffInsert {
// Insertion
text = text[:startLoc+index2] + aDiff.Text + text[startLoc+index2:]
} else if aDiff.Type == DiffDelete {
// Deletion
startIndex := startLoc + index2
text = text[:startIndex] +
text[startIndex+dmp.DiffXIndex(diffs, index1+len(aDiff.Text))-index2:]
}
}
if aDiff.Type != DiffDelete {
index1 += len(aDiff.Text)
}
}
}
}
}
x++
}
// Strip the padding off.
text = text[len(nullPadding) : len(nullPadding)+(len(text)-2*len(nullPadding))]
return text, results
}
// PatchAddPadding adds some padding on text start and end so that edges can match something.
// Intended to be called only from within PatchApply.
func (dmp *DiffMatchPatch) PatchAddPadding(patches []Patch) string {
paddingLength := dmp.PatchMargin
nullPadding := ""
for x := 1; x <= paddingLength; x++ {
nullPadding += string(x)
}
// Bump all the patches forward.
for i := range patches {
patches[i].Start1 += paddingLength
patches[i].Start2 += paddingLength
}
// Add some padding on start of first diff.
if len(patches[0].diffs) == 0 || patches[0].diffs[0].Type != DiffEqual {
// Add nullPadding equality.
patches[0].diffs = append([]Diff{Diff{DiffEqual, nullPadding}}, patches[0].diffs...)
patches[0].Start1 -= paddingLength // Should be 0.
patches[0].Start2 -= paddingLength // Should be 0.
patches[0].Length1 += paddingLength
patches[0].Length2 += paddingLength
} else if paddingLength > len(patches[0].diffs[0].Text) {
// Grow first equality.
extraLength := paddingLength - len(patches[0].diffs[0].Text)
patches[0].diffs[0].Text = nullPadding[len(patches[0].diffs[0].Text):] + patches[0].diffs[0].Text
patches[0].Start1 -= extraLength
patches[0].Start2 -= extraLength
patches[0].Length1 += extraLength
patches[0].Length2 += extraLength
}
// Add some padding on end of last diff.
last := len(patches) - 1
if len(patches[last].diffs) == 0 || patches[last].diffs[len(patches[last].diffs)-1].Type != DiffEqual {
// Add nullPadding equality.
patches[last].diffs = append(patches[last].diffs, Diff{DiffEqual, nullPadding})
patches[last].Length1 += paddingLength
patches[last].Length2 += paddingLength
} else if paddingLength > len(patches[last].diffs[len(patches[last].diffs)-1].Text) {
// Grow last equality.
lastDiff := patches[last].diffs[len(patches[last].diffs)-1]
extraLength := paddingLength - len(lastDiff.Text)
patches[last].diffs[len(patches[last].diffs)-1].Text += nullPadding[:extraLength]
patches[last].Length1 += extraLength
patches[last].Length2 += extraLength
}
return nullPadding
}
// PatchSplitMax looks through the patches and breaks up any which are longer than the maximum limit of the match algorithm.
// Intended to be called only from within PatchApply.
func (dmp *DiffMatchPatch) PatchSplitMax(patches []Patch) []Patch {
patchSize := dmp.MatchMaxBits
for x := 0; x < len(patches); x++ {
if patches[x].Length1 <= patchSize {
continue
}
bigpatch := patches[x]
// Remove the big old patch.
patches = append(patches[:x], patches[x+1:]...)
x--
Start1 := bigpatch.Start1
Start2 := bigpatch.Start2
precontext := ""
for len(bigpatch.diffs) != 0 {
// Create one of several smaller patches.
patch := Patch{}
empty := true
patch.Start1 = Start1 - len(precontext)
patch.Start2 = Start2 - len(precontext)
if len(precontext) != 0 {
patch.Length1 = len(precontext)
patch.Length2 = len(precontext)
patch.diffs = append(patch.diffs, Diff{DiffEqual, precontext})
}
for len(bigpatch.diffs) != 0 && patch.Length1 < patchSize-dmp.PatchMargin {
diffType := bigpatch.diffs[0].Type
diffText := bigpatch.diffs[0].Text
if diffType == DiffInsert {
// Insertions are harmless.
patch.Length2 += len(diffText)
Start2 += len(diffText)
patch.diffs = append(patch.diffs, bigpatch.diffs[0])
bigpatch.diffs = bigpatch.diffs[1:]
empty = false
} else if diffType == DiffDelete && len(patch.diffs) == 1 && patch.diffs[0].Type == DiffEqual && len(diffText) > 2*patchSize {
// This is a large deletion. Let it pass in one chunk.
patch.Length1 += len(diffText)
Start1 += len(diffText)
empty = false
patch.diffs = append(patch.diffs, Diff{diffType, diffText})
bigpatch.diffs = bigpatch.diffs[1:]
} else {
// Deletion or equality. Only take as much as we can stomach.
diffText = diffText[:min(len(diffText), patchSize-patch.Length1-dmp.PatchMargin)]
patch.Length1 += len(diffText)
Start1 += len(diffText)
if diffType == DiffEqual {
patch.Length2 += len(diffText)
Start2 += len(diffText)
} else {
empty = false
}
patch.diffs = append(patch.diffs, Diff{diffType, diffText})
if diffText == bigpatch.diffs[0].Text {
bigpatch.diffs = bigpatch.diffs[1:]
} else {
bigpatch.diffs[0].Text =
bigpatch.diffs[0].Text[len(diffText):]
}
}
}
// Compute the head context for the next patch.
precontext = dmp.DiffText2(patch.diffs)
precontext = precontext[max(0, len(precontext)-dmp.PatchMargin):]
postcontext := ""
// Append the end context for this patch.
if len(dmp.DiffText1(bigpatch.diffs)) > dmp.PatchMargin {
postcontext = dmp.DiffText1(bigpatch.diffs)[:dmp.PatchMargin]
} else {
postcontext = dmp.DiffText1(bigpatch.diffs)
}
if len(postcontext) != 0 {
patch.Length1 += len(postcontext)
patch.Length2 += len(postcontext)
if len(patch.diffs) != 0 && patch.diffs[len(patch.diffs)-1].Type == DiffEqual {
patch.diffs[len(patch.diffs)-1].Text += postcontext
} else {
patch.diffs = append(patch.diffs, Diff{DiffEqual, postcontext})
}
}
if !empty {
x++
patches = append(patches[:x], append([]Patch{patch}, patches[x:]...)...)
}
}
}
return patches
}
// PatchToText takes a list of patches and returns a textual representation.
func (dmp *DiffMatchPatch) PatchToText(patches []Patch) string {
var text bytes.Buffer
for _, aPatch := range patches {
_, _ = text.WriteString(aPatch.String())
}
return text.String()
}
// PatchFromText parses a textual representation of patches and returns a List of Patch objects.
func (dmp *DiffMatchPatch) PatchFromText(textline string) ([]Patch, error) {
patches := []Patch{}
if len(textline) == 0 {
return patches, nil
}
text := strings.Split(textline, "\n")
textPointer := 0
patchHeader := regexp.MustCompile("^@@ -(\\d+),?(\\d*) \\+(\\d+),?(\\d*) @@$")
var patch Patch
var sign uint8
var line string
for textPointer < len(text) {
if !patchHeader.MatchString(text[textPointer]) {
return patches, errors.New("Invalid patch string: " + text[textPointer])
}
patch = Patch{}
m := patchHeader.FindStringSubmatch(text[textPointer])
patch.Start1, _ = strconv.Atoi(m[1])
if len(m[2]) == 0 {
patch.Start1--
patch.Length1 = 1
} else if m[2] == "0" {
patch.Length1 = 0
} else {
patch.Start1--
patch.Length1, _ = strconv.Atoi(m[2])
}
patch.Start2, _ = strconv.Atoi(m[3])
if len(m[4]) == 0 {
patch.Start2--
patch.Length2 = 1
} else if m[4] == "0" {
patch.Length2 = 0
} else {
patch.Start2--
patch.Length2, _ = strconv.Atoi(m[4])
}
textPointer++
for textPointer < len(text) {
if len(text[textPointer]) > 0 {
sign = text[textPointer][0]
} else {
textPointer++
continue
}
line = text[textPointer][1:]
line = strings.Replace(line, "+", "%2b", -1)
line, _ = url.QueryUnescape(line)
if sign == '-' {
// Deletion.
patch.diffs = append(patch.diffs, Diff{DiffDelete, line})
} else if sign == '+' {
// Insertion.
patch.diffs = append(patch.diffs, Diff{DiffInsert, line})
} else if sign == ' ' {
// Minor equality.
patch.diffs = append(patch.diffs, Diff{DiffEqual, line})
} else if sign == '@' {
// Start of next patch.
break
} else {
// Unrecognized patch mode.
return patches, errors.New("Invalid patch mode '" + string(sign) + "' in: " + string(line))
}
textPointer++
}
patches = append(patches, patch)
}
return patches, nil
}
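For context (an illustrative sketch, not part of the vendored source), the patch functions above compose into a make/serialize/parse/apply round trip; the import path and sample texts are assumptions.
package main

import (
	"fmt"

	"github.com/sergi/go-diff/diffmatchpatch"
)

func main() {
	dmp := diffmatchpatch.New()
	text1 := "The quick brown fox jumps over the lazy dog."
	text2 := "The quick red fox leaps over the lazy dog."

	patches := dmp.PatchMake(text1, text2) // []Patch built from a diff of the two texts
	wire := dmp.PatchToText(patches)       // GNU-diff-like textual form with "@@ -a,b +c,d @@" headers
	restored, err := dmp.PatchFromText(wire)
	if err != nil {
		panic(err)
	}
	patched, applied := dmp.PatchApply(restored, text1)
	fmt.Println(patched == text2) // true when every patch applied cleanly
	fmt.Println(applied)          // one bool per patch
}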

View File

@@ -0,0 +1,88 @@
// Copyright (c) 2012-2016 The go-diff authors. All rights reserved.
// https://github.com/sergi/go-diff
// See the included LICENSE file for license details.
//
// go-diff is a Go implementation of Google's Diff, Match, and Patch library
// Original library is Copyright (c) 2006 Google Inc.
// http://code.google.com/p/google-diff-match-patch/
package diffmatchpatch
import (
"strings"
"unicode/utf8"
)
// unescaper unescapes selected chars for compatibility with JavaScript's encodeURI.
// In speed critical applications this could be dropped since the receiving application will certainly decode these fine. Note that this function is case-sensitive. Thus "%3F" would be unescaped but "%3f" would not. This is fine because it is only applied to the output of url.QueryEscape, which returns uppercase hex. Example: "%3F" -> "?", "%24" -> "$", etc.
var unescaper = strings.NewReplacer(
"%21", "!", "%7E", "~", "%27", "'",
"%28", "(", "%29", ")", "%3B", ";",
"%2F", "/", "%3F", "?", "%3A", ":",
"%40", "@", "%26", "&", "%3D", "=",
"%2B", "+", "%24", "$", "%2C", ",", "%23", "#", "%2A", "*")
// indexOf returns the first index of pattern in str, starting at str[i].
func indexOf(str string, pattern string, i int) int {
if i > len(str)-1 {
return -1
}
if i <= 0 {
return strings.Index(str, pattern)
}
ind := strings.Index(str[i:], pattern)
if ind == -1 {
return -1
}
return ind + i
}
// lastIndexOf returns the last index of pattern in str, starting at str[i].
func lastIndexOf(str string, pattern string, i int) int {
if i < 0 {
return -1
}
if i >= len(str) {
return strings.LastIndex(str, pattern)
}
_, size := utf8.DecodeRuneInString(str[i:])
return strings.LastIndex(str[:i+size], pattern)
}
// runesIndexOf returns the index of pattern in target, starting at target[i].
func runesIndexOf(target, pattern []rune, i int) int {
if i > len(target)-1 {
return -1
}
if i <= 0 {
return runesIndex(target, pattern)
}
ind := runesIndex(target[i:], pattern)
if ind == -1 {
return -1
}
return ind + i
}
func runesEqual(r1, r2 []rune) bool {
if len(r1) != len(r2) {
return false
}
for i, c := range r1 {
if c != r2[i] {
return false
}
}
return true
}
// runesIndex is the equivalent of strings.Index for rune slices.
func runesIndex(r1, r2 []rune) int {
last := len(r1) - len(r2)
for i := 0; i <= last; i++ {
if runesEqual(r1[i:i+len(r2)], r2) {
return i
}
}
return -1
}

28
backend/vendor/github.com/src-d/gcfg/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,28 @@
Copyright (c) 2012 Péter Surányi. Portions Copyright (c) 2009 The Go
Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

4
backend/vendor/github.com/src-d/gcfg/README generated vendored Normal file
View File

@@ -0,0 +1,4 @@
Gcfg reads INI-style configuration files into Go structs;
supports user-defined types and subsections.
Package docs: https://godoc.org/gopkg.in/gcfg.v1

145
backend/vendor/github.com/src-d/gcfg/doc.go generated vendored Normal file
View File

@@ -0,0 +1,145 @@
// Package gcfg reads "INI-style" text-based configuration files with
// "name=value" pairs grouped into sections (gcfg files).
//
// This package is still a work in progress; see the sections below for planned
// changes.
//
// Syntax
//
// The syntax is based on that used by git config:
// http://git-scm.com/docs/git-config#_syntax .
// There are some (planned) differences compared to the git config format:
// - improve data portability:
// - must be encoded in UTF-8 (for now) and must not contain the 0 byte
// - include and "path" type is not supported
// (path type may be implementable as a user-defined type)
// - internationalization
// - section and variable names can contain unicode letters, unicode digits
// (as defined in http://golang.org/ref/spec#Characters ) and hyphens
// (U+002D), starting with a unicode letter
// - disallow potentially ambiguous or misleading definitions:
// - `[sec.sub]` format is not allowed (deprecated in gitconfig)
// - `[sec ""]` is not allowed
// - use `[sec]` for section name "sec" and empty subsection name
// - (planned) within a single file, definitions must be contiguous for each:
// - section: '[secA]' -> '[secB]' -> '[secA]' is an error
// - subsection: '[sec "A"]' -> '[sec "B"]' -> '[sec "A"]' is an error
// - multivalued variable: 'multi=a' -> 'other=x' -> 'multi=b' is an error
//
// Data structure
//
// The functions in this package read values into a user-defined struct.
// Each section corresponds to a struct field in the config struct, and each
// variable in a section corresponds to a data field in the section struct.
// The mapping of each section or variable name to fields is done either based
// on the "gcfg" struct tag or by matching the name of the section or variable,
// ignoring case. In the latter case, hyphens '-' in section and variable names
// correspond to underscores '_' in field names.
// Fields must be exported; to use a section or variable name starting with a
// letter that is neither upper- nor lower-case, prefix the field name with 'X'.
// (See https://code.google.com/p/go/issues/detail?id=5763#c4 .)
//
// For sections with subsections, the corresponding field in config must be a
// map, rather than a struct, with string keys and pointer-to-struct values.
// Values for subsection variables are stored in the map with the subsection
// name used as the map key.
// (Note that unlike section and variable names, subsection names are case
// sensitive.)
// When using a map, and there is a section with the same section name but
// without a subsection name, its values are stored with the empty string used
// as the key.
// It is possible to provide default values for subsections in the section
// "default-<sectionname>" (or by setting values in the corresponding struct
// field "Default_<sectionname>").
//
// The functions in this package panic if config is not a pointer to a struct,
// or when a field is not of a suitable type (either a struct or a map with
// string keys and pointer-to-struct values).
//
// Parsing of values
//
// The section structs in the config struct may contain single-valued or
// multi-valued variables. Variables of unnamed slice type (that is, a type
// starting with `[]`) are treated as multi-value; all others (including named
// slice types) are treated as single-valued variables.
//
// Single-valued variables are handled based on the type as follows.
// Unnamed pointer types (that is, types starting with `*`) are dereferenced,
// and if necessary, a new instance is allocated.
//
// For types implementing the encoding.TextUnmarshaler interface, the
// UnmarshalText method is used to set the value. Implementing this method is
// the recommended way for parsing user-defined types.
//
// For fields of string kind, the value string is assigned to the field, after
// unquoting and unescaping as needed.
// For fields of bool kind, the field is set to true if the value is "true",
// "yes", "on" or "1", and set to false if the value is "false", "no", "off" or
// "0", ignoring case. In addition, single-valued bool fields can be specified
// with a "blank" value (variable name without equals sign and value); in such
// case the value is set to true.
//
// Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as
// decimal or hexadecimal (if having '0x' prefix). (This is to prevent
// unintuitively handling zero-padded numbers as octal.) Other types having
// [u]int* as the underlying type, such as os.FileMode and uintptr, allow
// decimal, hexadecimal, or octal values.
// Parsing mode for integer types can be overridden using the struct tag option
// ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters
// (each standing for decimal, hexadecimal, and octal, respectively.)
//
// All other types are parsed using fmt.Sscanf with the "%v" verb.
//
// For multi-valued variables, each individual value is parsed as above and
// appended to the slice. If the first value is specified as a "blank" value
// (variable name without equals sign and value), a new slice is allocated;
// that is, any values previously set in the slice will be ignored.
//
// The types subpackage provides helpers for parsing "enum-like" and integer
// types.
//
// Error handling
//
// There are 3 types of errors:
//
// - programmer errors / panics:
// - invalid configuration structure
// - data errors:
// - fatal errors:
// - invalid configuration syntax
// - warnings:
// - data that doesn't belong to any part of the config structure
//
// Programmer errors trigger panics. These should be fixed by the programmer
// before releasing code that uses gcfg.
//
// Data errors cause gcfg to return a non-nil error value. This includes the
// case when there are extra unknown key-value definitions in the configuration
// data (extra data).
// However, in some occasions it is desirable to be able to proceed in
// situations when the only data error is that of extra data.
// These errors are handled at a different (warning) priority and can be
// filtered out programmatically. To ignore extra data warnings, wrap the
// gcfg.Read*Into invocation into a call to gcfg.FatalOnly.
//
// TODO
//
// The following is a list of changes under consideration:
// - documentation
// - self-contained syntax documentation
// - more practical examples
// - move TODOs to issue tracker (eventually)
// - syntax
// - reconsider valid escape sequences
// (gitconfig doesn't support \r in value, \t in subsection name, etc.)
// - reading / parsing gcfg files
// - define internal representation structure
// - support multiple inputs (readers, strings, files)
// - support declaring encoding (?)
// - support varying fields sets for subsections (?)
// - writing gcfg files
// - error handling
// - make error context accessible programmatically?
// - limit input size?
//
package gcfg // import "github.com/src-d/gcfg"
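A minimal usage sketch of the mapping described above (section struct, subsection map, blank bool, hex int); the struct and configuration text are illustrative assumptions, not taken from Crawlab.
package main

import (
	"fmt"

	"github.com/src-d/gcfg"
)

type Config struct {
	// Matches "[section]"; field names are matched case-insensitively.
	Section struct {
		Name    string
		Debug   bool // "debug" with no value sets this to true
		Workers int  // decimal, or hexadecimal with a 0x prefix
	}
	// A map field handles "[profile \"<name>\"]" subsections.
	Profile map[string]*struct {
		Endpoint string
	}
}

func main() {
	raw := "[section]\nname = example\ndebug\nworkers = 0x10\n\n[profile \"dev\"]\nendpoint = http://localhost:8080\n"
	var cfg Config
	if err := gcfg.ReadStringInto(&cfg, raw); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Section.Name, cfg.Section.Debug, cfg.Section.Workers) // example true 16
	fmt.Println(cfg.Profile["dev"].Endpoint)                              // http://localhost:8080
}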

41
backend/vendor/github.com/src-d/gcfg/errors.go generated vendored Normal file
View File

@@ -0,0 +1,41 @@
package gcfg
import (
"gopkg.in/warnings.v0"
)
// FatalOnly filters the results of a Read*Into invocation and returns only
// fatal errors. That is, errors (warnings) indicating data for unknown
// sections / variables is ignored. Example invocation:
//
// err := gcfg.FatalOnly(gcfg.ReadFileInto(&cfg, configFile))
// if err != nil {
// ...
//
func FatalOnly(err error) error {
return warnings.FatalOnly(err)
}
func isFatal(err error) bool {
_, ok := err.(extraData)
return !ok
}
type extraData struct {
section string
subsection *string
variable *string
}
func (e extraData) Error() string {
s := "can't store data at section \"" + e.section + "\""
if e.subsection != nil {
s += ", subsection \"" + *e.subsection + "\""
}
if e.variable != nil {
s += ", variable \"" + *e.variable + "\""
}
return s
}
var _ error = extraData{}

7
backend/vendor/github.com/src-d/gcfg/go1_0.go generated vendored Normal file
View File

@@ -0,0 +1,7 @@
// +build !go1.2
package gcfg
type textUnmarshaler interface {
UnmarshalText(text []byte) error
}

9
backend/vendor/github.com/src-d/gcfg/go1_2.go generated vendored Normal file
View File

@@ -0,0 +1,9 @@
// +build go1.2
package gcfg
import (
"encoding"
)
type textUnmarshaler encoding.TextUnmarshaler

273
backend/vendor/github.com/src-d/gcfg/read.go generated vendored Normal file
View File

@@ -0,0 +1,273 @@
package gcfg
import (
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"github.com/src-d/gcfg/scanner"
"github.com/src-d/gcfg/token"
"gopkg.in/warnings.v0"
)
var unescape = map[rune]rune{'\\': '\\', '"': '"', 'n': '\n', 't': '\t', 'b': '\b'}
// no error: invalid literals should be caught by scanner
func unquote(s string) string {
u, q, esc := make([]rune, 0, len(s)), false, false
for _, c := range s {
if esc {
uc, ok := unescape[c]
switch {
case ok:
u = append(u, uc)
fallthrough
case !q && c == '\n':
esc = false
continue
}
panic("invalid escape sequence")
}
switch c {
case '"':
q = !q
case '\\':
esc = true
default:
u = append(u, c)
}
}
if q {
panic("missing end quote")
}
if esc {
panic("invalid escape sequence")
}
return string(u)
}
func read(c *warnings.Collector, callback func(string, string, string, string, bool) error,
fset *token.FileSet, file *token.File, src []byte) error {
//
var s scanner.Scanner
var errs scanner.ErrorList
s.Init(file, src, func(p token.Position, m string) { errs.Add(p, m) }, 0)
sect, sectsub := "", ""
pos, tok, lit := s.Scan()
errfn := func(msg string) error {
return fmt.Errorf("%s: %s", fset.Position(pos), msg)
}
for {
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
switch tok {
case token.EOF:
return nil
case token.EOL, token.COMMENT:
pos, tok, lit = s.Scan()
case token.LBRACK:
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
if tok != token.IDENT {
if err := c.Collect(errfn("expected section name")); err != nil {
return err
}
}
sect, sectsub = lit, ""
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
if tok == token.STRING {
sectsub = unquote(lit)
if sectsub == "" {
if err := c.Collect(errfn("empty subsection name")); err != nil {
return err
}
}
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
}
if tok != token.RBRACK {
if sectsub == "" {
if err := c.Collect(errfn("expected subsection name or right bracket")); err != nil {
return err
}
}
if err := c.Collect(errfn("expected right bracket")); err != nil {
return err
}
}
pos, tok, lit = s.Scan()
if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
if err := c.Collect(errfn("expected EOL, EOF, or comment")); err != nil {
return err
}
}
// If a section/subsection header was found, ensure a
// container object is created, even if there are no
// variables further down.
err := c.Collect(callback(sect, sectsub, "", "", true))
if err != nil {
return err
}
case token.IDENT:
if sect == "" {
if err := c.Collect(errfn("expected section header")); err != nil {
return err
}
}
n := lit
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
return errs.Err()
}
blank, v := tok == token.EOF || tok == token.EOL || tok == token.COMMENT, ""
if !blank {
if tok != token.ASSIGN {
if err := c.Collect(errfn("expected '='")); err != nil {
return err
}
}
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
if tok != token.STRING {
if err := c.Collect(errfn("expected value")); err != nil {
return err
}
}
v = unquote(lit)
pos, tok, lit = s.Scan()
if errs.Len() > 0 {
if err := c.Collect(errs.Err()); err != nil {
return err
}
}
if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
if err := c.Collect(errfn("expected EOL, EOF, or comment")); err != nil {
return err
}
}
}
err := c.Collect(callback(sect, sectsub, n, v, blank))
if err != nil {
return err
}
default:
if sect == "" {
if err := c.Collect(errfn("expected section header")); err != nil {
return err
}
}
if err := c.Collect(errfn("expected section header or variable declaration")); err != nil {
return err
}
}
}
panic("never reached")
}
func readInto(config interface{}, fset *token.FileSet, file *token.File,
src []byte) error {
//
c := warnings.NewCollector(isFatal)
firstPassCallback := func(s string, ss string, k string, v string, bv bool) error {
return set(c, config, s, ss, k, v, bv, false)
}
err := read(c, firstPassCallback, fset, file, src)
if err != nil {
return err
}
secondPassCallback := func(s string, ss string, k string, v string, bv bool) error {
return set(c, config, s, ss, k, v, bv, true)
}
err = read(c, secondPassCallback, fset, file, src)
if err != nil {
return err
}
return c.Done()
}
// ReadWithCallback reads gcfg formatted data from reader and calls
// callback with each section and option found.
//
// Callback is called with section, subsection, option key, option value
// and blank value flag as arguments.
//
// When a section is found, callback is called with empty subsection, option key
// and option value.
//
// When a subsection is found, callback is called with empty option key and
// option value.
//
// If blank value flag is true, it means that the value was not set for an option
// (as opposed to set to empty string).
//
// If callback returns an error, ReadWithCallback terminates with an error too.
func ReadWithCallback(reader io.Reader, callback func(string, string, string, string, bool) error) error {
src, err := ioutil.ReadAll(reader)
if err != nil {
return err
}
fset := token.NewFileSet()
file := fset.AddFile("", fset.Base(), len(src))
c := warnings.NewCollector(isFatal)
return read(c, callback, fset, file, src)
}
// ReadInto reads gcfg formatted data from reader and sets the values into the
// corresponding fields in config.
func ReadInto(config interface{}, reader io.Reader) error {
src, err := ioutil.ReadAll(reader)
if err != nil {
return err
}
fset := token.NewFileSet()
file := fset.AddFile("", fset.Base(), len(src))
return readInto(config, fset, file, src)
}
// ReadStringInto reads gcfg formatted data from str and sets the values into
// the corresponding fields in config.
func ReadStringInto(config interface{}, str string) error {
r := strings.NewReader(str)
return ReadInto(config, r)
}
// ReadFileInto reads gcfg formatted data from the file filename and sets the
// values into the corresponding fields in config.
func ReadFileInto(config interface{}, filename string) error {
f, err := os.Open(filename)
if err != nil {
return err
}
defer f.Close()
src, err := ioutil.ReadAll(f)
if err != nil {
return err
}
fset := token.NewFileSet()
file := fset.AddFile(filename, fset.Base(), len(src))
return readInto(config, fset, file, src)
}
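A short sketch of the callback-based reader above; the input text and print format are illustrative assumptions.
package main

import (
	"fmt"
	"strings"

	"github.com/src-d/gcfg"
)

func main() {
	r := strings.NewReader("[core]\nname = crawlab\nverbose\n")
	err := gcfg.ReadWithCallback(r, func(sect, sub, key, value string, blank bool) error {
		if key == "" {
			// Called once per section/subsection header.
			fmt.Printf("section %q, subsection %q\n", sect, sub)
			return nil
		}
		fmt.Printf("%s.%s = %q (blank=%v)\n", sect, key, value, blank)
		return nil
	})
	if err != nil {
		panic(err)
	}
}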

121
backend/vendor/github.com/src-d/gcfg/scanner/errors.go generated vendored Normal file
View File

@@ -0,0 +1,121 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package scanner
import (
"fmt"
"io"
"sort"
)
import (
"github.com/src-d/gcfg/token"
)
// In an ErrorList, an error is represented by an *Error.
// The position Pos, if valid, points to the beginning of
// the offending token, and the error condition is described
// by Msg.
//
type Error struct {
Pos token.Position
Msg string
}
// Error implements the error interface.
func (e Error) Error() string {
if e.Pos.Filename != "" || e.Pos.IsValid() {
// don't print "<unknown position>"
// TODO(gri) reconsider the semantics of Position.IsValid
return e.Pos.String() + ": " + e.Msg
}
return e.Msg
}
// ErrorList is a list of *Errors.
// The zero value for an ErrorList is an empty ErrorList ready to use.
//
type ErrorList []*Error
// Add adds an Error with given position and error message to an ErrorList.
func (p *ErrorList) Add(pos token.Position, msg string) {
*p = append(*p, &Error{pos, msg})
}
// Reset resets an ErrorList to no errors.
func (p *ErrorList) Reset() { *p = (*p)[0:0] }
// ErrorList implements the sort Interface.
func (p ErrorList) Len() int { return len(p) }
func (p ErrorList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
func (p ErrorList) Less(i, j int) bool {
e := &p[i].Pos
f := &p[j].Pos
if e.Filename < f.Filename {
return true
}
if e.Filename == f.Filename {
return e.Offset < f.Offset
}
return false
}
// Sort sorts an ErrorList. *Error entries are sorted by position,
// other errors are sorted by error message, and before any *Error
// entry.
//
func (p ErrorList) Sort() {
sort.Sort(p)
}
// RemoveMultiples sorts an ErrorList and removes all but the first error per line.
func (p *ErrorList) RemoveMultiples() {
sort.Sort(p)
var last token.Position // initial last.Line is != any legal error line
i := 0
for _, e := range *p {
if e.Pos.Filename != last.Filename || e.Pos.Line != last.Line {
last = e.Pos
(*p)[i] = e
i++
}
}
(*p) = (*p)[0:i]
}
// An ErrorList implements the error interface.
func (p ErrorList) Error() string {
switch len(p) {
case 0:
return "no errors"
case 1:
return p[0].Error()
}
return fmt.Sprintf("%s (and %d more errors)", p[0], len(p)-1)
}
// Err returns an error equivalent to this error list.
// If the list is empty, Err returns nil.
func (p ErrorList) Err() error {
if len(p) == 0 {
return nil
}
return p
}
// PrintError is a utility function that prints a list of errors to w,
// one error per line, if the err parameter is an ErrorList. Otherwise
// it prints the err string.
//
func PrintError(w io.Writer, err error) {
if list, ok := err.(ErrorList); ok {
for _, e := range list {
fmt.Fprintf(w, "%s\n", e)
}
} else if err != nil {
fmt.Fprintf(w, "%s\n", err)
}
}

342
backend/vendor/github.com/src-d/gcfg/scanner/scanner.go generated vendored Normal file
View File

@@ -0,0 +1,342 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package scanner implements a scanner for gcfg configuration text.
// It takes a []byte as source which can then be tokenized
// through repeated calls to the Scan method.
//
// Note that the API for the scanner package may change to accommodate new
// features or implementation changes in gcfg.
//
package scanner
import (
"fmt"
"path/filepath"
"unicode"
"unicode/utf8"
)
import (
"github.com/src-d/gcfg/token"
)
// An ErrorHandler may be provided to Scanner.Init. If a syntax error is
// encountered and a handler was installed, the handler is called with a
// position and an error message. The position points to the beginning of
// the offending token.
//
type ErrorHandler func(pos token.Position, msg string)
// A Scanner holds the scanner's internal state while processing
// a given text. It can be allocated as part of another data
// structure but must be initialized via Init before use.
//
type Scanner struct {
// immutable state
file *token.File // source file handle
dir string // directory portion of file.Name()
src []byte // source
err ErrorHandler // error reporting; or nil
mode Mode // scanning mode
// scanning state
ch rune // current character
offset int // character offset
rdOffset int // reading offset (position after current character)
lineOffset int // current line offset
nextVal bool // next token is expected to be a value
// public state - ok to modify
ErrorCount int // number of errors encountered
}
// Read the next Unicode char into s.ch.
// s.ch < 0 means end-of-file.
//
func (s *Scanner) next() {
if s.rdOffset < len(s.src) {
s.offset = s.rdOffset
if s.ch == '\n' {
s.lineOffset = s.offset
s.file.AddLine(s.offset)
}
r, w := rune(s.src[s.rdOffset]), 1
switch {
case r == 0:
s.error(s.offset, "illegal character NUL")
case r >= 0x80:
// not ASCII
r, w = utf8.DecodeRune(s.src[s.rdOffset:])
if r == utf8.RuneError && w == 1 {
s.error(s.offset, "illegal UTF-8 encoding")
}
}
s.rdOffset += w
s.ch = r
} else {
s.offset = len(s.src)
if s.ch == '\n' {
s.lineOffset = s.offset
s.file.AddLine(s.offset)
}
s.ch = -1 // eof
}
}
// A mode value is a set of flags (or 0).
// They control scanner behavior.
//
type Mode uint
const (
ScanComments Mode = 1 << iota // return comments as COMMENT tokens
)
// Init prepares the scanner s to tokenize the text src by setting the
// scanner at the beginning of src. The scanner uses the file set file
// for position information and it adds line information for each line.
// It is ok to re-use the same file when re-scanning the same file as
// line information which is already present is ignored. Init causes a
// panic if the file size does not match the src size.
//
// Calls to Scan will invoke the error handler err if they encounter a
// syntax error and err is not nil. Also, for each error encountered,
// the Scanner field ErrorCount is incremented by one. The mode parameter
// determines how comments are handled.
//
// Note that Init may call err if there is an error in the first character
// of the file.
//
func (s *Scanner) Init(file *token.File, src []byte, err ErrorHandler, mode Mode) {
// Explicitly initialize all fields since a scanner may be reused.
if file.Size() != len(src) {
panic(fmt.Sprintf("file size (%d) does not match src len (%d)", file.Size(), len(src)))
}
s.file = file
s.dir, _ = filepath.Split(file.Name())
s.src = src
s.err = err
s.mode = mode
s.ch = ' '
s.offset = 0
s.rdOffset = 0
s.lineOffset = 0
s.ErrorCount = 0
s.nextVal = false
s.next()
}
func (s *Scanner) error(offs int, msg string) {
if s.err != nil {
s.err(s.file.Position(s.file.Pos(offs)), msg)
}
s.ErrorCount++
}
func (s *Scanner) scanComment() string {
// initial [;#] already consumed
offs := s.offset - 1 // position of initial [;#]
for s.ch != '\n' && s.ch >= 0 {
s.next()
}
return string(s.src[offs:s.offset])
}
func isLetter(ch rune) bool {
return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch >= 0x80 && unicode.IsLetter(ch)
}
func isDigit(ch rune) bool {
return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch)
}
func (s *Scanner) scanIdentifier() string {
offs := s.offset
for isLetter(s.ch) || isDigit(s.ch) || s.ch == '-' {
s.next()
}
return string(s.src[offs:s.offset])
}
func (s *Scanner) scanEscape(val bool) {
offs := s.offset
ch := s.ch
s.next() // always make progress
switch ch {
case '\\', '"':
// ok
case 'n', 't', 'b':
if val {
break // ok
}
fallthrough
default:
s.error(offs, "unknown escape sequence")
}
}
func (s *Scanner) scanString() string {
// '"' opening already consumed
offs := s.offset - 1
for s.ch != '"' {
ch := s.ch
s.next()
if ch == '\n' || ch < 0 {
s.error(offs, "string not terminated")
break
}
if ch == '\\' {
s.scanEscape(false)
}
}
s.next()
return string(s.src[offs:s.offset])
}
func stripCR(b []byte) []byte {
c := make([]byte, len(b))
i := 0
for _, ch := range b {
if ch != '\r' {
c[i] = ch
i++
}
}
return c[:i]
}
func (s *Scanner) scanValString() string {
offs := s.offset
hasCR := false
end := offs
inQuote := false
loop:
for inQuote || s.ch >= 0 && s.ch != '\n' && s.ch != ';' && s.ch != '#' {
ch := s.ch
s.next()
switch {
case inQuote && ch == '\\':
s.scanEscape(true)
case !inQuote && ch == '\\':
if s.ch == '\r' {
hasCR = true
s.next()
}
if s.ch != '\n' {
s.scanEscape(true)
} else {
s.next()
}
case ch == '"':
inQuote = !inQuote
case ch == '\r':
hasCR = true
case ch < 0 || inQuote && ch == '\n':
s.error(offs, "string not terminated")
break loop
}
if inQuote || !isWhiteSpace(ch) {
end = s.offset
}
}
lit := s.src[offs:end]
if hasCR {
lit = stripCR(lit)
}
return string(lit)
}
func isWhiteSpace(ch rune) bool {
return ch == ' ' || ch == '\t' || ch == '\r'
}
func (s *Scanner) skipWhitespace() {
for isWhiteSpace(s.ch) {
s.next()
}
}
// Scan scans the next token and returns the token position, the token,
// and its literal string if applicable. The source end is indicated by
// token.EOF.
//
// If the returned token is a literal (token.IDENT, token.STRING) or
// token.COMMENT, the literal string has the corresponding value.
//
// If the returned token is token.ILLEGAL, the literal string is the
// offending character.
//
// In all other cases, Scan returns an empty literal string.
//
// For more tolerant parsing, Scan will return a valid token if
// possible even if a syntax error was encountered. Thus, even
// if the resulting token sequence contains no illegal tokens,
// a client may not assume that no error occurred. Instead it
// must check the scanner's ErrorCount or the number of calls
// of the error handler, if there was one installed.
//
// Scan adds line information to the file added to the file
// set with Init. Token positions are relative to that file
// and thus relative to the file set.
//
func (s *Scanner) Scan() (pos token.Pos, tok token.Token, lit string) {
scanAgain:
s.skipWhitespace()
// current token start
pos = s.file.Pos(s.offset)
// determine token value
switch ch := s.ch; {
case s.nextVal:
lit = s.scanValString()
tok = token.STRING
s.nextVal = false
case isLetter(ch):
lit = s.scanIdentifier()
tok = token.IDENT
default:
s.next() // always make progress
switch ch {
case -1:
tok = token.EOF
case '\n':
tok = token.EOL
case '"':
tok = token.STRING
lit = s.scanString()
case '[':
tok = token.LBRACK
case ']':
tok = token.RBRACK
case ';', '#':
// comment
lit = s.scanComment()
if s.mode&ScanComments == 0 {
// skip comment
goto scanAgain
}
tok = token.COMMENT
case '=':
tok = token.ASSIGN
s.nextVal = true
default:
s.error(s.file.Offset(pos), fmt.Sprintf("illegal character %#U", ch))
tok = token.ILLEGAL
lit = string(ch)
}
}
return
}
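A small sketch of driving the scanner directly (file name and input are illustrative); it mirrors the setup used by read() earlier in this vendor tree.
package main

import (
	"fmt"

	"github.com/src-d/gcfg/scanner"
	"github.com/src-d/gcfg/token"
)

func main() {
	src := []byte("[core] ; comment\nname = crawlab\n")
	fset := token.NewFileSet()
	file := fset.AddFile("example.gcfg", fset.Base(), len(src))

	var s scanner.Scanner
	s.Init(file, src, func(pos token.Position, msg string) {
		fmt.Printf("error at %s: %s\n", pos, msg)
	}, scanner.ScanComments)

	for {
		pos, tok, lit := s.Scan()
		if tok == token.EOF {
			break
		}
		fmt.Printf("%s %v %q\n", fset.Position(pos), tok, lit)
	}
}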

332
backend/vendor/github.com/src-d/gcfg/set.go generated vendored Normal file
View File

@@ -0,0 +1,332 @@
package gcfg
import (
"bytes"
"encoding/gob"
"fmt"
"math/big"
"reflect"
"strings"
"unicode"
"unicode/utf8"
"github.com/src-d/gcfg/types"
"gopkg.in/warnings.v0"
)
type tag struct {
ident string
intMode string
}
func newTag(ts string) tag {
t := tag{}
s := strings.Split(ts, ",")
t.ident = s[0]
for _, tse := range s[1:] {
if strings.HasPrefix(tse, "int=") {
t.intMode = tse[len("int="):]
}
}
return t
}
func fieldFold(v reflect.Value, name string) (reflect.Value, tag) {
var n string
r0, _ := utf8.DecodeRuneInString(name)
if unicode.IsLetter(r0) && !unicode.IsLower(r0) && !unicode.IsUpper(r0) {
n = "X"
}
n += strings.Replace(name, "-", "_", -1)
f, ok := v.Type().FieldByNameFunc(func(fieldName string) bool {
if !v.FieldByName(fieldName).CanSet() {
return false
}
f, _ := v.Type().FieldByName(fieldName)
t := newTag(f.Tag.Get("gcfg"))
if t.ident != "" {
return strings.EqualFold(t.ident, name)
}
return strings.EqualFold(n, fieldName)
})
if !ok {
return reflect.Value{}, tag{}
}
return v.FieldByName(f.Name), newTag(f.Tag.Get("gcfg"))
}
type setter func(destp interface{}, blank bool, val string, t tag) error
var errUnsupportedType = fmt.Errorf("unsupported type")
var errBlankUnsupported = fmt.Errorf("blank value not supported for type")
var setters = []setter{
typeSetter, textUnmarshalerSetter, kindSetter, scanSetter,
}
func textUnmarshalerSetter(d interface{}, blank bool, val string, t tag) error {
dtu, ok := d.(textUnmarshaler)
if !ok {
return errUnsupportedType
}
if blank {
return errBlankUnsupported
}
return dtu.UnmarshalText([]byte(val))
}
func boolSetter(d interface{}, blank bool, val string, t tag) error {
if blank {
reflect.ValueOf(d).Elem().Set(reflect.ValueOf(true))
return nil
}
b, err := types.ParseBool(val)
if err == nil {
reflect.ValueOf(d).Elem().Set(reflect.ValueOf(b))
}
return err
}
func intMode(mode string) types.IntMode {
var m types.IntMode
if strings.ContainsAny(mode, "dD") {
m |= types.Dec
}
if strings.ContainsAny(mode, "hH") {
m |= types.Hex
}
if strings.ContainsAny(mode, "oO") {
m |= types.Oct
}
return m
}
var typeModes = map[reflect.Type]types.IntMode{
reflect.TypeOf(int(0)): types.Dec | types.Hex,
reflect.TypeOf(int8(0)): types.Dec | types.Hex,
reflect.TypeOf(int16(0)): types.Dec | types.Hex,
reflect.TypeOf(int32(0)): types.Dec | types.Hex,
reflect.TypeOf(int64(0)): types.Dec | types.Hex,
reflect.TypeOf(uint(0)): types.Dec | types.Hex,
reflect.TypeOf(uint8(0)): types.Dec | types.Hex,
reflect.TypeOf(uint16(0)): types.Dec | types.Hex,
reflect.TypeOf(uint32(0)): types.Dec | types.Hex,
reflect.TypeOf(uint64(0)): types.Dec | types.Hex,
// use default mode (allow dec/hex/oct) for uintptr type
reflect.TypeOf(big.Int{}): types.Dec | types.Hex,
}
func intModeDefault(t reflect.Type) types.IntMode {
m, ok := typeModes[t]
if !ok {
m = types.Dec | types.Hex | types.Oct
}
return m
}
func intSetter(d interface{}, blank bool, val string, t tag) error {
if blank {
return errBlankUnsupported
}
mode := intMode(t.intMode)
if mode == 0 {
mode = intModeDefault(reflect.TypeOf(d).Elem())
}
return types.ParseInt(d, val, mode)
}
func stringSetter(d interface{}, blank bool, val string, t tag) error {
if blank {
return errBlankUnsupported
}
dsp, ok := d.(*string)
if !ok {
return errUnsupportedType
}
*dsp = val
return nil
}
var kindSetters = map[reflect.Kind]setter{
reflect.String: stringSetter,
reflect.Bool: boolSetter,
reflect.Int: intSetter,
reflect.Int8: intSetter,
reflect.Int16: intSetter,
reflect.Int32: intSetter,
reflect.Int64: intSetter,
reflect.Uint: intSetter,
reflect.Uint8: intSetter,
reflect.Uint16: intSetter,
reflect.Uint32: intSetter,
reflect.Uint64: intSetter,
reflect.Uintptr: intSetter,
}
var typeSetters = map[reflect.Type]setter{
reflect.TypeOf(big.Int{}): intSetter,
}
func typeSetter(d interface{}, blank bool, val string, tt tag) error {
t := reflect.ValueOf(d).Type().Elem()
setter, ok := typeSetters[t]
if !ok {
return errUnsupportedType
}
return setter(d, blank, val, tt)
}
func kindSetter(d interface{}, blank bool, val string, tt tag) error {
k := reflect.ValueOf(d).Type().Elem().Kind()
setter, ok := kindSetters[k]
if !ok {
return errUnsupportedType
}
return setter(d, blank, val, tt)
}
func scanSetter(d interface{}, blank bool, val string, tt tag) error {
if blank {
return errBlankUnsupported
}
return types.ScanFully(d, val, 'v')
}
func newValue(c *warnings.Collector, sect string, vCfg reflect.Value,
vType reflect.Type) (reflect.Value, error) {
//
pv := reflect.New(vType)
dfltName := "default-" + sect
dfltField, _ := fieldFold(vCfg, dfltName)
var err error
if dfltField.IsValid() {
b := bytes.NewBuffer(nil)
ge := gob.NewEncoder(b)
if err = c.Collect(ge.EncodeValue(dfltField)); err != nil {
return pv, err
}
gd := gob.NewDecoder(bytes.NewReader(b.Bytes()))
if err = c.Collect(gd.DecodeValue(pv.Elem())); err != nil {
return pv, err
}
}
return pv, nil
}
func set(c *warnings.Collector, cfg interface{}, sect, sub, name string,
value string, blankValue bool, subsectPass bool) error {
//
vPCfg := reflect.ValueOf(cfg)
if vPCfg.Kind() != reflect.Ptr || vPCfg.Elem().Kind() != reflect.Struct {
panic(fmt.Errorf("config must be a pointer to a struct"))
}
vCfg := vPCfg.Elem()
vSect, _ := fieldFold(vCfg, sect)
if !vSect.IsValid() {
err := extraData{section: sect}
return c.Collect(err)
}
isSubsect := vSect.Kind() == reflect.Map
if subsectPass != isSubsect {
return nil
}
if isSubsect {
vst := vSect.Type()
if vst.Key().Kind() != reflect.String ||
vst.Elem().Kind() != reflect.Ptr ||
vst.Elem().Elem().Kind() != reflect.Struct {
panic(fmt.Errorf("map field for section must have string keys and "+
" pointer-to-struct values: section %q", sect))
}
if vSect.IsNil() {
vSect.Set(reflect.MakeMap(vst))
}
k := reflect.ValueOf(sub)
pv := vSect.MapIndex(k)
if !pv.IsValid() {
vType := vSect.Type().Elem().Elem()
var err error
if pv, err = newValue(c, sect, vCfg, vType); err != nil {
return err
}
vSect.SetMapIndex(k, pv)
}
vSect = pv.Elem()
} else if vSect.Kind() != reflect.Struct {
panic(fmt.Errorf("field for section must be a map or a struct: "+
"section %q", sect))
} else if sub != "" {
err := extraData{section: sect, subsection: &sub}
return c.Collect(err)
}
// Empty name is a special value, meaning that only the
// section/subsection object is to be created, with no values set.
if name == "" {
return nil
}
vVar, t := fieldFold(vSect, name)
if !vVar.IsValid() {
var err error
if isSubsect {
err = extraData{section: sect, subsection: &sub, variable: &name}
} else {
err = extraData{section: sect, variable: &name}
}
return c.Collect(err)
}
// vVal is either single-valued var, or newly allocated value within multi-valued var
var vVal reflect.Value
// multi-value if unnamed slice type
isMulti := vVar.Type().Name() == "" && vVar.Kind() == reflect.Slice ||
vVar.Type().Name() == "" && vVar.Kind() == reflect.Ptr && vVar.Type().Elem().Name() == "" && vVar.Type().Elem().Kind() == reflect.Slice
if isMulti && vVar.Kind() == reflect.Ptr {
if vVar.IsNil() {
vVar.Set(reflect.New(vVar.Type().Elem()))
}
vVar = vVar.Elem()
}
if isMulti && blankValue {
vVar.Set(reflect.Zero(vVar.Type()))
return nil
}
if isMulti {
vVal = reflect.New(vVar.Type().Elem()).Elem()
} else {
vVal = vVar
}
isDeref := vVal.Type().Name() == "" && vVal.Type().Kind() == reflect.Ptr
isNew := isDeref && vVal.IsNil()
// vAddr is address of value to set (dereferenced & allocated as needed)
var vAddr reflect.Value
switch {
case isNew:
vAddr = reflect.New(vVal.Type().Elem())
case isDeref && !isNew:
vAddr = vVal
default:
vAddr = vVal.Addr()
}
vAddrI := vAddr.Interface()
err, ok := error(nil), false
for _, s := range setters {
err = s(vAddrI, blankValue, value, t)
if err == nil {
ok = true
break
}
if err != errUnsupportedType {
return err
}
}
if !ok {
// in case all setters returned errUnsupportedType
return err
}
if isNew { // set reference if it was dereferenced and newly allocated
vVal.Set(vAddr)
}
if isMulti { // append if multi-valued
vVar.Set(reflect.Append(vVar, vVal))
}
return nil
}
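To illustrate the "gcfg" struct tag handled by newTag and fieldFold above (field and variable names are hypothetical):
package main

import "github.com/src-d/gcfg"

type ServerConfig struct {
	Server struct {
		// Without a tag the field would have to be named Listen_Port (hyphens
		// map to underscores); the tag's first element overrides name matching.
		ListenPort int `gcfg:"listen-port"`
		// Plain case-insensitive match: "timeout" in the file sets Timeout.
		Timeout int
	}
}

func loadServerConfig(raw string) (ServerConfig, error) {
	var cfg ServerConfig
	err := gcfg.ReadStringInto(&cfg, raw)
	return cfg, err
}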

435
backend/vendor/github.com/src-d/gcfg/token/position.go generated vendored Normal file
View File

@@ -0,0 +1,435 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// TODO(gri) consider making this a separate package outside the go directory.
package token
import (
"fmt"
"sort"
"sync"
)
// -----------------------------------------------------------------------------
// Positions
// Position describes an arbitrary source position
// including the file, line, and column location.
// A Position is valid if the line number is > 0.
//
type Position struct {
Filename string // filename, if any
Offset int // offset, starting at 0
Line int // line number, starting at 1
Column int // column number, starting at 1 (character count)
}
// IsValid returns true if the position is valid.
func (pos *Position) IsValid() bool { return pos.Line > 0 }
// String returns a string in one of several forms:
//
// file:line:column valid position with file name
// line:column valid position without file name
// file invalid position with file name
// - invalid position without file name
//
func (pos Position) String() string {
s := pos.Filename
if pos.IsValid() {
if s != "" {
s += ":"
}
s += fmt.Sprintf("%d:%d", pos.Line, pos.Column)
}
if s == "" {
s = "-"
}
return s
}
// Pos is a compact encoding of a source position within a file set.
// It can be converted into a Position for a more convenient, but much
// larger, representation.
//
// The Pos value for a given file is a number in the range [base, base+size],
// where base and size are specified when adding the file to the file set via
// AddFile.
//
// To create the Pos value for a specific source offset, first add
// the respective file to the current file set (via FileSet.AddFile)
// and then call File.Pos(offset) for that file. Given a Pos value p
// for a specific file set fset, the corresponding Position value is
// obtained by calling fset.Position(p).
//
// Pos values can be compared directly with the usual comparison operators:
// If two Pos values p and q are in the same file, comparing p and q is
// equivalent to comparing the respective source file offsets. If p and q
// are in different files, p < q is true if the file implied by p was added
// to the respective file set before the file implied by q.
//
type Pos int
// The zero value for Pos is NoPos; there is no file and line information
// associated with it, and NoPos.IsValid() is false. NoPos is always
// smaller than any other Pos value. The corresponding Position value
// for NoPos is the zero value for Position.
//
const NoPos Pos = 0
// IsValid returns true if the position is valid.
func (p Pos) IsValid() bool {
return p != NoPos
}
// -----------------------------------------------------------------------------
// File
// A File is a handle for a file belonging to a FileSet.
// A File has a name, size, and line offset table.
//
type File struct {
set *FileSet
name string // file name as provided to AddFile
base int // Pos value range for this file is [base...base+size]
size int // file size as provided to AddFile
// lines and infos are protected by set.mutex
lines []int
infos []lineInfo
}
// Name returns the file name of file f as registered with AddFile.
func (f *File) Name() string {
return f.name
}
// Base returns the base offset of file f as registered with AddFile.
func (f *File) Base() int {
return f.base
}
// Size returns the size of file f as registered with AddFile.
func (f *File) Size() int {
return f.size
}
// LineCount returns the number of lines in file f.
func (f *File) LineCount() int {
f.set.mutex.RLock()
n := len(f.lines)
f.set.mutex.RUnlock()
return n
}
// AddLine adds the line offset for a new line.
// The line offset must be larger than the offset for the previous line
// and smaller than the file size; otherwise the line offset is ignored.
//
func (f *File) AddLine(offset int) {
f.set.mutex.Lock()
if i := len(f.lines); (i == 0 || f.lines[i-1] < offset) && offset < f.size {
f.lines = append(f.lines, offset)
}
f.set.mutex.Unlock()
}
// SetLines sets the line offsets for a file and returns true if successful.
// The line offsets are the offsets of the first character of each line;
// for instance for the content "ab\nc\n" the line offsets are {0, 3}.
// An empty file has an empty line offset table.
// Each line offset must be larger than the offset for the previous line
// and smaller than the file size; otherwise SetLines fails and returns
// false.
//
func (f *File) SetLines(lines []int) bool {
// verify validity of lines table
size := f.size
for i, offset := range lines {
if i > 0 && offset <= lines[i-1] || size <= offset {
return false
}
}
// set lines table
f.set.mutex.Lock()
f.lines = lines
f.set.mutex.Unlock()
return true
}
// SetLinesForContent sets the line offsets for the given file content.
func (f *File) SetLinesForContent(content []byte) {
var lines []int
line := 0
for offset, b := range content {
if line >= 0 {
lines = append(lines, line)
}
line = -1
if b == '\n' {
line = offset + 1
}
}
// set lines table
f.set.mutex.Lock()
f.lines = lines
f.set.mutex.Unlock()
}
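// Illustrative sketch (not part of the vendored source): for the content
// "ab\nc\n", SetLinesForContent computes the offsets {0, 3}, matching the
// SetLines example above. f must have been obtained from FileSet.AddFile,
// since the line table is guarded by the set's mutex.
func exampleSetLinesForContent(f *File) int {
	f.SetLinesForContent([]byte("ab\nc\n"))
	return f.LineCount() // 2: line offsets 0 ("ab") and 3 ("c")
}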
// A lineInfo object describes alternative file and line number
// information (such as provided via a //line comment in a .go
// file) for a given file offset.
type lineInfo struct {
// fields are exported to make them accessible to gob
Offset int
Filename string
Line int
}
// AddLineInfo adds alternative file and line number information for
// a given file offset. The offset must be larger than the offset for
// the previously added alternative line info and smaller than the
// file size; otherwise the information is ignored.
//
// AddLineInfo is typically used to register alternative position
// information for //line filename:line comments in source files.
//
func (f *File) AddLineInfo(offset int, filename string, line int) {
f.set.mutex.Lock()
if i := len(f.infos); i == 0 || f.infos[i-1].Offset < offset && offset < f.size {
f.infos = append(f.infos, lineInfo{offset, filename, line})
}
f.set.mutex.Unlock()
}
// Pos returns the Pos value for the given file offset;
// the offset must be <= f.Size().
// f.Pos(f.Offset(p)) == p.
//
func (f *File) Pos(offset int) Pos {
if offset > f.size {
panic("illegal file offset")
}
return Pos(f.base + offset)
}
// Offset returns the offset for the given file position p;
// p must be a valid Pos value in that file.
// f.Offset(f.Pos(offset)) == offset.
//
func (f *File) Offset(p Pos) int {
if int(p) < f.base || int(p) > f.base+f.size {
panic("illegal Pos value")
}
return int(p) - f.base
}
// Line returns the line number for the given file position p;
// p must be a Pos value in that file or NoPos.
//
func (f *File) Line(p Pos) int {
// TODO(gri) this can be implemented much more efficiently
return f.Position(p).Line
}
func searchLineInfos(a []lineInfo, x int) int {
return sort.Search(len(a), func(i int) bool { return a[i].Offset > x }) - 1
}
// info returns the file name, line, and column number for a file offset.
func (f *File) info(offset int) (filename string, line, column int) {
filename = f.name
if i := searchInts(f.lines, offset); i >= 0 {
line, column = i+1, offset-f.lines[i]+1
}
if len(f.infos) > 0 {
// almost no files have extra line infos
if i := searchLineInfos(f.infos, offset); i >= 0 {
alt := &f.infos[i]
filename = alt.Filename
if i := searchInts(f.lines, alt.Offset); i >= 0 {
line += alt.Line - i - 1
}
}
}
return
}
func (f *File) position(p Pos) (pos Position) {
offset := int(p) - f.base
pos.Offset = offset
pos.Filename, pos.Line, pos.Column = f.info(offset)
return
}
// Position returns the Position value for the given file position p;
// p must be a Pos value in that file or NoPos.
//
func (f *File) Position(p Pos) (pos Position) {
if p != NoPos {
if int(p) < f.base || int(p) > f.base+f.size {
panic("illegal Pos value")
}
pos = f.position(p)
}
return
}
// -----------------------------------------------------------------------------
// FileSet
// A FileSet represents a set of source files.
// Methods of file sets are synchronized; multiple goroutines
// may invoke them concurrently.
//
type FileSet struct {
mutex sync.RWMutex // protects the file set
base int // base offset for the next file
files []*File // list of files in the order added to the set
last *File // cache of last file looked up
}
// NewFileSet creates a new file set.
func NewFileSet() *FileSet {
s := new(FileSet)
s.base = 1 // 0 == NoPos
return s
}
// Base returns the minimum base offset that must be provided to
// AddFile when adding the next file.
//
func (s *FileSet) Base() int {
s.mutex.RLock()
b := s.base
s.mutex.RUnlock()
return b
}
// AddFile adds a new file with a given filename, base offset, and file size
// to the file set s and returns the file. Multiple files may have the same
// name. The base offset must not be smaller than the FileSet's Base(), and
// size must not be negative.
//
// Adding the file will set the file set's Base() value to base + size + 1
// as the minimum base value for the next file. The following relationship
// exists between a Pos value p for a given file offset offs:
//
// int(p) = base + offs
//
// with offs in the range [0, size] and thus p in the range [base, base+size].
// For convenience, File.Pos may be used to create file-specific position
// values from a file offset.
//
func (s *FileSet) AddFile(filename string, base, size int) *File {
s.mutex.Lock()
defer s.mutex.Unlock()
if base < s.base || size < 0 {
panic("illegal base or size")
}
// base >= s.base && size >= 0
f := &File{s, filename, base, size, []int{0}, nil}
base += size + 1 // +1 because EOF also has a position
if base < 0 {
panic("token.Pos offset overflow (> 2G of source code in file set)")
}
// add the file to the file set
s.base = base
s.files = append(s.files, f)
s.last = f
return f
}
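// Illustrative sketch (not part of the vendored source, hypothetical file name):
// register a file, turn a byte offset into a Pos, and resolve it back into a
// human-readable Position.
func exampleAddFile() string {
	fset := NewFileSet()
	src := "key = value\n"
	f := fset.AddFile("example.gcfg", fset.Base(), len(src))
	f.SetLinesForContent([]byte(src))
	p := f.Pos(6)                    // offset of "value" within src
	return fset.Position(p).String() // "example.gcfg:1:7"
}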
// Iterate calls f for the files in the file set in the order they were added
// until f returns false.
//
func (s *FileSet) Iterate(f func(*File) bool) {
for i := 0; ; i++ {
var file *File
s.mutex.RLock()
if i < len(s.files) {
file = s.files[i]
}
s.mutex.RUnlock()
if file == nil || !f(file) {
break
}
}
}
func searchFiles(a []*File, x int) int {
return sort.Search(len(a), func(i int) bool { return a[i].base > x }) - 1
}
func (s *FileSet) file(p Pos) *File {
// common case: p is in last file
if f := s.last; f != nil && f.base <= int(p) && int(p) <= f.base+f.size {
return f
}
// p is not in last file - search all files
if i := searchFiles(s.files, int(p)); i >= 0 {
f := s.files[i]
// f.base <= int(p) by definition of searchFiles
if int(p) <= f.base+f.size {
s.last = f
return f
}
}
return nil
}
// File returns the file that contains the position p.
// If no such file is found (for instance for p == NoPos),
// the result is nil.
//
func (s *FileSet) File(p Pos) (f *File) {
if p != NoPos {
s.mutex.RLock()
f = s.file(p)
s.mutex.RUnlock()
}
return
}
// Position converts a Pos in the fileset into a general Position.
func (s *FileSet) Position(p Pos) (pos Position) {
if p != NoPos {
s.mutex.RLock()
if f := s.file(p); f != nil {
pos = f.position(p)
}
s.mutex.RUnlock()
}
return
}
// -----------------------------------------------------------------------------
// Helper functions
func searchInts(a []int, x int) int {
// This function body is a manually inlined version of:
//
// return sort.Search(len(a), func(i int) bool { return a[i] > x }) - 1
//
// With better compiler optimizations, this may not be needed in the
// future, but at the moment this change improves the go/printer
// benchmark performance by ~30%. This has a direct impact on the
// speed of gofmt and thus seems worthwhile (2011-04-29).
// TODO(gri): Remove this when compilers have caught up.
i, j := 0, len(a)
for i < j {
h := i + (j-i)/2 // avoid overflow when computing h
// i ≤ h < j
if a[h] <= x {
i = h + 1
} else {
j = h
}
}
return i - 1
}
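// Illustrative sketch (not part of the vendored source): searchInts returns the
// index of the largest element <= x, or -1 when every element is larger.
func exampleSearchInts() (int, int) {
	lines := []int{0, 3, 5}
	return searchInts(lines, 4), searchInts(lines, -1) // 1, -1
}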

View File

@@ -0,0 +1,56 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package token
type serializedFile struct {
// fields correspond 1:1 to fields with same (lower-case) name in File
Name string
Base int
Size int
Lines []int
Infos []lineInfo
}
type serializedFileSet struct {
Base int
Files []serializedFile
}
// Read calls decode to deserialize a file set into s; s must not be nil.
func (s *FileSet) Read(decode func(interface{}) error) error {
var ss serializedFileSet
if err := decode(&ss); err != nil {
return err
}
s.mutex.Lock()
s.base = ss.Base
files := make([]*File, len(ss.Files))
for i := 0; i < len(ss.Files); i++ {
f := &ss.Files[i]
files[i] = &File{s, f.Name, f.Base, f.Size, f.Lines, f.Infos}
}
s.files = files
s.last = nil
s.mutex.Unlock()
return nil
}
// Write calls encode to serialize the file set s.
func (s *FileSet) Write(encode func(interface{}) error) error {
var ss serializedFileSet
s.mutex.Lock()
ss.Base = s.base
files := make([]serializedFile, len(s.files))
for i, f := range s.files {
files[i] = serializedFile{f.name, f.base, f.size, f.lines, f.infos}
}
ss.Files = files
s.mutex.Unlock()
return encode(ss)
}
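// Illustrative sketch (not part of the vendored source): a round trip through
// Write and Read. The encode and decode arguments can be any functions with the
// required signature; gob.NewEncoder(w).Encode and gob.NewDecoder(r).Decode are
// one assumed pairing, not something this package mandates.
func exampleFileSetRoundTrip(src *FileSet, encode, decode func(interface{}) error) (*FileSet, error) {
	if err := src.Write(encode); err != nil {
		return nil, err
	}
	dst := NewFileSet()
	if err := dst.Read(decode); err != nil {
		return nil, err
	}
	return dst, nil
}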

83
backend/vendor/github.com/src-d/gcfg/token/token.go generated vendored Normal file
View File

@@ -0,0 +1,83 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package token defines constants representing the lexical tokens of the gcfg
// configuration syntax and basic operations on tokens (printing, predicates).
//
// Note that the API for the token package may change to accommodate new
// features or implementation changes in gcfg.
//
package token
import "strconv"
// Token is the set of lexical tokens of the gcfg configuration syntax.
type Token int
// The list of tokens.
const (
// Special tokens
ILLEGAL Token = iota
EOF
COMMENT
literal_beg
// Identifiers and basic type literals
// (these tokens stand for classes of literals)
IDENT // section-name, variable-name
STRING // "subsection-name", variable value
literal_end
operator_beg
// Operators and delimiters
ASSIGN // =
LBRACK // [
RBRACK // ]
EOL // \n
operator_end
)
var tokens = [...]string{
ILLEGAL: "ILLEGAL",
EOF: "EOF",
COMMENT: "COMMENT",
IDENT: "IDENT",
STRING: "STRING",
ASSIGN: "=",
LBRACK: "[",
RBRACK: "]",
EOL: "\n",
}
// String returns the string corresponding to the token tok.
// For operators and delimiters, the string is the actual token character
// sequence (e.g., for the token ASSIGN, the string is "="). For all other
// tokens the string corresponds to the token constant name (e.g. for the
// token IDENT, the string is "IDENT").
//
func (tok Token) String() string {
s := ""
if 0 <= tok && tok < Token(len(tokens)) {
s = tokens[tok]
}
if s == "" {
s = "token(" + strconv.Itoa(int(tok)) + ")"
}
return s
}
// Predicates
// IsLiteral returns true for tokens corresponding to identifiers
// and basic type literals; it returns false otherwise.
//
func (tok Token) IsLiteral() bool { return literal_beg < tok && tok < literal_end }
// IsOperator returns true for tokens corresponding to operators and
// delimiters; it returns false otherwise.
//
func (tok Token) IsOperator() bool { return operator_beg < tok && tok < operator_end }
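// Illustrative sketch (not part of the vendored source): operators render as
// their character sequence, other tokens as their constant name, and values
// outside the known range fall back to "token(N)". For the predicates above,
// IDENT.IsLiteral() and ASSIGN.IsOperator() are both true.
func exampleTokenStrings() []string {
	return []string{
		ASSIGN.String(),    // "="
		IDENT.String(),     // "IDENT"
		Token(42).String(), // "token(42)"
	}
}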

23
backend/vendor/github.com/src-d/gcfg/types/bool.go generated vendored Normal file
View File

@@ -0,0 +1,23 @@
package types
// BoolValues defines the name and value mappings for ParseBool.
var BoolValues = map[string]interface{}{
"true": true, "yes": true, "on": true, "1": true,
"false": false, "no": false, "off": false, "0": false,
}
var boolParser = func() *EnumParser {
ep := &EnumParser{}
ep.AddVals(BoolValues)
return ep
}()
// ParseBool parses bool values according to the definitions in BoolValues.
// Parsing is case-insensitive.
func ParseBool(s string) (bool, error) {
v, err := boolParser.Parse(s)
if err != nil {
return false, err
}
return v.(bool), nil
}
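// Illustrative sketch (not part of the vendored source): parsing is
// case-insensitive, and values outside BoolValues are rejected.
func exampleParseBool() {
	v, _ := ParseBool("On")      // true: "on" is listed in BoolValues
	_, err := ParseBool("maybe") // error: "maybe" is not a defined bool value
	_, _ = v, err
}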

4
backend/vendor/github.com/src-d/gcfg/types/doc.go generated vendored Normal file
View File

@@ -0,0 +1,4 @@
// Package types defines helpers for type conversions.
//
// The API for this package is not finalized yet.
package types

44
backend/vendor/github.com/src-d/gcfg/types/enum.go generated vendored Normal file
View File

@@ -0,0 +1,44 @@
package types
import (
"fmt"
"reflect"
"strings"
)
// EnumParser parses "enum" values, i.e. it maps a predefined set of strings
// to predefined values.
type EnumParser struct {
Type string // type name; if not set, use type of first value added
CaseMatch bool // if true, matching of strings is case-sensitive
// PrefixMatch bool
vals map[string]interface{}
}
// AddVals adds strings and values to an EnumParser.
func (ep *EnumParser) AddVals(vals map[string]interface{}) {
if ep.vals == nil {
ep.vals = make(map[string]interface{})
}
for k, v := range vals {
if ep.Type == "" {
ep.Type = reflect.TypeOf(v).Name()
}
if !ep.CaseMatch {
k = strings.ToLower(k)
}
ep.vals[k] = v
}
}
// Parse parses the string and returns the value or an error.
func (ep EnumParser) Parse(s string) (interface{}, error) {
if !ep.CaseMatch {
s = strings.ToLower(s)
}
v, ok := ep.vals[s]
if !ok {
return false, fmt.Errorf("failed to parse %s %#q", ep.Type, s)
}
return v, nil
}
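// Illustrative sketch (not part of the vendored source, hypothetical enum):
// building a parser for a small log-level mapping.
func exampleLevelParser() (interface{}, error) {
	ep := &EnumParser{Type: "log level"}
	ep.AddVals(map[string]interface{}{
		"debug": 0,
		"info":  1,
		"error": 2,
	})
	return ep.Parse("INFO") // 1, nil (matching ignores case because CaseMatch is false)
}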

86
backend/vendor/github.com/src-d/gcfg/types/int.go generated vendored Normal file
View File

@@ -0,0 +1,86 @@
package types
import (
"fmt"
"strings"
)
// An IntMode is a mode for parsing integer values, representing a set of
// accepted bases.
type IntMode uint8
// IntMode values for ParseInt; they can be combined using bitwise OR.
const (
Dec IntMode = 1 << iota
Hex
Oct
)
// String returns a string representation of IntMode; e.g. `IntMode(Dec|Hex)`.
func (m IntMode) String() string {
var modes []string
if m&Dec != 0 {
modes = append(modes, "Dec")
}
if m&Hex != 0 {
modes = append(modes, "Hex")
}
if m&Oct != 0 {
modes = append(modes, "Oct")
}
return "IntMode(" + strings.Join(modes, "|") + ")"
}
var errIntAmbig = fmt.Errorf("ambiguous integer value; must include '0' prefix")
func prefix0(val string) bool {
return strings.HasPrefix(val, "0") || strings.HasPrefix(val, "-0")
}
func prefix0x(val string) bool {
return strings.HasPrefix(val, "0x") || strings.HasPrefix(val, "-0x")
}
// ParseInt parses val using mode into intptr, which must be a pointer to an
// integer kind type. Non-decimal values require the prefix `0` or `0x` when the
// mode permits ambiguity of base; otherwise the prefix can be omitted.
func ParseInt(intptr interface{}, val string, mode IntMode) error {
val = strings.TrimSpace(val)
verb := byte(0)
switch mode {
case Dec:
verb = 'd'
case Dec + Hex:
if prefix0x(val) {
verb = 'v'
} else {
verb = 'd'
}
case Dec + Oct:
if prefix0(val) && !prefix0x(val) {
verb = 'v'
} else {
verb = 'd'
}
case Dec + Hex + Oct:
verb = 'v'
case Hex:
if prefix0x(val) {
verb = 'v'
} else {
verb = 'x'
}
case Oct:
verb = 'o'
case Hex + Oct:
if prefix0(val) {
verb = 'v'
} else {
return errIntAmbig
}
}
if verb == 0 {
panic("unsupported mode")
}
return ScanFully(intptr, val, verb)
}
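// Illustrative sketch (not part of the vendored source): with Dec|Hex, plain
// digits are read as decimal and the "0x" prefix selects hexadecimal.
func exampleParseInt() (int, int, error) {
	var dec, hex int
	if err := ParseInt(&dec, "42", Dec|Hex); err != nil { // dec == 42
		return 0, 0, err
	}
	if err := ParseInt(&hex, "0x2a", Dec|Hex); err != nil { // hex == 42
		return 0, 0, err
	}
	return dec, hex, nil
}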

23
backend/vendor/github.com/src-d/gcfg/types/scan.go generated vendored Normal file
View File

@@ -0,0 +1,23 @@
package types
import (
"fmt"
"io"
"reflect"
)
// ScanFully uses fmt.Sscanf with verb to fully scan val into ptr.
func ScanFully(ptr interface{}, val string, verb byte) error {
t := reflect.ValueOf(ptr).Elem().Type()
// attempt to read extra bytes to make sure the value is consumed
var b []byte
n, err := fmt.Sscanf(val, "%"+string(verb)+"%s", ptr, &b)
switch {
case n < 1 || n == 1 && err != io.EOF:
return fmt.Errorf("failed to parse %q as %v: %v", val, t, err)
case n > 1:
return fmt.Errorf("failed to parse %q as %v: extra characters %q", val, t, string(b))
}
// n == 1 && err == io.EOF
return nil
}
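// Illustrative sketch (not part of the vendored source): ScanFully succeeds only
// when the verb consumes the whole value.
func exampleScanFully() error {
	var n int
	if err := ScanFully(&n, "42", 'd'); err != nil { // n == 42
		return err
	}
	return ScanFully(&n, "42abc", 'd') // error: extra characters "abc"
}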

Some files were not shown because too many files have changed in this diff