(Jin Qing's Column 2021.2)
Create a StatefulSet and a Headless Service as follows, in test.yaml:
```
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: aaaa
  - port: 20080
    name: bbbb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80
        - containerPort: 20080
```
Deploy it:
```
kubectl apply -f test.yaml
```
Then start a shell:
```
kubectl run mygolang -it --image golang -- bash
If you don't see a command prompt, try pressing enter.
root@mygolang:/go#
```
First install vim:
```
apt install vim
```
Then write a small Go test program that queries DNS for the address and port of the bbbb service:
```
root@mygolang:/jinqing# cat main.go
package main

import (
	"fmt"
	"github.com/davecgh/go-spew/spew"
	"net"
)

func main() {
	cname, addresses, err := net.LookupSRV("bbbb", "tcp", "headless-svc.default.svc.cluster.local")
	if err != nil {
		fmt.Printf("failed: %s\n", err)
	}
	fmt.Printf("cname: %s\n", cname)
	spew.Dump(addresses)
}
root@mygolang:/jinqing#
```
The result is:
```
cname: _bbbb._tcp.headless-svc.default.svc.cluster.local.
([]*net.SRV) (len=3 cap=4) {
 (*net.SRV)(0xc00000e220)({
  Target: (string) (len=58) "statefulset-test-0.headless-svc.default.svc.cluster.local.",
  Port: (uint16) 20080,
  Priority: (uint16) 0,
  Weight: (uint16) 33
 }),
 (*net.SRV)(0xc00000e1e0)({
  Target: (string) (len=58) "statefulset-test-2.headless-svc.default.svc.cluster.local.",
  Port: (uint16) 20080,
  Priority: (uint16) 0,
  Weight: (uint16) 33
 }),
 (*net.SRV)(0xc00000e200)({
  Target: (string) (len=58) "statefulset-test-1.headless-svc.default.svc.cluster.local.",
  Port: (uint16) 20080,
  Priority: (uint16) 0,
  Weight: (uint16) 33
 })
}
```
Running it several times shows that the order of the results is fixed; it is not randomized by the weights.

In the Service YAML definition every port must be named, otherwise the SRV query cannot be made.
```
cname, addresses, err := net.LookupSRV("bbbb", "tcp", "headless-svc.default.svc.cluster.local")
```
Using the default domain suffix, it can be shortened to
```
cname, addresses, err := net.LookupSRV("bbbb", "tcp", "headless-svc")
```
The output is the same.
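As a follow-up sketch (not in the original post), each SRV target is a per-pod DNS name, so it can be resolved further to a pod IP with net.LookupHost; the service and namespace below are the ones from the example above:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Query the SRV records of the named port "bbbb" of the headless service.
	_, addrs, err := net.LookupSRV("bbbb", "tcp", "headless-svc.default.svc.cluster.local")
	if err != nil {
		fmt.Printf("SRV lookup failed: %s\n", err)
		return
	}
	for _, srv := range addrs {
		// Each target is a per-pod host name such as
		// statefulset-test-0.headless-svc.default.svc.cluster.local.
		ips, err := net.LookupHost(srv.Target)
		if err != nil {
			fmt.Printf("lookup %s failed: %s\n", srv.Target, err)
			continue
		}
		fmt.Printf("%s -> %v port %d\n", srv.Target, ips, srv.Port)
	}
}
```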
(Jin Qing's Column 2020.9)
During a stress test a few requests always timed out, even with the timeout set very high, while successful requests had latencies far below the timeout.
The first debugging direction was to check whether messages were being lost in the network library.
Tracing all messages showed that the timed-out requests had apparently been processed and answered normally.
So the next step was the handling after a response message is received, which finally led to this code:
```go
func (c *Client) onRpcRet(cbIndex uint32, ...) {
	ii, ok := c.callbacks.Load(cbIndex)
	if !ok {
		// logger.Errorf("onRpcRet can not find cbIndex %d", cbIndex) // probably already removed by the timeout
		return
	}
```
Enabling that log line showed that the timed-out requests did hit this error. Normally the callback can be missing here only because the timeout fired first and deleted the callback, and the response arrived afterwards. What actually happened was the opposite: the response arrived first and found no callback, and some time later the request was judged to have timed out with no reply.

Swapping the order in the following code fixes it:
```go
if err := c.Session.Send(msg); err != nil {
	...
	return
}
c.callbacks.Store(cbIndex, ...)
```
changed to
```go
// The callback must be stored before sending, because the response may come back very quickly.
c.callbacks.Store(cbIndex, ...)
if err := c.Session.Send(msg); err...
```
During the stress test the load-generating machine's CPU ran at full load, so several milliseconds could pass between Send() and Store(), long enough for the RPC request to be processed and answered while the callback had not been stored yet.

Send() before Store() makes the code slightly simpler, because a failed Send() can just return.
With Store() before Send(), a failed Send() needs a matching Delete().
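A minimal self-contained sketch of the corrected order, assuming a sync.Map of callbacks and a Send method similar to the snippets above (the surrounding types are assumptions, not the original code):
```go
package rpcclient

import (
	"errors"
	"sync"
)

// Session is a stand-in for the post's network session type (assumed).
type Session interface {
	Send(msg []byte) error
}

// Client holds pending callbacks keyed by request index, as in the post.
type Client struct {
	Session   Session
	callbacks sync.Map // cbIndex -> func(resp []byte)
}

// registerAndSend stores the callback before sending, so a response that
// arrives immediately can still find it. On send failure the callback is
// removed again, which is the extra step the Store-then-Send order needs.
func (c *Client) registerAndSend(cbIndex uint32, cb func(resp []byte), msg []byte) error {
	if cb == nil {
		return errors.New("nil callback")
	}
	c.callbacks.Store(cbIndex, cb)
	if err := c.Session.Send(msg); err != nil {
		c.callbacks.Delete(cbIndex) // undo the registration on failure
		return err
	}
	return nil
}
```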
(Jin Qing's Column 2020.9)
Go's math package already defines the following constants:
Constants
```
const (
E = 2.71828182845904523536028747135266249775724709369995957496696763 // https://oeis.org/A001113
Pi = 3.14159265358979323846264338327950288419716939937510582097494459 // https://oeis.org/A000796
Phi = 1.61803398874989484820458683436563811772030917980576286213544862 // https://oeis.org/A001622
Sqrt2 = 1.41421356237309504880168872420969807856967187537694807317667974 // https://oeis.org/A002193
SqrtE = 1.64872127070012814684865078781416357165377610071014801157507931 // https://oeis.org/A019774
SqrtPi = 1.77245385090551602729816748334114518279754945612238712821380779 // https://oeis.org/A002161
SqrtPhi = 1.27201964951406896425242246173749149171560804184009624861664038 // https://oeis.org/A139339
Ln2 = 0.693147180559945309417232121458176568075500134360255254120680009 // https://oeis.org/A002162
Log2E = 1 / Ln2
Ln10 = 2.30258509299404568401799145468436420760110148862877297603332790 // https://oeis.org/A002392
Log10E = 1 / Ln10
)
```
Mathematical constants.
```
const (
MaxFloat32 = 3.40282346638528859811704183484516925440e+38 // 2**127 * (2**24 - 1) / 2**23
SmallestNonzeroFloat32 = 1.401298464324817070923729583289916131280e-45 // 1 / 2**(127 - 1 + 23)
MaxFloat64 = 1.797693134862315708145274237317043567981e+308 // 2**1023 * (2**53 - 1) / 2**52
SmallestNonzeroFloat64 = 4.940656458412465441765687928682213723651e-324 // 1 / 2**(1023 - 1 + 52)
)
```
Floating-point limit values. Max is the largest finite value representable by the type. SmallestNonzero is the smallest positive, non-zero value representable by the type.
```
const (
MaxInt8 = 1<<7 - 1
MinInt8 = -1 << 7
MaxInt16 = 1<<15 - 1
MinInt16 = -1 << 15
MaxInt32 = 1<<31 - 1
MinInt32 = -1 << 31
MaxInt64 = 1<<63 - 1
MinInt64 = -1 << 63
MaxUint8 = 1<<8 - 1
MaxUint16 = 1<<16 - 1
MaxUint32 = 1<<32 - 1
MaxUint64 = 1<<64 - 1
)
```
Integer limit values.
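As a small illustration (not part of the original list), these limits are handy for saturating conversions, e.g. clamping an int64 into the int32 range:
```go
package main

import (
	"fmt"
	"math"
)

// clampToInt32 saturates v into the representable range of int32.
func clampToInt32(v int64) int32 {
	if v > math.MaxInt32 {
		return math.MaxInt32
	}
	if v < math.MinInt32 {
		return math.MinInt32
	}
	return int32(v)
}

func main() {
	fmt.Println(clampToInt32(1 << 40))  // 2147483647
	fmt.Println(clampToInt32(-1 << 40)) // -2147483648
	fmt.Println(math.MaxFloat64)        // 1.7976931348623157e+308
}
```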
(Jin Qing's Column 2020.7)
In Go a parameter type can be declared as interface{}, which accepts a value of any type, much like void* in C++.
But this catch-all type should be used sparingly; prefer a concrete type, or a specific interface type.
The main reason is to let compile-time type checking catch coding mistakes and reduce runtime errors.

For example, go-mongo-driver has a parameter for creating indexes:
```
type IndexModel struct {
	// A document describing which keys should be used for the index. It cannot be nil.
	// This must be an order-preserving type such as bson.D. Map types such as bson.M are not valid.
	Keys interface{}
	...
}
```
Keys can be any type, e.g. 1234 or "abcd"; a type that doesn't meet the index requirements simply makes the call fail.
With bson.M, however, the index is created successfully, but the key order may be wrong.
The comment already says: don't use bson.M, use bson.D.
The correct Keys looks like this, a compound index on (field1, field2), where 1 means ascending and -1 descending:
```
indexModel.Keys = bson.D{{"field1", 1}, {"field2", 1}}
```
If bson.M is used, which is actually a map:
```
indexModel.Keys = bson.M{"field1": 1, "field2": 1}
```
Because the order of map entries is undefined, the index that gets created may be (field1, field2) or (field2, field1).

The idea behind allowing interface{} here is to accept any type, encode it as BSON and send it to the MongoDB server without any type checking.
This flexibility makes mistakes very easy, and how to use the field correctly is not obvious; a comment alone helps little.
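A minimal sketch of creating the compound index with an order-preserving bson.D, assuming the official mongo-go-driver; the collection and index name are chosen for illustration:
```go
import (
	"context"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// createCompoundIndex builds (field1 asc, field2 asc) with bson.D,
// which preserves key order, unlike bson.M.
func createCompoundIndex(ctx context.Context, coll *mongo.Collection) error {
	model := mongo.IndexModel{
		Keys:    bson.D{{Key: "field1", Value: 1}, {Key: "field2", Value: 1}},
		Options: options.Index().SetName("field1_1_field2_1"),
	}
	_, err := coll.Indexes().CreateOne(ctx, model)
	return err
}
```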
(Jin Qing's Column 2020.4)
gRPC load-balances each request. The load-balancing approaches are:
* Proxy mode
* Client-side implementation
* External load balancing

Reference: gRPC LB, https://blog.csdn.net/xiaojia1100/article/details/78842295
The main load-balancing mechanism in gRPC is external load balancing.

gRPC defines the interface of the external load-balancer service: https://github.com/grpc/grpc/tree/master/src/proto/grpc/lb/v1
* load_balancer.proto: the client queries the LB server for the backend list
* load_reporter.proto: the LB server queries backend servers for their load
https://github.com/bsm/grpclb implements an external load balancer for gRPC. Because it predates the load-balancer interface specification, its interface definition differs from the gRPC spec; see issue #26: https://github.com/bsm/grpclb/issues/26#issuecomment-613873655
grpclb currently supports only consul service discovery.

The only implementation of the standard grpclb protocol at the moment seems to be https://github.com/joa/jawlb.
jawlb discovers services through the Kubernetes API.

The following test makes the gRPC client query jawlb for the server list and then call the service.
First, several greeter server instances are started locally on different ports.
The greeter client is then changed so that, instead of the greeter address, it is configured with the jawlb address.
jawlb is also modified: service discovery is removed and replaced with a fixed local server list that switches periodically.

greeter refers to the grpc-go example: grpc-go\examples\helloworld\greeter
## greeter server changes

Add a flag to specify the server port.
```
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/spf13/pflag"
	"github.com/spf13/viper"
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

// GreeterServer is used to implement helloworld.GreeterServer.
type GreeterServer struct {
}

// SayHello implements helloworld.GreeterServer
func (s *GreeterServer) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	msg := fmt.Sprintf("Hello %s from server-%d", in.Name, viper.GetInt("port"))
	return &pb.HelloReply{Message: msg}, nil
}

func main() {
	pflag.Int("port", 8000, "server bind port")
	pflag.Parse()
	viper.BindPFlags(pflag.CommandLine)

	port := viper.GetInt("port")
	addr := fmt.Sprintf(":%d", port)
	lis, err := net.Listen("tcp", addr)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &GreeterServer{})
	s.Serve(lis)
}
```
## greeter client changes
```
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/sirupsen/logrus"
	"google.golang.org/grpc"
	_ "google.golang.org/grpc/balancer/grpclb"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
	"google.golang.org/grpc/grpclog"
	"google.golang.org/grpc/resolver"
	"google.golang.org/grpc/resolver/manual"
)

const (
	defaultName = "world"
)

func init() {
	grpclog.SetLogger(logrus.New())
}

func main() {
	rb := manual.NewBuilderWithScheme("whatever")
	rb.InitialState(resolver.State{Addresses: []resolver.Address{
		{Addr: "127.0.0.1:8888", Type: resolver.GRPCLB},
	}})

	conn, err := grpc.Dial("whatever:///this-gets-overwritten", grpc.WithInsecure(), grpc.WithBlock(),
		grpc.WithResolvers(rb))
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	c := pb.NewGreeterClient(conn)

	name := defaultName
	if len(os.Args) > 1 {
		name = os.Args[1]
	}

	for {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		r, err := c.SayHello(ctx, &pb.HelloRequest{Name: name})
		cancel()
		if err != nil {
			log.Fatalf("could not greet: %v", err)
			time.Sleep(time.Second)
			continue
		}
		log.Printf("Greeting: %s", r.GetMessage())
		time.Sleep(time.Second)
	}
}
```
The changes are:
* import _ "google.golang.org/grpc/balancer/grpclb"
* grpc.Dial("whatever:///this-gets-overwritten", grpc.WithResolvers(rb))
  + A manual resolver is used to supply the jawlb address
  + The scheme "whatever" can be anything; it is only used as the resolver name
  + The target this-gets-overwritten can be anything, because jawlb ignores it
  + 127.0.0.1:8888 is the jawlb address
* Changed to send one request per second
Normally grpclb is advertised with SRV records in DNS. To avoid setting up DNS for this test, a manual resolver is used instead,
which costs a few extra lines of code.
The advantage of the DNS setup is that the name can resolve directly to backend IPs, or grpclb can be added, and the code looks the same as connecting to a backend directly:
```
conn, err := grpc.Dial("dns:///myservice.domain.com", grpc.WithInsecure())
```
## jawlb changes
### main.go
Remove all configuration and listen on a fixed local port 8888.

* Delete `envconfig.MustProcess("JAWLB", &cfg)`
* Change listen() to
```
func listen() (conn net.Listener, err error) {
	conn, err = net.Listen("tcp", ":8888")
	return
}
```
### watch.go
```
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func watchService(ctx context.Context) (_ <-chan ServerList, err error) {
	ch := make(chan ServerList)
	go func() {
		ticker := time.NewTicker(10 * time.Second)
		i := 0
		for {
			select {
			case <-ctx.Done():
				ticker.Stop()
				close(ch)
				return
			case <-ticker.C:
				i += 1
				fmt.Printf("i = %d\n", i)
				ports := []int32{8010, 8020}
				var servers []Server
				for _, port := range ports {
					servers = append(servers, Server{IP: net.ParseIP("127.0.0.1"), Port: port + int32(i%2)})
				}
				ch <- servers
			} // select
		} // for
	}()
	return ch, nil
}
```
All service-discovery code is removed, replaced by switching the ports every 10 seconds: 8010,8020 <-> 8011,8021
## Run
### jawlb
```
λ jawlb.exe
2020/04/16 15:35:17 waiting for TERM
i = 1
2020/04/16 15:35:27 endpoints:
2020/04/16 15:35:27 127.0.0.1:8011
2020/04/16 15:35:27 127.0.0.1:8021
i = 2
2020/04/16 15:35:37 endpoints:
2020/04/16 15:35:37 127.0.0.1:8010
2020/04/16 15:35:37 127.0.0.1:8020
```
### server
Run 4 instances:
```
server --port 8010
server --port 8020
server --port 8011
server --port 8021
```
### client
```
λ client
INFO[0002] lbBalancer: handle SubConn state change: 0xc00008a590, CONNECTING
INFO[0002] Channel Connectivity change to CONNECTING
INFO[0002] lbBalancer: handle SubConn state change: 0xc00008a5f0, CONNECTING
INFO[0002] Subchannel picks a new address "127.0.0.1:8021" to connect
INFO[0002] Subchannel Connectivity change to READY
INFO[0002] lbBalancer: handle SubConn state change: 0xc00008a590, READY
INFO[0002] Channel Connectivity change to READY
INFO[0002] Subchannel Connectivity change to READY
INFO[0002] lbBalancer: handle SubConn state change: 0xc00008a5f0, READY
2020/04/16 15:37:47 Greeting: Hello world from server-8021
2020/04/16 15:37:48 Greeting: Hello world from server-8011
2020/04/16 15:37:49 Greeting: Hello world from server-8021
2020/04/16 15:37:50 Greeting: Hello world from server-8011
2020/04/16 15:37:51 Greeting: Hello world from server-8021
2020/04/16 15:37:52 Greeting: Hello world from server-8011
2020/04/16 15:37:53 Greeting: Hello world from server-8021
2020/04/16 15:37:54 Greeting: Hello world from server-8011
2020/04/16 15:37:55 Greeting: Hello world from server-8021
2020/04/16 15:37:56 Greeting: Hello world from server-8011
INFO[0012] lbBalancer: processing server list: servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8020 > servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8010 >
INFO[0012] lbBalancer: server list entry[0]: ipStr:|127.0.0.1|, port:|8020|, load balancer token:||
INFO[0012] lbBalancer: server list entry[1]: ipStr:|127.0.0.1|, port:|8010|, load balancer token:||
2020/04/16 15:37:57 Greeting: Hello world from server-8020
2020/04/16 15:37:58 Greeting: Hello world from server-8010
2020/04/16 15:37:59 Greeting: Hello world from server-8020
2020/04/16 15:38:00 Greeting: Hello world from server-8010
2020/04/16 15:38:01 Greeting: Hello world from server-8020
2020/04/16 15:38:02 Greeting: Hello world from server-8010
2020/04/16 15:38:03 Greeting: Hello world from server-8020
2020/04/16 15:38:04 Greeting: Hello world from server-8010
2020/04/16 15:38:05 Greeting: Hello world from server-8020
2020/04/16 15:38:06 Greeting: Hello world from server-8010
INFO[0022] lbBalancer: processing server list: servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8021 > servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8011 >
INFO[0022] lbBalancer: server list entry[0]: ipStr:|127.0.0.1|, port:|8021|, load balancer token:||
INFO[0022] lbBalancer: server list entry[1]: ipStr:|127.0.0.1|, port:|8011|, load balancer token:||
2020/04/16 15:38:07 Greeting: Hello world from server-8011
2020/04/16 15:38:08 Greeting: Hello world from server-8021
2020/04/16 15:38:09 Greeting: Hello world from server-8011
```
## Conclusion
The client application uses a manual resolver to resolve "whatever:///this-gets-overwritten" and obtains `{Addr: "127.0.0.1:8888", Type: resolver.GRPCLB}`,
so it knows this is a grpclb server and queries jawlb, following the load_balancer.proto definition, for the list of backend addresses.

jawlb refreshes the server list every 10 s, each time returning several addresses. The client rotates its requests across them.

## Other tests
* Without jawlb running, client requests fail until jawlb is started
* If jawlb is shut down midway, requests still succeed but stick to the last server list
  + The client also keeps trying to reconnect to jawlb, but after a successful reconnect it does not switch servers, which looks like a bug
* Without the grpc.WithBlock() option in Dial(), it fails with: all SubConns are in TransientFailure
```
λ client
INFO[0000] parsed scheme: "whatever"
INFO[0000] ccResolverWrapper: sending update to cc: {[{127.0.0.1:8888 <nil> 1 <nil>}] <nil> <nil>}
INFO[0000] ClientConn switching balancer to "grpclb"
INFO[0000] Channel switches to new LB policy "grpclb"
INFO[0000] lbBalancer: UpdateClientConnState: {ResolverState:{Addresses:[{Addr:127.0.0.1:8888 ServerName: Attributes:<nil> Type:1 Metadata:<nil>}] ServiceConfig:<nil> Attributes:<nil>} BalancerConfig:<nil>}
INFO[0000] parsed scheme: "grpclb-internal"
INFO[0000] ccResolverWrapper: sending update to cc: {[{127.0.0.1:8888 <nil> 0 <nil>}] <nil> <nil>}
INFO[0000] ClientConn switching balancer to "pick_first"
INFO[0000] Channel switches to new LB policy "pick_first"
INFO[0000] Subchannel Connectivity change to CONNECTING
INFO[0000] blockingPicker: the picked transport is not ready, loop back to repick
INFO[0000] pickfirstBalancer: HandleSubConnStateChange: 0xc00003fb10, {CONNECTING <nil>}
INFO[0000] Channel Connectivity change to CONNECTING
INFO[0000] Subchannel picks a new address "127.0.0.1:8888" to connect
INFO[0000] CPU time info is unavailable on non-linux or appengine environment.
INFO[0000] Subchannel Connectivity change to READY
INFO[0000] pickfirstBalancer: HandleSubConnStateChange: 0xc00003fb10, {READY <nil>}
INFO[0000] Channel Connectivity change to READY
INFO[0000] lbBalancer: processing server list: servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8010 > servers:<ip_address:"\000\000\000\000\000\000\000\000\000\000\377\377\177\000\000\001" port:8020 >
INFO[0000] lbBalancer: server list entry[0]: ipStr:|127.0.0.1|, port:|8010|, load balancer token:||
INFO[0000] lbBalancer: server list entry[1]: ipStr:|127.0.0.1|, port:|8020|, load balancer token:||
INFO[0000] Subchannel Connectivity change to CONNECTING
INFO[0000] Subchannel Connectivity change to CONNECTING
INFO[0000] Channel Connectivity change to TRANSIENT_FAILURE
INFO[0000] lbBalancer: handle SubConn state change: 0xc00008a220, CONNECTING
INFO[0000] Channel Connectivity change to CONNECTING
INFO[0000] lbBalancer: handle SubConn state change: 0xc00008a280, CONNECTING
2020/04/16 16:40:06 could not greet: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
```
(Jin Qing's Column 2020.3)
Code that uses mgo's Bulk interface works fine against MongoDB 4.2, but against MongoDB 2.6 it errors with:
```
error parsing element 0 of field documents :: caused by :: wrong type for '0' field, expected object, found 0: null
```
The code was:
```
func (u *Util) AddUsers(userIDs []int) error {
	docs := make([]interface{}, len(userIDs))
	for _, userID := range userIDs {
		docs = append(docs, dbdoc.UsersDoc{
			UserID: userID,
		})
	}

	bulk := u.c().Bulk()
	bulk.Insert(docs...)
	_, err := bulk.Run()
	return err
}
```
Changing it to the following call works:
```
func (u *Util) AddUsers(userIDs []int) error {
	bulk := u.c().Bulk()
	for _, userID := range userIDs {
		bulk.Insert(dbdoc.UsersDoc{
			UserID: userID,
		})
	}
	_, err := bulk.Run()
	return err
}
```
But Insert(), whether given one argument or many, has the same underlying implementation.

Passing a single element ([]int{123}) still failed.
Changing Insert(docs...) to Insert(docs[0]) showed the element was nil, which finally led to the buggy line:
```
- docs := make([]interface{}, len(userIDs))
+ docs := make([]interface{}, 0, len(userIDs))
```
The newer MongoDB presumably gives the correct result because it ignores the nil documents.

It feels like the design of Go's make(), callable with either 2 or 3 arguments, is a mistake.
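A standalone sketch of the pitfall (not from the post): make with a length pre-fills the slice with zero values, so append grows it past the intended size and leaves nil entries at the front.
```go
package main

import "fmt"

func main() {
	ids := []int{1, 2, 3}

	// Buggy: length 3, so the slice already holds three nil values,
	// and append adds the documents after them (total length 6).
	buggy := make([]interface{}, len(ids))
	for _, id := range ids {
		buggy = append(buggy, map[string]int{"user_id": id})
	}
	fmt.Println(len(buggy), buggy[0]) // 6 <nil>

	// Fixed: length 0, capacity 3, so append fills exactly three slots.
	fixed := make([]interface{}, 0, len(ids))
	for _, id := range ids {
		fixed = append(fixed, map[string]int{"user_id": id})
	}
	fmt.Println(len(fixed), fixed[0]) // 3 map[user_id:1]
}
```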
(Jin Qing's Column 2020.3)
Packages that export a channel tend to be hard to use. Normally an API should never be designed as a channel.
The following is excerpted from: https://studygolang.com/articles/12135?fr=sidebar

Don't export concurrency primitives

Go provides concurrency primitives that are very easy to use, which also leads to their overuse. Our main concerns are channels and the sync package. Sometimes we export a channel for users to consume. Another common mistake is using sync.Mutex as a struct field without making it private. This is not always terrible, but it does require much more careful thought when writing tests.

When we export a channel, we create unnecessary testing trouble for the users of the package. Every channel you export raises the difficulty users face in testing. To write correct tests, users must consider:

* When is the data finished being sent?
* Can an error occur while receiving the data?
* How should used channels be cleaned up inside the package, if that is needed?
* How can the API be wrapped in an interface so we don't have to call it directly?
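To illustrate the point, here is a hedged sketch (not from the quoted article) of keeping the channel private and exposing a handler interface instead, which is much easier to fake in tests:
```go
// Package watcher shows one way to avoid exporting a channel:
// events are delivered through a handler interface instead.
package watcher

type Event struct {
	Name string
}

// Handler is what users implement; in tests it can be a simple fake.
type Handler interface {
	OnEvent(Event)
}

type Watcher struct {
	events chan Event // stays private
	done   chan struct{}
}

func New() *Watcher {
	return &Watcher{events: make(chan Event, 16), done: make(chan struct{})}
}

// Publish is the package-internal way events enter the channel.
func (w *Watcher) Publish(e Event) {
	select {
	case w.events <- e:
	case <-w.done:
	}
}

// Run pumps internal events to the handler until Close is called.
func (w *Watcher) Run(h Handler) {
	for {
		select {
		case e := <-w.events:
			h.OnEvent(e)
		case <-w.done:
			return
		}
	}
}

func (w *Watcher) Close() { close(w.done) }
```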
(Jin Qing's Column 2020.2)
Suppose the application is named MyMod and the main function is in main.go; then create a main_test.go like this:
```
package main
/* Test the whole server.

go test -c -covermode=count -coverpkg ./...

produces MyMod.test.exe. Copy it to the bin directory and run:

MyMod.test.exe --systemTest --test.coverprofile MyMod.cov

which generates the coverage result MyMod.cov. Then, in the directory managed by go.mod, run:

go tool cover -html=MyMod.cov -o MyMod.html

and open MyMod.html to view the result.
*/

import (
	"flag"
	"fmt"
	"testing"
)

var systemTest *bool

func init() {
	systemTest = flag.Bool("systemTest", false, "Set to true when running system tests")
}

// Test started when the test binary is started. Only calls main.
func TestSystem(t *testing.T) {
	if *systemTest {
		fmt.Println("Test system...")
		main()
	}
}
```
MyMod.cov accumulates across runs and can be added to Git.
Before committing MyMod.cov, sort it so diffs are easy to review.
The sort script sort_cov.bat is:
```
head -1 MyMod.cov > tmp.cov
cat MyMod.cov | sed '1d' | sort >> tmp.cov
cp tmp.cov MyMod.cov
del tmp.cov
pause
```
(Jin Qing's Column 2020.2)
Consistent-hashing packages on GitHub, reviewed in order of star count.
## stathat/consistent
Consistent hash package for Go.
688 stars
stathat appears to be a cloud statistics service. consistent is considered production-ready.
All documentation is on godoc. The examples and the interface are concise and easy to understand.
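A hedged usage sketch of stathat/consistent, based on its godoc examples (the node names and key are made up):
```go
package main

import (
	"fmt"
	"log"

	"github.com/stathat/consistent"
)

func main() {
	c := consistent.New()
	// Add the members of the hash ring.
	c.Add("cacheA")
	c.Add("cacheB")
	c.Add("cacheC")

	// Map a key to a member; the mapping stays stable as members come and go.
	server, err := c.Get("user:1234")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("user:1234 ->", server)

	// Removing a member only remaps the keys that were on it.
	c.Remove("cacheB")
}
```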
## lafikl/consistent
A Go library that implements Consistent Hashing and Consistent Hashing With Bounded Loads.
575 stars

A consistent-hashing algorithm with bounded loads.

The example uses c.GetLeast() to pick the least-loaded host, which does not look like a typical consistent-hashing application.
From the code, it actually returns the nearest host that is not yet overloaded, not the least-loaded one.

The `MaxLoad()` doc says the maximum load is automatically set to (total_load/number_of_hosts)*1.25.
`const replicationFactor = 10`
There is no interface to fetch several hosts at once.
## serialx/hashring
Consistent hashing "hashring" implementation in golang (using the same algorithm as libketama)
367 stars

Ported from a Python library. Weights can be set.

Example:
```
replicaCount := 3
ring := hashring.New(serversInRing)
server, _ := ring.GetNodes("my_key", replicaCount)
```
This is effectively a GetN. Internally it uses a default replicationFactor.
## buraksezer/consistent
Consistent hashing with bounded loads in Golang
263 stars

No default hash function is provided.

There is an extra `PartitionCount`.
c.loads is only used to query loads; it is not used to enforce the bound.

The conclusion is that this is not bounded-load consistent hashing, and its consistent-hashing implementation is also unconventional.
## Others
Jump hash cannot remove arbitrary nodes, so it is not considered.
### dgryski/go-jump
go-jump: Jump consistent hashing
281 stars
### lithammer/go-jump-consistent-hash
⚡️ Fast, minimal memory, consistent hash algorithm
138 stars
(Jin Qing's Column 2020.2)
A Go interface looks like this:
```
type Writer interface {
	Write(p []byte) (n int, err error)
}
```
An API parameter is usually an interface rather than a function value; for example io.Copy() takes a Writer and a Reader:
```
func Copy(dst Writer, src Reader) (written int64, err error)
```
rather than two function values like this:
```
func CopyWithFunc(writeFunc func([]byte) (int, error), readRunc func([]byte) (int, error)) (written int64, err error)
```
Using interfaces uniformly, rather than mixing interfaces and function values, keeps the API from getting complicated.
io.Copy() has two parameters; if it accepted both interfaces and function values, it would turn into 4 Copy() overloads, and since Go has no overloading, that would mean 4 different function names.

In practice a function has to be adapted to the interface before io.Copy() can be called.
Say we have a function:
```
func MyWriteFunction(p []byte) (n int, err error) {
	fmt.Printf("%v", p)
	return len(p), nil
}
```
To call io.Copy() we need a Writer, so the function is converted into a Writer. Here a `WriteFunc` type is used to implement Writer.
```
type WriteFunc func(p []byte) (n int, err error)

func (wf WriteFunc) Write(p []byte) (n int, err error) {
	return wf(p)
}
```
WriteFunc is itself a function type with the same signature as MyWriteFunction, and it also implements the Writer interface.
So MyWriteFunction can be converted directly to WriteFunc and thereby become a Writer.
Now io.Copy() can be called:
```
io.Copy(WriteFunc(MyWriteFunction), strings.NewReader("Hello world"))
```
Reference: https://stackoverflow.com/questions/20728965/golang-function-pointer-as-a-part-of-a-struct
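Putting the pieces together into one runnable sketch of the same technique (http.HandlerFunc in the standard library uses the same trick):
```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// WriteFunc adapts a plain function to io.Writer.
type WriteFunc func(p []byte) (n int, err error)

func (wf WriteFunc) Write(p []byte) (n int, err error) {
	return wf(p)
}

// MyWriteFunction is an ordinary function with a Writer-shaped signature.
func MyWriteFunction(p []byte) (n int, err error) {
	fmt.Printf("%s", p)
	return len(p), nil
}

func main() {
	// The conversion WriteFunc(MyWriteFunction) turns the function into a Writer.
	if _, err := io.Copy(WriteFunc(MyWriteFunction), strings.NewReader("Hello world")); err != nil {
		fmt.Println("copy failed:", err)
	}
}
```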
(Jin Qing's Column 2020.1)
A DATA RACE was detected in a Go program; it is a conflict between closing a chan and sending on it:
==================
WARNING: DATA RACE
Write at 0x00c000098010 by goroutine 68:
runtime.closechan()
/usr/lib/golang/src/runtime/chan.go:327 +0x0
valky/common/tcp.(*Session).Close()
/var/tmp/src/f4f4f712-7894-4d98-83dd...
valky/common/tcp.(*Session).recvloop()
/var/tmp/src/f4f4f712-7894-4d98-83dd...
Previous read at 0x00c000098010 by goroutine 100:
runtime.chansend()
/usr/lib/golang/src/runtime/chan.go:140 +0x0
valky/common/tcp.(*Session).Send()
/var/tmp/src/f4f4f712-7894-4d98-83dd...
main.(*Role).sendMsg()
/var/tmp/src/f4f4f712-7894-4d98-83dd...
==================
Found 1 data race(s)
Looking up the correct way to close a chan, I found a very detailed article:
[How to Gracefully Close Channels](https://go101.org/article/channel-closing.html)
The article points out that closing a chan more than once, or sending on a closed chan, both panic.
The DATA RACE above was lucky not to have panicked.

The principle for closing a chan: don't close it in a receiving goroutine, and if there are multiple senders, don't close the chan at all.

The DATA RACE above comes from closing the chan in the receiving goroutine.
The article lists several schemes for closing a chan in detail.
A crude option is simply to add a recover; the other approaches all make sure sending has finished before closing.
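A hedged sketch of one of those schemes, adapted to the Session case above (the type and field names are illustrative): the receiver never closes the data channel; it closes a separate done channel instead, and every sender selects on both, so nothing ever sends on a closed channel.
```go
package session

import "sync"

// Session sketches a send queue that is never closed by the receiver.
type Session struct {
	sendC     chan []byte
	done      chan struct{}
	closeOnce sync.Once
}

func New() *Session {
	return &Session{sendC: make(chan []byte, 64), done: make(chan struct{})}
}

// Close only signals shutdown; the data channel itself is left open,
// so concurrent Send calls can never hit a closed channel.
func (s *Session) Close() {
	s.closeOnce.Do(func() { close(s.done) })
}

// Send either enqueues the message or observes shutdown.
func (s *Session) Send(msg []byte) bool {
	select {
	case <-s.done:
		return false
	case s.sendC <- msg:
		return true
	}
}
```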
# The open-match matching flow
(Jin Qing's Column 2019.1)
https://github.com/GoogleCloudPlatform/open-match
open-match is a general-purpose game matchmaking framework.
The game supplies its own matchmaking algorithm (as a Docker image).

It is split into several processes that share one Redis:
* Frontend: accepts players into Redis and, on success, notifies the player of the room-server address
* Backend: sets the matching rules for one game and sets the room-server address
* MMFOrc: launches the matchmaking function (MMF)
* MMF: the custom matchmaking algorithm; it reads players from Redis and writes successful matches back to Redis. It matches a single game and then exits.

On the game-server side, the processes that connect to open-match's frontend and backend are called frontendclient and Director.
The input has two parts: player information and match information.
The Director feeds match information to the backend and then receives match rosters one after another.
The Director needs to open a room for each match and tell the backend the room address.
The backend writes the room address into Redis; the frontend reads it and notifies the frontendclient so the players can enter the room.
## test/cmd/frontendclient
Simulates a lobby or party server: it connects to the frontend API and requests matching for players/teams. On success it receives the address (Assignment) of the room server (DGS).

A Player is actually a team; its ID field holds several IDs separated by spaces.
Although both take a Player, CreatePlayer() takes the whole team while GetUpdates() takes a single player.

main() creates several players; each player calls GetUpdates() to receive results, which are handled in go waitForResults().
waitForResults() reads the match results from the updates and pushes them into resultsChan, but resultsChan seems to be used only for printing?
All players are merged into the g instance, and then CreatePlayer() is called to request a match.

cleanup() calls DeletePlayer() to remove the match request; it must delete not only the whole team but also each individual player.

It looks like the final result is read from the wrong place; the Assignment should be taken from resultsChan and used as the address for udpClient().
After reading this example, frontend.proto is easy to understand.
## examples/backendclient
MatchObject.Properties is read from testprofile.json; wouldn't renaming it to Profile be better?
pbProfile is a MatchObject; is Profile equivalent to MatchObject?
A Profile is defined as all the parameters the MMF needs.
`pbProfile.Properties = jsonProfile` is repeated twice.

ListMatches() lists all matches for this Profile.
After receiving a match, CreateAssignments() must be called to send the room-server address, called an Assignment, to all game clients.
## cmd/frontendapi
CreatePlayer() writes the Player object into Redis keyed by Player.Id, as a hash (HSET).
Each attribute of the Player is also added to a ZSET.
Here the Player is a group of players.

GetUpdates() reads Redis every 2 s and sends an update when the Player data changes. Here the Player is a single player.

If the team in CreatePlayer() contains only one player,
then the written Player and the player read in GetUpdates() share the same Redis key.
## cmd/backendapi
CreateMatch()'s profile parameter is a MatchObject, the constraints for one match.
The profile is first written to Redis with profile.Id as the key.
`requestKey := xid() + "." + profile.Id`,
and the requestKey is added to the Redis set "profileq".
Then Redis is polled every second to see whether a key named requestKey has appeared, and its value is returned.

ListMatch() just calls CreateMatch() once every few seconds.
DeleteMatch() merely deletes the Id key.

CreateAssignments() sets the Assignment, i.e. the room address, for several teams.
It walks all Player objects in the Rosters and sets the Assignment in Redis.
(Changing an Assignment triggers a frontend update.)
All Player.Ids are moved from "proposed" to "deindexed"; both are ZSETs whose score is the join time.
A Roster is presumably a side in the match, such as red or blue, and each side may contain several teams.

DeleteAssignments() simply walks all Player objects and removes the Assignment field.
## cmd/mmforc
The matching flow is driven by mmforc (the matchmaking function orchestrator).

Every second mmforc pops up to 100 members from the Redis set profileq, using the command `SPOP profileq 100`.
For each profile it starts a k8s job:
```
// Kick off the job asynchrnously
go mmfunc(ctx, profile, cfg, defaultMmfImages, clientset, &pool)
```
Every 10 s, once all matchmaking jobs have completed, `checkProposals` runs, i.e. the evaluator job is started.

Each element of profileq is a string of the form matchObjectID.profileID.
Using the profileID as the key, the profile content can be read from Redis; the profile is a MatchObject.

The profile content is a JSON string; its "jsonkeys.mmfImages" field names the MMF (matchmaking function) image.

If reading the profile fails, or mmfImages is empty, the default image is used. mmfImages will support multiple images in the future.

Parameters are passed to the MMF through MMF_* environment variables.
## mmf
Example: examples\functions\golang\manual-simple
It parses the profileID from the "MMF_PROFILE_ID" environment variable and reads the profile from Redis with HGETALL (it is stored as a hash).

The pools field, i.e. the match conditions, is taken from the profile.
pools contains several pools, each pool has several filters, and each filter fetches the matching Players from Redis.
The profile uses the following fields:

* "properties.playerPool"
  a JSON string of filter conditions, e.g. "mmr: 100-999"
* "properties.roster"
  a JSON string of team sizes, e.g. "red: 4"

See the example: `examples\backendclient\profiles\testprofile.json`
### Simple matching flow
The matching flow of the simple MMF is:
1. Query the profile from Redis to get the filter conditions and team sizes
1. Query Redis for each filter condition; the intersection of all results is the candidate set
1. Remove the ignoreList, i.e. members matched successfully within the last 800 s (the proposal and deindexed ZSETs)
1. If there are too few candidates, record insufficient_players and return
1. Assign members to each team
1. Record the result in Redis
### Result
The roster, i.e. the member list of each side, is added to the profile and stored under the proposalKey.
The member list without team assignment is also saved.
Then the proposalKey is added to "proposalq".
### Details
poolRosters is keyed by (pool name, filter attribute), with a list of Player IDs as the value.
It holds the Player IDs that matched the filters, as queried from Redis.
overlaps is keyed by pool name and holds the Player IDs satisfying all filters of that pool, minus the ignore list.
rosters is the "properties.rosters" field of the profile. Not sure what it is for?
It iterates over rosters, finds a PlayerID from the matching pool for each player of each side, and stores them in mo.Rosters.
profileRosters appears to be unused.
Go 1.11 supports modules.
The code no longer needs to live under GOPATH/src.

First initialize the module, which generates `go.mod`:
E:\temp
λ mkdir -p testmod\hello
E:\temp
λ cd testmod\hello\
E:\temp\testmod\hello
λ go mod init github.com/jinq0123/hello
go: creating new go.mod: module github.com/jinq0123/hello
Create `hello.go`:
package main

import (
	"fmt"
	"rsc.io/quote"
)

func main() {
	fmt.Println(quote.Hello())
}
go build reports that `golang.org/x/text` cannot be reached:
E:\temp\testmod\hello
λ go build
go: golang.org/x/text@v0.0.0-20170915032832-14c0d48ead0c: unrecog
nized import path "golang.org/x/text" (https fetch: Get https://g
olang.org/x/text?go-get=1: dial tcp 216.239.37.1:443: connectex:
A socket operation was attempted to an unreachable network.)
go: error loading module requirements
Add to `go.mod`:
replace golang.org/x/text => github.com/golang/text v0.3.0
Then the build succeeds:
E:\temp\testmod\hello
λ go build
go: finding github.com/golang/text v0.3.0
go: downloading rsc.io/sampler v1.3.0
go: downloading github.com/golang/text v0.3.0
E:\temp\testmod\hello
λ hello.exe
Hello, world.
If the version number is omitted, it errors:
go.mod:9: replacement module without version must be directory path (rooted or starting with ./ or ../)
In Go 1.11.1 replace still had problems and would still try to reach the original address; the current version, 1.11.4, works.

Reference:
https://github.com/golang/go/wiki/Modules
(Jin Qing's Column 2018.10)
Running a go test that inserts a record into MongoDB, I found it sometimes worked and sometimes didn't.

I wasted quite some time wrongly suspecting that mgo was being misused.
It finally turned out that go test caches results: it printed the output of a previous run without actually executing the code.

The runs that did work were the ones right after a code change, when the test really ran.

The discovery was accidental. After turning on mgo's debug log and comparing two runs, the output was identical except for one line:
ok mail-server/server 0.519s
ok mail-server/server (cached)
which clearly shows the second run was cached. I had overlooked the (cached) marker in dozens of earlier runs.

To disable the cache, add the -count=1 flag:
go test -count=1
(Jin Qing's Column 2018.9)
The service is exposed externally with a NodePort. To avoid port conflicts, the nodePort is not specified;
instead k8s is left to pick one automatically.
$ cat get_node_port.yaml
kind: Service
apiVersion: v1
metadata:
  name: jq-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
$ kubectl apply -f get_node_port.yaml
service "jq-service" configured
$ kubectl describe svc/jq-service
Name: jq-service
Namespace: default
Labels: <none>
Annotations: kubectl...
Selector: app=MyApp
Type: NodePort
IP: 10.104.228.187
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32115/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You can see that k8s assigned NodePort 32115.

Now this dynamically assigned NodePort has to be read, so that clients can be told which port to connect to.
package main

import (
	"context"
	"fmt"
	"log"
	"io/ioutil"

	"github.com/ghodss/yaml"
	"github.com/ericchiang/k8s"
	corev1 "github.com/ericchiang/k8s/apis/core/v1"
)

func main() {
	data, err := ioutil.ReadFile("config")
	if err != nil {
		panic(err)
	}

	// Unmarshal YAML into a Kubernetes config object.
	var config k8s.Config
	if err := yaml.Unmarshal(data, &config); err != nil {
		panic(err)
	}

	client, err := k8s.NewClient(&config)
	// client, err := k8s.NewInClusterClient()
	if err != nil {
		log.Fatal(err)
	}

	var svc corev1.Service
	if err := client.Get(context.Background(), "default", "jq-service", &svc); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d\n", *svc.Spec.Ports[0].NodePort)
}
The kubeconfig has to be copied into the working directory before running: `cp ~/.kube/config .`
(Jin Qing's Column 2018.8)
Connecting to the server like this in grpc-go, requests rotate across multiple IPs.
conn, err := grpc.Dial(
	"dns:///rng-headless:8081",
	grpc.WithBalancerName(roundrobin.Name),
	grpc.WithInsecure())
The standard target name has the form `"dns://authority/endpoint_name"`;
here the authority part is empty, see https://github.com/grpc/grpc/blob/master/doc/naming.md for details.
With 3 server instances running, all requests rotate across the 3 instances:
[jinqing@host-10-2-3-4 RoundRobin]$ kubectl run -it --rm jinqing-roundrobin --image=jinq0123/roundrobin:4
If you don't see a command prompt, try pressing enter.
2018/08/28 10:18:01 request 7754383576636566559
2018/08/28 10:18:02 request 2543876599219675746
2018/08/28 10:18:03 request 927204261937181213
2018/08/28 10:18:04 request 7754383576636566559
2018/08/28 10:18:05 request 2543876599219675746
2018/08/28 10:18:06 request 927204261937181213
...
The server returns a random number that differs per instance. The code was adapted from
https://github.com/kcollasarundell/balancing-on-k8s.
...
const (
	port = ":8081"
)

type server struct{}

var r int64

func init() {
	rand.Seed(time.Now().UnixNano())
	r = rand.Int63()
}

func (s *server) Rng(context.Context, *rng.Source) (*rng.RN, error) {
	return &rng.RN{RN: r}, nil
}

func main() {
	lis, err := net.Listen("tcp", port)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	rng.RegisterRngServer(s, &server{})
	// Register reflection service on gRPC server.
	reflection.Register(s)
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
Compile it, build a Docker image, and run it with `balancing-on-k8s\backend\kube.yaml`:
kubectl apply -f kube.yaml
`backend\kube.yaml` creates a ClusterIP service and a Headless service, and deploys 3 server instances.
[jinqing@host-10-2-3-4 RoundRobin]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 93d
rng-cluster ClusterIP 10.111.30.205 <none> 8081/TCP 4h
rng-headless ClusterIP None <none> 8081/TCP,8080/TCP 4h
The client is a simple gRPC program that sends a request periodically and prints the returned random number.
The address in `balancing-on-k8s\clientSideBalancer\RoundRobin\main.go` needs the port appended,
otherwise grpc tries to connect to port 443 and fails.

After scaling up it takes about 3 minutes before load shifts to the new instances; scaling down takes effect immediately.
kubectl scale --replicas=5 deployment/rng
With a ClusterIP service, the service name resolves to a single ClusterIP;
with a Headless service, the service name resolves to the IP of each Pod:
/ # nslookup rng-headless
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: rng-headless.default.svc.cluster.local
Address: 10.244.3.27
Name: rng-headless.default.svc.cluster.local
Address: 10.244.0.108
Name: rng-headless.default.svc.cluster.local
Address: 10.244.2.66
/ # nslookup rng-cluster
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: rng-cluster.default.svc.cluster.local
Address: 10.111.30.205
/ #
If "dns:///" is removed and the target is just the domain name plus port,
conn, err := grpc.Dial(
	"rng-headless:8081",
	grpc.WithBalancerName(roundrobin.Name),
	...
then only one instance receives the requests, and it switches to another instance only after that pod is deleted.
When scaling down, instances without client connections are removed first.
Start 2 clients connected to different server instances, then scale down to 1 instance, and you can watch the requests switch over.

If the numbers of clients and servers are large, this DNS-based load balancing is no longer suitable, because every client connects to every server instance.

Reference:
Exploring Kubernetes Service Discovery and loadbalancing ( https://kca.id.au/post/k8s_service/ )
(Jin Qing's Column 2018.7)
An in-cluster client has to be packaged into a Docker image, pushed, run with kubectl run, and granted user roles, which is too much trouble; testing with an out-of-cluster client is more convenient.

The client library used is ericchiang/k8s, which is much simpler than the official client-go.

An in-cluster client is created with `k8s.NewInClusterClient()`;
an out-of-cluster client uses `NewClient(config *Config)` and needs a config, which is read from ~/.kube/config.
See https://github.com/ericchiang/k8s/issues/79
The code is as follows:
package main

import (
	"context"
	"fmt"
	"log"
	"io/ioutil"

	"github.com/ghodss/yaml"
	"github.com/ericchiang/k8s"
	corev1 "github.com/ericchiang/k8s/apis/core/v1"
)

func main() {
	data, err := ioutil.ReadFile("config")
	if err != nil {
		panic(err)
	}

	// Unmarshal YAML into a Kubernetes config object.
	var config k8s.Config
	if err := yaml.Unmarshal(data, &config); err != nil {
		panic(err)
	}

	client, err := k8s.NewClient(&config)
	// client, err := k8s.NewInClusterClient()
	if err != nil {
		log.Fatal(err)
	}

	var nodes corev1.NodeList
	if err := client.List(context.Background(), "", &nodes); err != nil {
		log.Fatal(err)
	}
	for _, node := range nodes.Items {
		fmt.Printf("name=%q schedulable=%t\n", *node.Metadata.Name, !*node.Spec.Unschedulable)
	}
}
The YAML library is ghodss/yaml; go-yaml cannot be used, otherwise it fails with
`yaml: unmarshal errors`
See: https://github.com/ericchiang/k8s/issues/81
Copy .kube/config into the working directory and run it to list all nodes:
[jinqing@host-10-1-2-19 out-cluster]$ cp ~/.kube/config .
[jinqing@host-10-1-2-19 out-cluster]$ ./out-cluster
name="host-10-1-2-20" schedulable=true
name="host-10-1-2-21" schedulable=true
name="host-10-1-2-22" schedulable=true
name="host-10-1-2-19" schedulable=true
(Jin Qing's Column 2018.6)
Excerpted from:
https://www.ardanlabs.com/blog/2017/02/package-oriented-design.html
If a package wants to import another package at the same level:
* Question the current design choices of these packages.
* If reasonable, move the package inside the source tree for the package that wants to import it.
* Use the source tree to show the dependency relationships.
(Jin Qing's Column 2018.6)
Excerpted from:
https://talks.golang.org/2014/organizeio.slide#1
The name of a package
Keep package names short and meaningful.
Don't use underscores, they make package names long.
io/ioutil not io/util
suffixarray not suffix_array
Don't overgeneralize. A util package could be anything.
The name of a package is part of its type and function names.
On its own, type Buffer is ambiguous. But users see:
buf := new(bytes.Buffer)
Choose package names carefully.
Choose good names for users.
(Jin Qing's Column 2018.6)
In a grpc-go server every request runs in its own goroutine.
In a game server a request usually calls methods of a game room, and the room is a separate goroutine.
The room can be implemented as an actor, and grpc requests are executed through its Call() or Post() methods:
Call() waits for the return value, while Post() runs asynchronously with no return value.
type Room struct {
	// actC is the channel through which other goroutines send actions to the
	// Room goroutine; the actions are executed one by one in that goroutine.
	// An action is a function with no parameters and no return value.
	actC chan func()
	...
}

// Run runs the room goroutine.
func (r *Room) Run() {
	ticker := time.NewTicker(20 * time.Millisecond)
	defer ticker.Stop()
	for r.running {
		select {
		case act := <-r.actC:
			act()
		case <-ticker.C:
			r.tick()
		}
	}
}

// Call calls a function f and returns the result.
// f runs in the Room's goroutine.
func (r *Room) Call(f func() interface{}) interface{} {
	// The result is returned through ch.
	ch := make(chan interface{}, 1)
	r.actC <- func() {
		ch <- f()
	}
	// Wait until the result comes back.
	return <-ch
}

// Post delivers an action to be executed in the internal goroutine.
func (r *Room) Post(f func()) {
	r.actC <- f
}
A grpc service method then looks like:
func (m *RoomService) Test(ctx context.Context, req *pb.TestReq) (*pb.TestResp, error) {
	conn := conn_mgr.GetConn(ctx)
	if conn == nil {
		return nil, fmt.Errorf("can not find connection")
	}
	room := conn.GetRoom()
	resp := room.Call(func() interface{} {
		return room.Test(req)
	})
	return resp.(*pb.TestResp), nil
}
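For requests that don't need the room's result, Post() avoids blocking the grpc handler goroutine. A hedged sketch (BroadcastChat is a hypothetical room method; Room is the type defined above):
```go
// broadcastChat queues a chat message into the room's goroutine and returns
// immediately; unlike Call(), nothing waits for a result.
func broadcastChat(room *Room, text string) {
	room.Post(func() {
		// This closure runs inside the room goroutine, serialized with all
		// other actions, so no extra locking is needed.
		room.BroadcastChat(text) // hypothetical method on Room
	})
}
```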