Summary notes for the advanced part of the Gulimall (谷粒商城) project…
Introduction to Elasticsearch: https://www.elastic.co/cn/what-is/elasticsearch
Full-text search is one of the most common requirements, and the open-source Elasticsearch is currently the top choice among full-text search engines. It can store, search, and analyze huge volumes of data quickly; Wikipedia, Stack Overflow, and GitHub all use it.
Elasticsearch is built on the open-source library Lucene. You cannot use Lucene directly, though: you would have to write code against its interfaces yourself. Elasticsearch wraps Lucene and exposes a REST API, so it works out of the box. A REST API is naturally cross-platform.
Official docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
Official Chinese docs: https://www.elastic.co/guide/cn/elasticsearch/guide/current/foreword_id.html
Community Chinese docs: https://es.xiaoleilu.com/index.html http://doc.codingdict.com/elasticsearch/0/
Basic concepts
1. Index
As a verb, it is the equivalent of INSERT in MySQL;
as a noun, it is the equivalent of a database in MySQL.
2. Type
Within an index you can define one or more types, similar to a table in MySQL; documents of the same type are stored together.
3. Document
A piece of data stored under some index and some type; documents are JSON. A document is like a row inside a MySQL table.
Elasticsearch 7: the type concept is removed
In a relational database, two tables are independent: columns with the same name in different tables do not affect each other. ES is different. Elasticsearch is a search engine built on Lucene, and fields with the same name under different types of the same index are ultimately handled identically in Lucene.
Two user_name fields under two different types are in fact treated as the same field within one ES index, so you would have to define identical field mappings in both types. Otherwise, same-named fields in different types conflict during processing and degrade Lucene's efficiency.
Removing types improves the efficiency with which ES processes data.
Elasticsearch 7.x
The type in the URL is optional; for example, indexing a document no longer requires a document type.
Elasticsearch 8.x
Solution: migrate indices from multiple types to a single type, with one dedicated index per document type.
Inverted index
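The idea behind an inverted index can be sketched in a few lines: map every term to the list of document ids containing it, then answer a query by intersecting those posting lists. This is an illustration only (the documents below are made up); real Lucene indexes add scoring, positions, and compressed storage.

```python
# Minimal inverted index: term -> sorted list of doc ids containing it.
def build_inverted_index(docs):
    """docs: {doc_id: text}. Returns {term: sorted list of doc ids}."""
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def search(index, query):
    """Return ids of documents containing every term of the query."""
    result = None
    for term in query.lower().split():
        postings = set(index.get(term, []))
        result = postings if result is None else result & postings
    return sorted(result or [])

docs = {
    1: "red apple",
    2: "red mill road",
    3: "mill lane",
}
index = build_inverted_index(docs)
print(search(index, "mill"))       # -> [2, 3]
print(search(index, "red mill"))   # -> [2]
```

Searching never scans documents; it only looks terms up in the index and intersects id lists, which is why full-text search stays fast at scale.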
1. Install Elasticsearch (in Docker)
(1) Pull elasticsearch (stores and searches the data) and kibana (visualizes and queries it):
```
docker pull elasticsearch:7.6.2
docker pull kibana:7.6.2
```
(2) Configure:
```
mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
chmod -R 777 /mydata/elasticsearch/
```
(3) Start Elasticsearch:
```
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms64m -Xmx512m" \
  -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.6.2
```
--name: the container name
-p: 9200 is the HTTP port for REST API requests; 9300 is the port ES nodes use to communicate with each other in a distributed cluster
-e: run mode; if ES_JAVA_OPTS is not set, Elasticsearch will claim all available memory
-v: volume mounts, linking the config files inside the container to files on the host
Make elasticsearch restart automatically on boot:
```
docker update elasticsearch --restart=always
```
(4) Start Kibana:
```
docker run --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.56.10:9200 -p 5601:5601 -d kibana:7.6.2
```
Make kibana restart automatically on boot:
```
docker update kibana --restart=always
```
(5) Test
Check the Elasticsearch version info: http://192.168.56.10:9200
```
{
  "name" : "1e3900cda632",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "zAxedSGQSgC86bmYA72C9Q",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
Show the Elasticsearch node info: http://#:9200/_cat/nodes
```
127.0.0.1 13 92 7 0.06 0.21 0.20 dilm * 1e3900cda632
```
Open Kibana: http://192.168.56.10:5601/app/kibana#/home
2. Basic retrieval
1) _cat
(1) GET /_cat/nodes: list all nodes
e.g. http://192.168.56.10:9200/_cat/nodes :
```
127.0.0.1 15 91 3 0.13 0.38 0.31 dilm * 1e3900cda632
```
Note: * marks the cluster's master node.
(2) GET /_cat/health: check ES health
e.g. http://192.168.56.10:9200/_cat/health
```
1648604850 01:47:30 elasticsearch green 1 1 3 3 0 0 0 0 - 100.0%
```
Note: green means the cluster is healthy.
(3) GET /_cat/master: show the master node
e.g. http://192.168.56.10:9200/_cat/master
```
Urxz2dOfSgCRyzGzs-7l6Q 127.0.0.1 127.0.0.1 1e3900cda632
```
(4) GET /_cat/indices: list all indices, the equivalent of MySQL's show databases;
e.g. http://192.168.56.10:9200/_cat/indices
```
green open .kibana_task_manager_1   X9B74aaIS9KHLlPUrYLVWA 1 0 2 0 34.2kb 34.2kb
green open .apm-agent-configuration ZXdJradmQcG-fbLFmRydKw 1 0 0 0 283b   283b
green open .kibana_1                9uZjKicuSPqv5qUSMWes3Q 1 0 7 0 34.5kb 34.5kb
```
2) Index a document
Saving a piece of data means choosing the index and type it goes under (like choosing the database and table) and giving it a unique id. PUT customer/external/1 saves document 1 under type external of index customer.
Both PUT and POST can create documents. POST without an id auto-generates one; POST with an id updates that document and bumps the version. PUT can create or update as well, but it must always carry an id (omitting it is an error), so PUT is generally used for updates.
The test below was run in Postman.
A 201 Created response means the record was inserted successfully:
```
{
  "_index" : "customer",
  "_type" : "external",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
```
The meaning of this returned JSON: fields starting with an underscore are metadata describing the document.
"_index": "customer": the index (database) the data lives in;
"_type": "external": the type it belongs to;
"_id": "1": the id of the saved document;
"_version": 1: the version of the saved document;
"result": "created": a new document was created; if you PUT the same document again, the result becomes "updated" and the version number changes.
Now with POST:
Adding data without an id auto-generates one, and the result is "created".
POSTing again without an id creates yet another new document.
Adding data with an explicit id uses that id; the first time, the result is "created".
POSTing to the same id again gives result "updated".
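The create/update behavior described above can be modeled with a toy in-memory store: POST without an id always creates with a fresh id, while writing to an existing id bumps the version and reports "updated". This is only an illustration of the semantics, not how ES is implemented; the ToyIndex class is hypothetical.

```python
import uuid

class ToyIndex:
    """Toy model of ES create/update semantics for PUT and POST."""
    def __init__(self):
        self.docs = {}  # id -> (version, source)

    def post(self, source, doc_id=None):
        if doc_id is None:            # POST without id: always a new document
            doc_id = uuid.uuid4().hex
        return self._save(doc_id, source)

    def put(self, source, doc_id):
        if doc_id is None:            # PUT must carry an id
            raise ValueError("PUT requires an id")
        return self._save(doc_id, source)

    def _save(self, doc_id, source):
        version = self.docs.get(doc_id, (0, None))[0] + 1
        self.docs[doc_id] = (version, source)
        result = "created" if version == 1 else "updated"
        return {"_id": doc_id, "_version": version, "result": result}

idx = ToyIndex()
r1 = idx.put({"name": "John"}, "1")   # result "created", version 1
r2 = idx.put({"name": "Jane"}, "1")   # result "updated", version 2
```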
3) Retrieve a document: GET /customer/external/1
http://192.168.56.10:9200/customer/external/1
```
{
  "_index" : "customer",
  "_type" : "external",
  "_id" : "1",
  "_version" : 2,
  "_seq_no" : 1,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "name" : "John Doe"
  }
}
```
With "if_seq_no=1&if_primary_term=1", the update is applied only when the stored sequence number matches; otherwise it is rejected.
Example: update the document with id=1 to name=1, then try to update it again to name=2. Initially _seq_no=1 and _primary_term=1.
(1) Update name to 1:
http://192.168.56.10:9200/customer/external/1?if_seq_no=1&if_primary_term=1
(2) Update name to 2, still using seq_no=1:
http://#:9200/customer/external/1?if_seq_no=1&if_primary_term=1
This update fails with a conflict.
(3) Query the current data:
http://192.168.56.10:9200/customer/external/1
_seq_no is now 7 (several updates happened in between, so we continue from seq_no=7).
(4) Update again, this time successfully:
http://192.168.56.10:9200/customer/external/1?if_seq_no=7&if_primary_term=1
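The if_seq_no / if_primary_term mechanism is classic optimistic concurrency control: a write succeeds only if the caller's expected sequence number still matches the stored one, and every successful write advances it. A sketch of that check (the in-memory store is hypothetical; field names mirror the ES metadata):

```python
# Optimistic concurrency: apply the update only when the expected
# (_seq_no, _primary_term) pair matches what is currently stored.
def conditional_update(store, doc_id, new_source, if_seq_no, if_primary_term):
    doc = store[doc_id]
    if (doc["_seq_no"], doc["_primary_term"]) != (if_seq_no, if_primary_term):
        # Stale expectation: someone else wrote in between, reject (HTTP 409).
        return {"status": 409, "error": "version_conflict_engine_exception"}
    doc["_source"] = new_source
    doc["_seq_no"] += 1            # every successful write advances _seq_no
    return {"status": 200, "_seq_no": doc["_seq_no"]}

store = {"1": {"_source": {"name": "John"}, "_seq_no": 1, "_primary_term": 1}}
ok = conditional_update(store, "1", {"name": "1"}, if_seq_no=1, if_primary_term=1)
# A second caller still holding seq_no=1 is now stale and gets a conflict:
stale = conditional_update(store, "1", {"name": "2"}, if_seq_no=1, if_primary_term=1)
```

This is exactly why step (2) above failed: the first update had already advanced _seq_no, so the second request's condition no longer held.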
4) Update a document
(1) POST update with _update:
```
POST customer/external/1/_update
{
  "doc": {
    "name": "John Doew"
  }
}
```
http://192.168.56.10:9200/customer/external/1/_update
Executing the same update again does nothing, and the sequence number does not change.
A POST with _update compares the new data with the existing document; if they are identical, it is a no-op (both version and _seq_no stay the same).
(2) POST update without _update:
```
POST customer/external/1
{
  "name": "John Doew2"
}
```
Repeating this update always "succeeds": no comparison with the existing data is made, so the version keeps advancing.
(3) PUT update (PUT has no _update):
```
PUT customer/external/1
{
  "name": "John Doew3"
}
```
Likewise, repeating the PUT always succeeds; no comparison with the existing data is made.
5) Delete a document or an index

```
DELETE customer/external/1
DELETE customer
```
Note: Elasticsearch does not provide an operation to delete a type; only documents and indices can be deleted.
Example: delete the document with id=1, then query it again.
Example: delete the whole customer index.
All indices before the deletion:
```
green  open .kibana_task_manager_1   X9B74aaIS9KHLlPUrYLVWA 1 0 2 0 34.2kb 34.2kb
green  open .apm-agent-configuration ZXdJradmQcG-fbLFmRydKw 1 0 0 0 283b   283b
green  open .kibana_1                9uZjKicuSPqv5qUSMWes3Q 1 0 7 0 34.5kb 34.5kb
yellow open customer                 S09RAZu5R0yfA8WgHhX3tA 1 1 4 6 9.1kb  9.1kb
```
Delete the customer index.
All indices after the deletion:
```
green open .kibana_task_manager_1   X9B74aaIS9KHLlPUrYLVWA 1 0 2 0 34.2kb 34.2kb
green open .apm-agent-configuration ZXdJradmQcG-fbLFmRydKw 1 0 0 0 283b   283b
green open .kibana_1                9uZjKicuSPqv5qUSMWes3Q 1 0 7 0 34.5kb 34.5kb
```
6) Bulk operations in Elasticsearch: _bulk
Syntax:
```
{ action: { metadata }}\n
{ request body }\n
{ action: { metadata }}\n
{ request body }\n
```
In a bulk request, when one operation fails, the remaining operations still run; they are independent of each other.
The bulk API executes all actions sequentially, in order. If a single action fails for any reason, it continues processing the remaining actions after it. When the bulk API returns, it reports the status of every action (in the same order they were sent), so you can check whether a specific action failed.
Postman cannot send this newline-delimited format, so the following tests are run in Kibana.
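The bulk body is newline-delimited JSON: one action/metadata line, then (for everything except delete) one source line. A small helper, sketched under the assumption that operations arrive as (action, metadata, source) tuples; the index names in the example are taken from this section:

```python
import json

def build_bulk_body(operations):
    """Build an NDJSON _bulk body from (action, metadata, source) tuples."""
    lines = []
    for action, metadata, source in operations:
        lines.append(json.dumps({action: metadata}))
        if source is not None:              # delete actions have no source line
            lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"          # the body must end with a newline

body = build_bulk_body([
    ("index",  {"_id": "1"}, {"name": "John Doe"}),
    ("delete", {"_index": "website", "_id": "123"}, None),
])
print(body)
```

Sending this string with Content-Type: application/x-ndjson to POST /_bulk would reproduce the requests shown below.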
Example 1: index several documents
```
POST customer/external/_bulk
{"index":{"_id":"1"}}
{"name": "John Doe"}
{"index":{"_id":"2"}}
{"name": "John Doe"}
```
Execution result:
Example 2: a bulk of mixed operations across indices
```
POST /_bulk
{"delete":{"_index":"website","_type":"blog","_id":"123"}}
{"create":{"_index":"website","_type":"blog","_id":"123"}}
{"title":"my first blog post"}
{"index":{"_index":"website","_type":"blog"}}
{"title":"my second blog post"}
{"update":{"_index":"website","_type":"blog","_id":"123"}}
{"doc":{"title":"my updated blog post"}}
```
Result:
```
#! Deprecation: [types removal] Specifying types in bulk requests is deprecated.
{
  "took" : 344,
  "errors" : false,
  "items" : [
    {
      "delete" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 1,
        "result" : "not_found",
        "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
        "_seq_no" : 0,
        "_primary_term" : 1,
        "status" : 404
      }
    },
    {
      "create" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 2,
        "result" : "created",
        "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
        "_seq_no" : 1,
        "_primary_term" : 1,
        "status" : 201
      }
    },
    {
      "index" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "BMv_2H8BWhzCIFNne3Q7",
        "_version" : 1,
        "result" : "created",
        "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
        "_seq_no" : 2,
        "_primary_term" : 1,
        "status" : 201
      }
    },
    {
      "update" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 3,
        "result" : "updated",
        "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
        "_seq_no" : 3,
        "_primary_term" : 1,
        "status" : 200
      }
    }
  ]
}
```
7) Sample test data
A fictitious JSON sample of customer bank account information; every document has the following schema:
```
{
  "account_number" : 1,
  "balance" : 39225,
  "firstname" : "Amber",
  "lastname" : "Duke",
  "age" : 32,
  "gender" : "M",
  "address" : "880 Holmes Lane",
  "employer" : "Pyrami",
  "email" : "amberduke@pyrami.com",
  "city" : "Brogan",
  "state" : "IL"
}
```
Import the test data from https://github.com/zsxfa/gulimall/blob/main/es%E7%9A%84%E6%B5%8B%E8%AF%95%E6%95%B0%E6%8D%AE.json with:

```
POST bank/account/_bulk
```
3. Search
1) Search API
ES supports two basic ways of searching:
sending search parameters in the REST request URI (URI + query parameters);
sending them in a REST request body (URI + request body).
Example search:
```
GET /bank/_search
{
  "query": { "match_all": {} },
  "sort": [
    { "account_number": "asc" },
    { "balance": "desc" }
  ]
}
```
HTTP client tools such as Postman cannot attach a request body to a GET; switching to POST works the same: we POST a JSON-style query body to the _search API. Note that once the search results have been returned, Elasticsearch is completely done with the request; it does not keep any server-side resources or a cursor into the results.
(1) Only part of the data comes back (6 hits here), because results are paginated. Use from and size to control the page:
```
GET /bank/_search
{
  "query": { "match_all": {} },
  "sort": [
    { "account_number": "asc" },
    { "balance": "desc" }
  ],
  "from": 20,
  "size": 10
}
```
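The from/size pair maps directly onto page numbers: page n (1-based) with page size s starts at from = (n - 1) * s. A tiny helper, just to make the arithmetic explicit (the function name is made up):

```python
def page_params(page, size):
    """Return the from/size pair for a 1-based page number."""
    if page < 1:
        raise ValueError("page numbers are 1-based")
    return {"from": (page - 1) * size, "size": size}

# Page 3 at 10 per page is exactly the request above: from=20, size=10.
print(page_params(3, 10))   # -> {'from': 20, 'size': 10}
```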
(2) For the fields of the response in detail, see: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-search.html
The response also provides the following information about the search request:
took – how long it took Elasticsearch to run the query, in milliseconds
timed_out – whether or not the search request timed out
_shards – how many shards were searched, and a breakdown of how many shards succeeded, failed, or were skipped
max_score – the score of the most relevant document found
hits.total.value – how many matching documents were found
hits.sort – the document's sort position (when not sorting by relevance score)
hits._score – the document's relevance score (not applicable when using match_all)
2) Query DSL
(1) Basic syntax
Elasticsearch provides a JSON-style DSL for executing queries, called the Query DSL; the query language is very comprehensive.
The typical structure of a query statement:
```
QUERY_NAME: {
  ARGUMENT: VALUE,
  ARGUMENT: VALUE,
  ...
}
```
When the query targets a specific field, the structure becomes:
```
{
  QUERY_NAME: {
    FIELD_NAME: {
      ARGUMENT: VALUE,
      ARGUMENT: VALUE,
      ...
    }
  }
}
```
```
GET bank/_search
{
  "query": { "match_all": {} },
  "from": 0,
  "size": 5,
  "sort": [
    { "account_number": { "order": "desc" } }
  ]
}
```
query defines how to search;
match_all is a query type meaning "match everything"; many query types can be combined inside query to build complex searches;
besides query, other parameters can be passed to shape the result, such as sort and size;
from plus size implement pagination;
sort supports multiple fields: later fields only break ties among documents whose earlier sort fields are equal.
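The multi-field sort semantics (account_number ascending, balance descending) can be mimicked on plain dicts: later keys only decide the order among documents tied on earlier keys. The sample documents are made up.

```python
def es_style_sort(docs):
    """Sort like: [{"account_number": "asc"}, {"balance": "desc"}]."""
    # Negating balance turns "descending" into an ascending sort key.
    return sorted(docs, key=lambda d: (d["account_number"], -d["balance"]))

docs = [
    {"account_number": 2, "balance": 100},
    {"account_number": 1, "balance": 50},
    {"account_number": 1, "balance": 80},
]
ordered = es_style_sort(docs)
# The two account_number=1 docs are tie-broken by balance, highest first.
```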
(2) Return only some fields

```
GET bank/_search
{
  "query": { "match_all": {} },
  "from": 0,
  "size": 5,
  "sort": [
    { "account_number": { "order": "desc" } }
  ],
  "_source": ["balance", "firstname"]
}
```
Result:
```
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1000, "relation" : "eq" },
    "max_score" : null,
    "hits" : [
      { "_index" : "bank", "_type" : "account", "_id" : "999", "_score" : null,
        "_source" : { "firstname" : "Dorothy", "balance" : 6087 }, "sort" : [ 999 ] },
      { "_index" : "bank", "_type" : "account", "_id" : "998", "_score" : null,
        "_source" : { "firstname" : "Letha", "balance" : 16869 }, "sort" : [ 998 ] },
      { "_index" : "bank", "_type" : "account", "_id" : "997", "_score" : null,
        "_source" : { "firstname" : "Combs", "balance" : 25311 }, "sort" : [ 997 ] },
      { "_index" : "bank", "_type" : "account", "_id" : "996", "_score" : null,
        "_source" : { "firstname" : "Andrews", "balance" : 17541 }, "sort" : [ 996 ] },
      { "_index" : "bank", "_type" : "account", "_id" : "995", "_score" : null,
        "_source" : { "firstname" : "Phelps", "balance" : 21153 }, "sort" : [ 995 ] }
    ]
  }
}
```
(3) match (match query)
```
GET bank/_search
{
  "query": {
    "match": { "account_number": "20" }
  }
}
```
match returns the document with account_number=20; the 20 above could also be written without quotes.
Result:
```
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "20", "_score" : 1.0,
        "_source" : {
          "account_number" : 20, "balance" : 16418, "firstname" : "Elinor",
          "lastname" : "Ratliff", "age" : 36, "gender" : "M",
          "address" : "282 Kings Place", "employer" : "Scentric",
          "email" : "elinorratliff@scentric.com", "city" : "Ribera", "state" : "WA"
        }
      }
    ]
  }
}
```
```
GET bank/_search
{
  "query": {
    "match": { "address": "kings" }
  }
}
```
This is full-text search: the query string is analyzed (tokenized) and matched, and the results come back sorted by relevance score.
Result:
```
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 2, "relation" : "eq" },
    "max_score" : 5.990829,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "20", "_score" : 5.990829,
        "_source" : {
          "account_number" : 20, "balance" : 16418, "firstname" : "Elinor",
          "lastname" : "Ratliff", "age" : 36, "gender" : "M",
          "address" : "282 Kings Place", "employer" : "Scentric",
          "email" : "elinorratliff@scentric.com", "city" : "Ribera", "state" : "WA"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "722", "_score" : 5.990829,
        "_source" : {
          "account_number" : 722, "balance" : 27256, "firstname" : "Roberts",
          "lastname" : "Beasley", "age" : 34, "gender" : "F",
          "address" : "305 Kings Hwy", "employer" : "Quintity",
          "email" : "robertsbeasley@quintity.com", "city" : "Hayden", "state" : "PA"
        }
      }
    ]
  }
}
```
(4) match_phrase (phrase match)
The value to match is treated as one whole phrase that must appear intact, rather than its terms being matched independently.
```
GET bank/_search
{
  "query": {
    "match_phrase": { "address": "mill road" }
  }
}
```
This finds all records whose address contains the phrase "mill road", with relevance scores.
Result:
```
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 8.926605,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 8.926605,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      }
    ]
  }
}
```
To see the difference between match_phrase and match, compare the following examples:
```
GET bank/_search
{
  "query": {
    "match_phrase": { "address": "990 Mill" }
  }
}
```
Result:
```
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 10.806405,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 10.806405,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      }
    ]
  }
}
```
Using match on the keyword sub-field:
```
GET bank/_search
{
  "query": {
    "match": { "address.keyword": "990 Mill" }
  }
}
```
Result: not a single document matches.
```
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 0, "relation" : "eq" },
    "max_score" : null,
    "hits" : [ ]
  }
}
```
Change the match value to "990 Mill Road":
```
GET bank/_search
{
  "query": {
    "match": { "address.keyword": "990 Mill Road" }
  }
}
```
Now exactly one document matches:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 6.5032897,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 6.5032897,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      }
    ]
  }
}
```
Matching on the keyword sub-field compares against the field's entire value: it is an exact match.
match_phrase is a phrase match: any text that contains the phrase matches.
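The three behaviors compared above can be simulated in a few lines, assuming a simple whitespace-plus-lowercase analyzer: match needs any shared term, match_phrase needs the terms in order and adjacent, and keyword matching needs the exact unanalyzed value. These helper functions are illustrative stand-ins, not the real ES scoring machinery.

```python
def analyze(text):
    """Stand-in analyzer: lowercase and split on whitespace."""
    return text.lower().split()

def match(field_value, query):
    """match: true if the analyzed query shares any term with the field."""
    return bool(set(analyze(query)) & set(analyze(field_value)))

def match_phrase(field_value, query):
    """match_phrase: the query terms must appear in order and adjacent."""
    tokens, phrase = analyze(field_value), analyze(query)
    return any(tokens[i:i + len(phrase)] == phrase
               for i in range(len(tokens) - len(phrase) + 1))

def keyword_match(field_value, query):
    """keyword: no analysis at all, exact string equality."""
    return field_value == query

addr = "990 Mill Road"
print(match(addr, "mill"))                       # -> True
print(match_phrase(addr, "990 mill"))            # -> True
print(match_phrase(addr, "990 road"))            # -> False (not adjacent)
print(keyword_match(addr, "990 Mill"))           # -> False (partial value)
print(keyword_match(addr, "990 Mill Road"))      # -> True
```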
(5) multi_match (multi-field match)

```
GET bank/_search
{
  "query": {
    "multi_match": {
      "query": "mill",
      "fields": ["state", "address"]
    }
  }
}
```
Matches documents whose state or address contains mill; the query string is analyzed during the search.
Result:
```
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 4, "relation" : "eq" },
    "max_score" : 5.4032025,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 5.4032025,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "136", "_score" : 5.4032025,
        "_source" : {
          "account_number" : 136, "balance" : 45801, "firstname" : "Winnie",
          "lastname" : "Holland", "age" : 38, "gender" : "M",
          "address" : "198 Mill Lane", "employer" : "Neteria",
          "email" : "winnieholland@neteria.com", "city" : "Urie", "state" : "IL"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "345", "_score" : 5.4032025,
        "_source" : {
          "account_number" : 345, "balance" : 9812, "firstname" : "Parker",
          "lastname" : "Hines", "age" : 38, "gender" : "M",
          "address" : "715 Mill Avenue", "employer" : "Baluba",
          "email" : "parkerhines@baluba.com", "city" : "Blackgum", "state" : "KY"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "472", "_score" : 5.4032025,
        "_source" : {
          "account_number" : 472, "balance" : 25571, "firstname" : "Lee",
          "lastname" : "Long", "age" : 32, "gender" : "F",
          "address" : "288 Mill Street", "employer" : "Comverges",
          "email" : "leelong@comverges.com", "city" : "Movico", "state" : "MT"
        }
      }
    ]
  }
}
```
(6) bool (compound queries)
Compound clauses can combine any other query clauses, including other compound clauses, which means compound clauses can nest to express very complex logic.
must: the document must satisfy every condition listed under must.
```
GET bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } },
        { "match": { "gender": "M" } }
      ]
    }
  }
}
```
must_not: the document must not match any of the conditions listed under must_not.
should: conditions the document should preferably satisfy.
Example: query for gender=M and address=mill:
```
GET bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "gender": "M" } },
        { "match": { "address": "mill" } }
      ]
    }
  }
}
```
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 3, "relation" : "eq" },
    "max_score" : 6.0824604,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "136", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 136, "balance" : 45801, "firstname" : "Winnie",
          "lastname" : "Holland", "age" : 38, "gender" : "M",
          "address" : "198 Mill Lane", "employer" : "Neteria",
          "email" : "winnieholland@neteria.com", "city" : "Urie", "state" : "IL"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "345", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 345, "balance" : 9812, "firstname" : "Parker",
          "lastname" : "Hines", "age" : 38, "gender" : "M",
          "address" : "715 Mill Avenue", "employer" : "Baluba",
          "email" : "parkerhines@baluba.com", "city" : "Blackgum", "state" : "KY"
        }
      }
    ]
  }
}
```
must_not: must not be the specified case.
Example: gender=M and address=mill, but age not equal to 38:
```
GET bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } },
        { "match": { "gender": "M" } }
      ],
      "must_not": [
        { "match": { "age": "38" } }
      ]
    }
  }
}
```
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 6.0824604,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      }
    ]
  }
}
```
should: conditions the document should satisfy. Matching them raises the document's relevance score but does not change which documents are returned. However, if the query contains only should with a single rule, that condition becomes the effective match condition and does change the result set.
Example: lastname should equal Wallace:
```
GET bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } },
        { "match": { "gender": "M" } }
      ],
      "must_not": [
        { "match": { "age": "18" } }
      ],
      "should": [
        { "match": { "lastname": "Wallace" } }
      ]
    }
  }
}
```
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 3, "relation" : "eq" },
    "max_score" : 12.585751,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 12.585751,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "136", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 136, "balance" : 45801, "firstname" : "Winnie",
          "lastname" : "Holland", "age" : 38, "gender" : "M",
          "address" : "198 Mill Lane", "employer" : "Neteria",
          "email" : "winnieholland@neteria.com", "city" : "Urie", "state" : "IL"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "345", "_score" : 6.0824604,
        "_source" : {
          "account_number" : 345, "balance" : 9812, "firstname" : "Parker",
          "lastname" : "Hines", "age" : 38, "gender" : "M",
          "address" : "715 Mill Avenue", "employer" : "Baluba",
          "email" : "parkerhines@baluba.com", "city" : "Blackgum", "state" : "KY"
        }
      }
    ]
  }
}
```
The more relevant a document is, the higher its score.
(7) filter (result filtering)
Not every query needs to produce a score, particularly clauses used only for filtering documents. To avoid computing scores, Elasticsearch automatically detects these situations and optimizes query execution.
```
GET bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } }
      ],
      "filter": {
        "range": {
          "balance": { "gte": "10000", "lte": "20000" }
        }
      }
    }
  }
}
```
This first finds all documents matching address=mill, then filters the result down to those with 10000 <= balance <= 20000.
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 5.4032025,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "970", "_score" : 5.4032025,
        "_source" : {
          "account_number" : 970, "balance" : 19648, "firstname" : "Forbes",
          "lastname" : "Wallace", "age" : 28, "gender" : "M",
          "address" : "990 Mill Road", "employer" : "Pheast",
          "email" : "forbeswallace@pheast.com", "city" : "Lopezo", "state" : "AK"
        }
      }
    ]
  }
}
```
Each must, should, and must_not element in a Boolean query is referred to as a query clause. How well a document meets the criteria in each must or should clause contributes to the document's relevance score. The higher the score, the better the document matches your search criteria. By default, Elasticsearch returns documents ranked by these relevance scores.
The criteria in a must_not clause is treated as a filter. It affects whether or not the document is included in the results, but does not contribute to how documents are scored. You can also explicitly specify arbitrary filters to include or exclude documents based on structured data.
filter never computes a relevance score:
```
GET bank/_search
{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "balance": { "gte": "10000", "lte": "20000" }
        }
      }
    }
  }
}
```
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 213, "relation" : "eq" },
    "max_score" : 0.0,
    "hits" : [
      {
        "_index" : "bank", "_type" : "account", "_id" : "20", "_score" : 0.0,
        "_source" : {
          "account_number" : 20, "balance" : 16418, "firstname" : "Elinor",
          "lastname" : "Ratliff", "age" : 36, "gender" : "M",
          "address" : "282 Kings Place", "employer" : "Scentric",
          "email" : "elinorratliff@scentric.com", "city" : "Ribera", "state" : "WA"
        }
      },
      {
        "_index" : "bank", "_type" : "account", "_id" : "37", "_score" : 0.0,
        "_source" : {
          "account_number" : 37, "balance" : 18612, "firstname" : "Mcgee",
          "lastname" : "Mooney", "age" : 39, "gender" : "M",
          "address" : "826 Fillmore Place", "employer" : "Reversus",
          "email" : "mcgeemooney@reversus.com", "city" : "Tooleville", "state" : "OK"
        }
      },
      ......
```
Every document comes back with "_score" : 0.0.
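The query-context vs filter-context split can be sketched as follows: must clauses both gate inclusion and add to the score, while filter clauses only gate inclusion. The clause scores below are arbitrary stand-ins, not the real BM25 computation, and the whole evaluator is hypothetical.

```python
def evaluate(doc, must, filters):
    """Return the doc's score, or None if any clause excludes it."""
    score = 0.0
    for clause in must:
        s = clause(doc)            # a must clause returns a score, or None
        if s is None:
            return None
        score += s
    for clause in filters:
        if not clause(doc):        # filter clauses contribute no score
            return None
    return score

contains_mill = lambda d: 5.4 if "mill" in d["address"].lower() else None
balance_range = lambda d: 10000 <= d["balance"] <= 20000

doc = {"address": "990 Mill Road", "balance": 19648}
scored = evaluate(doc, [contains_mill], [balance_range])   # 5.4 from must
filter_only = evaluate(doc, [], [balance_range])           # 0.0, like "_score": 0.0
```

With an empty must list, every included document scores 0.0, which is exactly what the filter-only response above shows.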
(8) term
Like match, term matches a field against a value, but: use match for full-text (text) fields, and term for exact matches on non-text fields.
Avoid using the term query for text fields. By default, Elasticsearch changes the values of text fields as part of analysis. This can make finding exact matches for text field values difficult. To search text field values, use the match query instead.
https://www.elastic.co/guide/en/elasticsearch/reference/7.6/query-dsl-term-query.html
Querying with term:
```
GET bank/_search
{
  "query": {
    "term": { "address": "mill Road" }
  }
}
```
Result:
```
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 0, "relation" : "eq" },
    "max_score" : null,
    "hits" : [ ]
  }
}
```
Not a single document matched.
Switching to a match query, 32 documents match.
In other words: use match for full-text (text) fields, and term for exact matches on non-text fields.
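Why does term find nothing here? The indexed text field was analyzed into lowercase terms ("990", "mill", "road"), but a term query compares its value verbatim against single stored terms, so "mill Road" matches none of them. A simplified whitespace/lowercase analyzer illustrates this; it is a sketch, not the real ES analyzer:

```python
def analyze(text):
    """Stand-in for ES's default text analysis: lowercase + split."""
    return text.lower().split()

indexed_terms = analyze("990 Mill Road")   # what actually ends up in the index

def term_query(terms, value):
    """term: the value is NOT analyzed; it must equal one stored term."""
    return value in terms

def match_query(terms, value):
    """match: the value IS analyzed, then its terms are matched."""
    return bool(set(analyze(value)) & set(terms))

print(term_query(indexed_terms, "mill Road"))   # -> False (no such single term)
print(match_query(indexed_terms, "mill Road"))  # -> True ("mill" and "road" match)
```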
(9) Aggregations
Aggregations provide the ability to group data and extract statistics from it. The simplest aggregations are roughly equivalent to SQL GROUP BY plus SQL aggregate functions. In Elasticsearch, one search can return the hits and the aggregation results at the same time, cleanly separated in the response. This is powerful and efficient: you can run a query and multiple aggregations and get all of their results back in one go, through a concise, simplified API that avoids extra network round trips.
"size": 0 suppresses the hit documents in the response; aggs defines the aggregations to run. The aggregation syntax is:
```
"aggs": {
  "<name of this aggregation, shown in the result>": {
    "<aggregation type (avg, terms, ...)>": {}
  }
}
```
Find the age distribution and the average age of everyone whose address contains mill, without returning their documents:
```
GET bank/_search
{
  "query": {
    "match": { "address": "Mill" }
  },
  "aggs": {
    "ageAgg": {
      "terms": {
        "field": "age",
        "size": 10    # even if there were 100 distinct ages, return only 10 buckets
      }
    },
    "ageAvg": {
      "avg": { "field": "age" }
    },
    "balanceAvg": {
      "avg": { "field": "balance" }
    }
  },
  "size": 0
}
```
Result:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 4, "relation" : "eq" },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "ageAgg" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        { "key" : 38, "doc_count" : 2 },
        { "key" : 28, "doc_count" : 1 },
        { "key" : 32, "doc_count" : 1 }
      ]
    },
    "ageAvg" : { "value" : 34.0 },
    "balanceAvg" : { "value" : 25208.0 }
  }
}
```
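A terms aggregation with a sibling avg is essentially the following group-by: bucket documents by a field, count each bucket, and average another field. A plain-Python sketch on made-up accounts:

```python
def terms_agg(docs, bucket_field, avg_field):
    """Group docs by bucket_field; return buckets with count and average."""
    buckets = {}
    for doc in docs:
        buckets.setdefault(doc[bucket_field], []).append(doc[avg_field])
    # Like ES, order buckets by doc_count, largest first.
    return [
        {"key": key, "doc_count": len(vals), "avg": sum(vals) / len(vals)}
        for key, vals in sorted(buckets.items(),
                                key=lambda kv: len(kv[1]), reverse=True)
    ]

docs = [
    {"age": 38, "balance": 100},
    {"age": 38, "balance": 300},
    {"age": 28, "balance": 200},
]
result = terms_agg(docs, "age", "balance")
# First bucket: key 38, doc_count 2, avg 200.0
```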
More complex: bucket by age, then compute the average balance within each age bucket:
```
GET bank/_search
{
  "query": { "match_all": {} },
  "aggs": {
    "ageAgg": {
      "terms": {
        "field": "age",
        "size": 100
      },
      "aggs": {
        "ageAvg": {
          "avg": { "field": "balance" }
        }
      }
    }
  },
  "size": 0
}
```
Output:
```
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1000, "relation" : "eq" },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "ageAgg" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        { "key" : 31, "doc_count" : 61, "ageAvg" : { "value" : 28312.918032786885 } },
        { "key" : 39, "doc_count" : 60, "ageAvg" : { "value" : 25269.583333333332 } },
        { "key" : 26, "doc_count" : 59, "ageAvg" : { "value" : 23194.813559322032 } },
        { "key" : 32, "doc_count" : 52, "ageAvg" : { "value" : 23951.346153846152 } },
        { "key" : 35, "doc_count" : 52, "ageAvg" : { "value" : 22136.69230769231 } },
        { "key" : 36, "doc_count" : 52, "ageAvg" : { "value" : 22174.71153846154 } },
        { "key" : 22, "doc_count" : 51, "ageAvg" : { "value" : 24731.07843137255 } },
        { "key" : 28, "doc_count" : 51, "ageAvg" : { "value" : 28273.882352941175 } },
        { "key" : 33, "doc_count" : 50, "ageAvg" : { "value" : 25093.94 } },
        { "key" : 34, "doc_count" : 49, "ageAvg" : { "value" : 26809.95918367347 } },
        { "key" : 30, "doc_count" : 47, "ageAvg" : { "value" : 22841.106382978724 } },
        { "key" : 21, "doc_count" : 46, "ageAvg" : { "value" : 26981.434782608696 } },
        { "key" : 40, "doc_count" : 45, "ageAvg" : { "value" : 27183.17777777778 } },
        { "key" : 20, "doc_count" : 44, "ageAvg" : { "value" : 27741.227272727272 } },
        { "key" : 23, "doc_count" : 42, "ageAvg" : { "value" : 27314.214285714286 } },
        { "key" : 24, "doc_count" : 42, "ageAvg" : { "value" : 28519.04761904762 } },
        { "key" : 25, "doc_count" : 42, "ageAvg" : { "value" : 27445.214285714286 } },
        { "key" : 37, "doc_count" : 42, "ageAvg" : { "value" : 27022.261904761905 } },
        { "key" : 27, "doc_count" : 39, "ageAvg" : { "value" : 21471.871794871793 } },
        { "key" : 38, "doc_count" : 39, "ageAvg" : { "value" : 26187.17948717949 } },
        { "key" : 29, "doc_count" : 35, "ageAvg" : { "value" : 29483.14285714286 } }
      ]
    }
  }
}
```
查出所有年龄分布,以及这些年龄段中 M 的平均薪资、F 的平均薪资和该年龄段的总体平均薪资
GET bank/_search
{
  "query": { "match_all": {} },
  "aggs": {
    "ageAgg": {
      "terms": { "field": "age", "size": 100 },
      "aggs": {
        "genderAgg": {
          "terms": { "field": "gender.keyword" },
          "aggs": {
            "balanceAvg": { "avg": { "field": "balance" } }
          }
        },
        "ageBalanceAvg": { "avg": { "field": "balance" } }
      }
    }
  },
  "size": 0
}
输出结果:
{
  "took": 1,
  "timed_out": false,
  "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 },
  "hits": { "total": { "value": 1000, "relation": "eq" }, "max_score": null, "hits": [] },
  "aggregations": {
    "ageAgg": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {"key": 31, "doc_count": 61, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 35, "balanceAvg": {"value": 29565.628571428573}}, {"key": "F", "doc_count": 26, "balanceAvg": {"value": 26626.576923076922}}]}, "ageBalanceAvg": {"value": 28312.918032786885}},
        {"key": 39, "doc_count": 60, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 38, "balanceAvg": {"value": 26348.684210526317}}, {"key": "M", "doc_count": 22, "balanceAvg": {"value": 23405.68181818182}}]}, "ageBalanceAvg": {"value": 25269.583333333332}},
        {"key": 26, "doc_count": 59, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 32, "balanceAvg": {"value": 25094.78125}}, {"key": "F", "doc_count": 27, "balanceAvg": {"value": 20943.0}}]}, "ageBalanceAvg": {"value": 23194.813559322032}},
        {"key": 32, "doc_count": 52, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 28, "balanceAvg": {"value": 22941.964285714286}}, {"key": "F", "doc_count": 24, "balanceAvg": {"value": 25128.958333333332}}]}, "ageBalanceAvg": {"value": 23951.346153846152}},
        {"key": 35, "doc_count": 52, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 28, "balanceAvg": {"value": 24226.321428571428}}, {"key": "F", "doc_count": 24, "balanceAvg": {"value": 19698.791666666668}}]}, "ageBalanceAvg": {"value": 22136.69230769231}},
        {"key": 36, "doc_count": 52, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 31, "balanceAvg": {"value": 20884.677419354837}}, {"key": "F", "doc_count": 21, "balanceAvg": {"value": 24079.04761904762}}]}, "ageBalanceAvg": {"value": 22174.71153846154}},
        {"key": 22, "doc_count": 51, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 27, "balanceAvg": {"value": 22152.74074074074}}, {"key": "M", "doc_count": 24, "balanceAvg": {"value": 27631.708333333332}}]}, "ageBalanceAvg": {"value": 24731.07843137255}},
        {"key": 28, "doc_count": 51, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 31, "balanceAvg": {"value": 27076.8064516129}}, {"key": "M", "doc_count": 20, "balanceAvg": {"value": 30129.35}}]}, "ageBalanceAvg": {"value": 28273.882352941175}},
        {"key": 33, "doc_count": 50, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 26, "balanceAvg": {"value": 26437.615384615383}}, {"key": "M", "doc_count": 24, "balanceAvg": {"value": 23638.291666666668}}]}, "ageBalanceAvg": {"value": 25093.94}},
        {"key": 34, "doc_count": 49, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 30, "balanceAvg": {"value": 26039.166666666668}}, {"key": "M", "doc_count": 19, "balanceAvg": {"value": 28027.0}}]}, "ageBalanceAvg": {"value": 26809.95918367347}},
        {"key": 30, "doc_count": 47, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 25, "balanceAvg": {"value": 25316.16}}, {"key": "M", "doc_count": 22, "balanceAvg": {"value": 20028.545454545456}}]}, "ageBalanceAvg": {"value": 22841.106382978724}},
        {"key": 21, "doc_count": 46, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 24, "balanceAvg": {"value": 28210.916666666668}}, {"key": "M", "doc_count": 22, "balanceAvg": {"value": 25640.18181818182}}]}, "ageBalanceAvg": {"value": 26981.434782608696}},
        {"key": 40, "doc_count": 45, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 24, "balanceAvg": {"value": 26474.958333333332}}, {"key": "F", "doc_count": 21, "balanceAvg": {"value": 27992.571428571428}}]}, "ageBalanceAvg": {"value": 27183.17777777778}},
        {"key": 20, "doc_count": 44, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 27, "balanceAvg": {"value": 29047.444444444445}}, {"key": "F", "doc_count": 17, "balanceAvg": {"value": 25666.647058823528}}]}, "ageBalanceAvg": {"value": 27741.227272727272}},
        {"key": 23, "doc_count": 42, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 24, "balanceAvg": {"value": 27730.75}}, {"key": "F", "doc_count": 18, "balanceAvg": {"value": 26758.833333333332}}]}, "ageBalanceAvg": {"value": 27314.214285714286}},
        {"key": 24, "doc_count": 42, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 23, "balanceAvg": {"value": 29414.521739130436}}, {"key": "M", "doc_count": 19, "balanceAvg": {"value": 27435.052631578947}}]}, "ageBalanceAvg": {"value": 28519.04761904762}},
        {"key": 25, "doc_count": 42, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 23, "balanceAvg": {"value": 29336.08695652174}}, {"key": "F", "doc_count": 19, "balanceAvg": {"value": 25156.263157894737}}]}, "ageBalanceAvg": {"value": 27445.214285714286}},
        {"key": 37, "doc_count": 42, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 23, "balanceAvg": {"value": 25015.739130434784}}, {"key": "F", "doc_count": 19, "balanceAvg": {"value": 29451.21052631579}}]}, "ageBalanceAvg": {"value": 27022.261904761905}},
        {"key": 27, "doc_count": 39, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 21, "balanceAvg": {"value": 21618.85714285714}}, {"key": "M", "doc_count": 18, "balanceAvg": {"value": 21300.38888888889}}]}, "ageBalanceAvg": {"value": 21471.871794871793}},
        {"key": 38, "doc_count": 39, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "F", "doc_count": 20, "balanceAvg": {"value": 27931.65}}, {"key": "M", "doc_count": 19, "balanceAvg": {"value": 24350.894736842107}}]}, "ageBalanceAvg": {"value": 26187.17948717949}},
        {"key": 29, "doc_count": 35, "genderAgg": {"doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [{"key": "M", "doc_count": 23, "balanceAvg": {"value": 29943.17391304348}}, {"key": "F", "doc_count": 12, "balanceAvg": {"value": 28601.416666666668}}]}, "ageBalanceAvg": {"value": 29483.14285714286}}
      ]
    }
  }
}
3)Mapping
(1)字段类型
核心类型:
字符串(string): text,keyword
数字类型(Numeric):long,integer,short,byte,double,float,half_float,scaled_float
日期类型(Date): date
布尔类型(Boolean): boolean
二进制类型(binary): binary
复合类型:
数组类型(Array): Array不需要专门的类型来支持,任何字段默认都可以包含多个值,但数组中的所有值必须是同一数据类型
对象类型(Object): object用于单个JSON对象
嵌套类型(Nested): nested用于JSON对象数组
地理类型(Geo)
地理坐标(Geo-points): geo_point用于描述 经纬度坐标
地理图形(Geo-Shape): geo_shape用于描述复杂形状,如多边形
特定类型:
IP类型:ip用于描述ipv4和ipv6地址
补全类型(Completion):completion提供自动完成提示
令牌计数类型(Token count):token_count用于统计字符串中的词条数量
附件类型(attachment):参考mapper-attachments插件,支持将附件如Microsoft Office格式、Open Document格式、ePub、HTML等索引为attachment数据类型。
抽取类型(Percolator):接受特定领域查询语言(query-dsl)的查询
多字段:
通常用于以不同的方式索引同一个字段。例如,string字段可以映射为一个text字段用于全文检索,同时映射为一个keyword字段用于排序和聚合。另外,你可以分别使用standard analyzer、english analyzer、french analyzer来索引同一个text字段。
这就是multi-fields的目的,大多数的数据类型通过fields参数来支持multi-fields。
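例如,下面的映射把一个字段同时索引为 text 与 keyword 两种类型(索引名和字段名仅为示意,非原文内容):

```
PUT /my_index_demo
{
  "mappings": {
    "properties": {
      "city": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword" }
        }
      }
    }
  }
}
```

检索时用 city 做全文匹配,用 city.raw 做排序和聚合。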
(2)映射 Mapping
Mapping(映射)是用来定义一个文档(document),以及它所包含的属性(field)是如何存储和索引的。比如:使用mapping来定义字段的数据类型、使用哪种分词器等。
(3)新版本改变
ElasticSearch7-去掉type概念
关系型数据库中两个数据表是独立的,即使它们里面有相同名称的列也不影响使用,但ES中不是这样的。elasticsearch是基于Lucene开发的搜索引擎,而ES中不同type下名称相同的field最终在Lucene中的处理方式是一样的。
两个不同type下的两个user_name,在ES同一个索引下其实被认为是同一个field,你必须在两个不同的type中定义相同的field映射。否则,不同type中的相同字段名称就会在处理中出现冲突的情况,导致Lucene处理效率下降。
去掉type就是为了提高ES处理数据的效率。
Elasticsearch 7.x URL中的type参数为可选。比如,索引一个文档不再要求提供文档类型。
Elasticsearch 8.x 不再支持URL中的type参数。
解决: 将索引从多类型迁移到单类型,每种类型文档一个独立索引
将已存在的索引下的类型数据,全部迁移到指定位置即可。详见数据迁移
Elasticsearch 7.x
Specifying types in requests is deprecated. For instance, indexing a document no longer requires a document type. The new index APIs are PUT {index}/_doc/{id} in case of explicit ids and POST {index}/_doc for auto-generated ids. Note that in 7.0, _doc is a permanent part of the path, and represents the endpoint name rather than the document type.
The include_type_name parameter in the index creation, index template, and mapping APIs will default to false. Setting the parameter at all will result in a deprecation warning.
The _default_ mapping type is removed.
Elasticsearch 8.x
Specifying types in requests is no longer supported.
The include_type_name parameter is removed.
创建映射 创建索引并指定映射
PUT /my_index
{
  "mappings": {
    "properties": {
      "age": { "type": "integer" },
      "email": { "type": "keyword" },
      "name": { "type": "text" }
    }
  }
}
输出:
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "my_index"
}
查看映射
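对应的请求为:

```
GET /my_index/_mapping
```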
输出结果:
{
  "my_index": {
    "aliases": {},
    "mappings": {
      "properties": {
        "age": { "type": "integer" },
        "email": { "type": "keyword" },
        "name": { "type": "text" }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1648691594218",
        "number_of_shards": "1",
        "number_of_replicas": "1",
        "uuid": "2gowX9taSjmvYBz-OaDILQ",
        "version": { "created": "7060299" },
        "provided_name": "my_index"
      }
    }
  }
}
添加新的字段映射
PUT /my_index/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false
    }
  }
}
这里的 "index": false 表明新增的字段不会被索引、不能被检索,只作为冗余字段保存。
更新映射 对于已经存在的字段映射,我们不能更新。更新必须创建新的索引,进行数据迁移。
数据迁移 先创建new_twitter的正确映射。然后使用如下方式进行数据迁移。
POST _reindex [固定写法]
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitters"
  }
}
将旧索引的type下的数据进行迁移
POST _reindex [固定写法]
{
  "source": {
    "index": "twitter",
    "type": "twitter"
  },
  "dest": {
    "index": "new_twitters"
  }
}
更多详情见: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/docs-reindex.html
GET /bank/_search 的返回结果(部分):
{
  "took": 0,
  "timed_out": false,
  "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 },
  "hits": {
    "total": { "value": 1000, "relation": "eq" },
    "max_score": 1.0,
    "hits": [
      {
        "_index": "bank",
        "_type": "account",
        "_id": "1",
        "_score": 1.0,
        "_source": {
          "account_number": 1,
          "balance": 39225,
          "firstname": "Amber",
          "lastname": "Duke",
          "age": 32,
          "gender": "M",
          "address": "880 Holmes Lane",
          "employer": "Pyrami",
          "email": "amberduke@pyrami.com",
          "city": "Brogan",
          "state": "IL"
        }
      },
      ...
想要将年龄修改为integer
PUT /newbank
{
  "mappings": {
    "properties": {
      "account_number": { "type": "long" },
      "address": { "type": "text" },
      "age": { "type": "integer" },
      "balance": { "type": "long" },
      "city": { "type": "keyword" },
      "email": { "type": "keyword" },
      "employer": { "type": "keyword" },
      "firstname": { "type": "text" },
      "gender": { "type": "keyword" },
      "lastname": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      },
      "state": { "type": "keyword" }
    }
  }
}
查看“newbank”的映射:
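使用如下请求:

```
GET /newbank/_mapping
```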
能够看到age的映射类型被修改为了integer.
将bank中的数据迁移到newbank中
POST _reindex
{
  "source": {
    "index": "bank",
    "type": "account"
  },
  "dest": {
    "index": "newbank"
  }
}
运行输出:
#! Deprecation: [types removal] Specifying types in reindex requests is deprecated.
{
  "took": 768,
  "timed_out": false,
  "total": 1000,
  "updated": 0,
  "created": 1000,
  "deleted": 0,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": { "bulk": 0, "search": 0 },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures": []
}
查看newbank中的数据
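对应的请求为:

```
GET /newbank/_search
```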
4)分词
一个tokenizer(分词器)接收一个字符流,将之分割为独立的tokens(词元,通常是独立的单词),然后输出tokens流。
例如:whitespace tokenizer遇到空白字符时分割文本。它会将文本“Quick brown fox!”分割为[Quick,brown,fox!]。
该tokenizer(分词器)还负责记录各个terms(词条)的顺序或position位置(用于phrase短语和word proximity词近邻查询),以及term(词条)所代表的原始word(单词)的start(起始)和end(结束)的character offsets(字符串偏移量)(用于高亮显示搜索的内容)。
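以上面提到的 whitespace tokenizer 为例,可以直接用 _analyze 接口观察分词结果以及各词条的 position 和 offset 信息:

```
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "Quick brown fox!"
}
```

返回的 tokens 依次为 Quick、brown、fox!,每个词条都带有 start_offset、end_offset 和 position。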
elasticsearch提供了很多内置的分词器,可以用来构建custom analyzers(自定义分词器)。
关于分词器: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/analysis.html
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
执行结果:
{
  "tokens": [
    { "token": "the", "start_offset": 0, "end_offset": 3, "type": "<ALPHANUM>", "position": 0 },
    { "token": "2", "start_offset": 4, "end_offset": 5, "type": "<NUM>", "position": 1 },
    { "token": "quick", "start_offset": 6, "end_offset": 11, "type": "<ALPHANUM>", "position": 2 },
    { "token": "brown", "start_offset": 12, "end_offset": 17, "type": "<ALPHANUM>", "position": 3 },
    { "token": "foxes", "start_offset": 18, "end_offset": 23, "type": "<ALPHANUM>", "position": 4 },
    { "token": "jumped", "start_offset": 24, "end_offset": 30, "type": "<ALPHANUM>", "position": 5 },
    { "token": "over", "start_offset": 31, "end_offset": 35, "type": "<ALPHANUM>", "position": 6 },
    { "token": "the", "start_offset": 36, "end_offset": 39, "type": "<ALPHANUM>", "position": 7 },
    { "token": "lazy", "start_offset": 40, "end_offset": 44, "type": "<ALPHANUM>", "position": 8 },
    { "token": "dog's", "start_offset": 45, "end_offset": 50, "type": "<ALPHANUM>", "position": 9 },
    { "token": "bone", "start_offset": 51, "end_offset": 55, "type": "<ALPHANUM>", "position": 10 }
  ]
}
1)安装ik分词器
所有的语言分词,默认使用的都是“Standard Analyzer”,但是这些分词器针对于中文的分词,并不友好。为此需要安装中文的分词器。
注意:不能用默认的 elasticsearch-plugin install xxx.zip 进行自动安装,需到 https://github.com/medcl/elasticsearch-analysis-ik/releases 下载与es版本对应的版本安装。
在前面安装elasticsearch时,我们已经将elasticsearch容器的“/usr/share/elasticsearch/plugins”目录,映射到宿主机的“/mydata/elasticsearch/plugins”目录下,所以比较方便的做法就是下载“elasticsearch-analysis-ik-7.6.2.zip”文件,然后解压到该文件夹下即可。安装完毕后,需要重启elasticsearch容器。
如果不嫌麻烦,还可以采用如下的方式。
a、查看elasticsearch版本号:
[root@hadoop-104 ~]# curl http://localhost:9200
{
  "name" : "0adeb7852e00",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "9gglpP0HTfyOTRAaSe2rIg",
  "version" : {
    "number" : "7.6.2",        # 版本号为7.6.2
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@hadoop-104 ~]#
b、进入es容器内部plugin目录
docker exec -it 容器id /bin/bash
[root@hadoop-104 ~]# docker exec -it elasticsearch /bin/bash
[root@0adeb7852e00 elasticsearch]#

[root@0adeb7852e00 elasticsearch]# pwd
/usr/share/elasticsearch
# 下载ik7.6.2
[root@0adeb7852e00 elasticsearch]# wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.2/elasticsearch-analysis-ik-7.6.2.zip

[root@0adeb7852e00 elasticsearch]# unzip elasticsearch-analysis-ik-7.6.2.zip -d ik
Archive:  elasticsearch-analysis-ik-7.6.2.zip
   creating: ik/config/
  inflating: ik/config/main.dic
  inflating: ik/config/quantifier.dic
  inflating: ik/config/extra_single_word_full.dic
  inflating: ik/config/IKAnalyzer.cfg.xml
  inflating: ik/config/surname.dic
  inflating: ik/config/suffix.dic
  inflating: ik/config/stopword.dic
  inflating: ik/config/extra_main.dic
  inflating: ik/config/extra_stopword.dic
  inflating: ik/config/preposition.dic
  inflating: ik/config/extra_single_word_low_freq.dic
  inflating: ik/config/extra_single_word.dic
  inflating: ik/elasticsearch-analysis-ik-7.6.2.jar
  inflating: ik/httpclient-4.5.2.jar
  inflating: ik/httpcore-4.4.4.jar
  inflating: ik/commons-logging-1.2.jar
  inflating: ik/commons-codec-1.9.jar
  inflating: ik/plugin-descriptor.properties
  inflating: ik/plugin-security.policy
[root@0adeb7852e00 elasticsearch]# chmod -R 777 ik/
# 移动到plugins目录下
[root@0adeb7852e00 elasticsearch]# mv ik plugins/

[root@0adeb7852e00 elasticsearch]# rm -rf elasticsearch-analysis-ik-7.6.2.zip
确认是否安装好了分词器,进入到bin目录中执行
elasticsearch-plugin list
2)测试分词器
使用默认分词器
GET my_index/_analyze
{
  "text": "我是中国人"
}
请观察执行结果:
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "<IDEOGRAPHIC>", "position": 0 },
    { "token": "是", "start_offset": 1, "end_offset": 2, "type": "<IDEOGRAPHIC>", "position": 1 },
    { "token": "中", "start_offset": 2, "end_offset": 3, "type": "<IDEOGRAPHIC>", "position": 2 },
    { "token": "国", "start_offset": 3, "end_offset": 4, "type": "<IDEOGRAPHIC>", "position": 3 },
    { "token": "人", "start_offset": 4, "end_offset": 5, "type": "<IDEOGRAPHIC>", "position": 4 }
  ]
}
使用ik_smart分词器
GET my_index/_analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}
输出结果:
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
    { "token": "是", "start_offset": 1, "end_offset": 2, "type": "CN_CHAR", "position": 1 },
    { "token": "中国人", "start_offset": 2, "end_offset": 5, "type": "CN_WORD", "position": 2 }
  ]
}
使用ik_max_word分词器
GET my_index/_analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
输出结果:
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
    { "token": "是", "start_offset": 1, "end_offset": 2, "type": "CN_CHAR", "position": 1 },
    { "token": "中国人", "start_offset": 2, "end_offset": 5, "type": "CN_WORD", "position": 2 },
    { "token": "中国", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 3 },
    { "token": "国人", "start_offset": 3, "end_offset": 5, "type": "CN_WORD", "position": 4 }
  ]
}
3)对ES进行设置
由于之前为Linux分配的内存太小,所以首先需要对虚拟机内存进行扩容;然后需要删除原有的ES容器,重新创建一个新的。
[root@localhost ~]# docker ps
1e3900cda632 elasticsearch:7.6.2 "/usr/local/bin/dock…" ...
[root@localhost ~]# docker stop 1e3
[root@localhost ~]# docker rm 1e3
[root@localhost ~]# docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms64m -Xmx512m" \
  -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.6.2
4)自定义词库
首先看第5部分附录的安装Nginx部分。
修改/mydata/elasticsearch/plugins/ik/config中的IKAnalyzer.cfg.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
	<comment>IK Analyzer 扩展配置</comment>
	<entry key="ext_dict"></entry>
	<entry key="ext_stopwords"></entry>
	<entry key="remote_ext_dict">http://#/es/fenci.txt</entry>
</properties>
原来的xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
	<comment>IK Analyzer 扩展配置</comment>
	<entry key="ext_dict"></entry>
	<entry key="ext_stopwords"></entry>
</properties>
修改完成后,需要重启elasticsearch容器,否则修改不生效。
[root@localhost config]# docker restart elasticsearch
更新完成后,es只会对新增的数据使用新的分词规则,历史数据是不会重新分词的。如果想要历史数据重新分词,需要执行:
POST my_index/_update_by_query?conflicts=proceed
http://#/es/fenci.txt,这个是nginx上资源的访问路径
在运行下面实例之前,需要安装nginx(安装方法见安装nginx),然后创建“fenci.txt”文件,内容如下:
echo "乔碧萝" > /mydata/nginx/html/fenci.txt
测试效果:
GET my_index/_analyze
{
  "analyzer": "ik_max_word",
  "text": "乔碧萝殿下"
}
输出结果:
{
  "tokens": [
    { "token": "乔碧萝", "start_offset": 0, "end_offset": 3, "type": "CN_WORD", "position": 0 },
    { "token": "殿下", "start_offset": 3, "end_offset": 5, "type": "CN_WORD", "position": 1 }
  ]
}
4、elasticsearch-Rest-Client
1)9300:TCP
spring-data-elasticsearch:transport-api.jar;
springboot版本不同,transport-api.jar不同,不能适配es版本;
7.x已经不建议使用,8以后就要废弃。
2)9200:HTTP
5、附录:安装Nginx
随便启动一个nginx实例,只是为了复制出配置
[root@localhost mydata]# docker run -p 80:80 --name nginx -d nginx:1.10
将容器内的配置文件拷贝到/mydata/nginx/conf/ 下
[root@localhost mydata]# docker container cp nginx:/etc/nginx .
# 由于拷贝完成后文件会存在nginx文件夹,这里将nginx文件夹的名字改为conf
[root@localhost mydata]# mv nginx conf
# 再次创建一个nginx文件夹
[root@localhost mydata]# mkdir nginx
[root@localhost mydata]# mv conf nginx/
终止原容器:
docker stop nginx
执行命令删除原容器:
docker rm nginx
创建新的Nginx,执行以下命令
docker run -p 80:80 --name nginx \
  -v /mydata/nginx/html:/usr/share/nginx/html \
  -v /mydata/nginx/logs:/var/log/nginx \
  -v /mydata/nginx/conf/:/etc/nginx \
  -d nginx:1.10
设置开机启动nginx
docker update nginx --restart=always
创建“/mydata/nginx/html/index.html”文件,测试是否能够正常访问
echo '<h2>hello nginx!</h2>' > index.html
访问:http://nginx所在主机的IP:80/index.html
SpringBoot整合ElasticSearch
1、导入依赖
这里的版本要和所安装的Elasticsearch版本匹配。
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.2</version>
</dependency>
在spring-boot-dependencies中所依赖的Elasticsearch版本为6.8.7
<elasticsearch.version>6.8.7</elasticsearch.version>
需要在项目中将它改为7.6.2
<properties>
    ...
    <elasticsearch.version>7.6.2</elasticsearch.version>
</properties>
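后文测试代码中用到的 client 和 GulimallElasticSearchConfig.COMMON_OPTIONS 来自一个配置类。下面是一个按官方文档写法给出的参考实现(主机地址 192.168.56.10 为假设值,请替换为实际的 ES 地址):

```java
@Configuration
public class GulimallElasticSearchConfig {

    // 通用请求选项,可在此统一设置请求头、超时等;这里直接使用默认值
    public static final RequestOptions COMMON_OPTIONS = RequestOptions.DEFAULT;

    @Bean
    public RestHighLevelClient esRestClient() {
        // 192.168.56.10 为假设的 ES 主机地址
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.56.10", 9200, "http")));
    }
}
```

测试类中注入该 Bean 即可获得 client。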
2、编写测试类
1)测试保存数据
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-document-index.html
@Test
public void indexData() throws IOException {
    IndexRequest indexRequest = new IndexRequest("users");
    User user = new User();
    user.setUserName("张三");
    user.setAge(20);
    user.setGender("男");
    String jsonString = JSON.toJSONString(user);
    // 要保存的内容,指定数据格式为 JSON
    indexRequest.source(jsonString, XContentType.JSON);
    // 执行保存操作
    IndexResponse index = client.index(indexRequest, GulimallElasticSearchConfig.COMMON_OPTIONS);
    System.out.println(index);
}
测试后:
2)测试获取数据
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-search.html
@Test
public void searchData() throws IOException {
    // 文档 id 为之前保存时自动生成的 id
    GetRequest getRequest = new GetRequest("users", "_-2vAHIB0nzmLJLkxKWk");
    GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
    System.out.println(getResponse);
    String index = getResponse.getIndex();
    System.out.println(index);
    String id = getResponse.getId();
    System.out.println(id);
    if (getResponse.isExists()) {
        long version = getResponse.getVersion();
        System.out.println(version);
        String sourceAsString = getResponse.getSourceAsString();
        System.out.println(sourceAsString);
        Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();
        System.out.println(sourceAsMap);
        byte[] sourceAsBytes = getResponse.getSourceAsBytes();
    } else {
        // 文档不存在时的处理
    }
}
查询state=”AK”的文档:
{
  "took": 1,
  "timed_out": false,
  "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 },
  "hits": {
    "total": { "value": 22, "relation": "eq" },
    "max_score": 3.7952394,
    "hits": [
      {
        "_index": "bank",
        "_type": "account",
        "_id": "210",
        "_score": 3.7952394,
        "_source": {
          "account_number": 210,
          "balance": 33946,
          "firstname": "Cherry",
          "lastname": "Carey",
          "age": 24,
          "gender": "M",
          "address": "539 Tiffany Place",
          "employer": "Martgo",
          "email": "cherrycarey@martgo.com",
          "city": "Fairacres",
          "state": "AK"
        }
      },
      ....
    ]
  }
}
搜索address中包含mill的所有人的年龄分布以及平均年龄,平均薪资
GET bank/_search
{
  "query": {
    "match": { "address": "Mill" }
  },
  "aggs": {
    "ageAgg": {
      "terms": { "field": "age", "size": 10 }
    },
    "ageAvg": {
      "avg": { "field": "age" }
    },
    "balanceAvg": {
      "avg": { "field": "balance" }
    }
  }
}
java实现
@Test
public void searchData() throws IOException {
    // 1、创建检索请求
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("bank");
    // 2、构造检索条件
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    sourceBuilder.query(QueryBuilders.matchQuery("address", "Mill"));
    // 按照年龄的值分布进行聚合
    TermsAggregationBuilder ageAgg = AggregationBuilders.terms("ageAgg").field("age").size(10);
    sourceBuilder.aggregation(ageAgg);
    // 计算平均年龄
    AvgAggregationBuilder ageAvg = AggregationBuilders.avg("ageAvg").field("age");
    sourceBuilder.aggregation(ageAvg);
    // 计算平均薪资
    AvgAggregationBuilder balanceAvg = AggregationBuilders.avg("balanceAvg").field("balance");
    sourceBuilder.aggregation(balanceAvg);
    System.out.println("检索条件:" + sourceBuilder);
    searchRequest.source(sourceBuilder);
    // 3、执行检索
    SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
    System.out.println("检索结果:" + searchResponse);
    // 4、分析命中记录
    SearchHits hits = searchResponse.getHits();
    SearchHit[] searchHits = hits.getHits();
    for (SearchHit searchHit : searchHits) {
        String sourceAsString = searchHit.getSourceAsString();
        Account account = JSON.parseObject(sourceAsString, Account.class);
        System.out.println(account);
    }
    // 5、分析聚合结果
    Aggregations aggregations = searchResponse.getAggregations();
    Terms ageAgg1 = aggregations.get("ageAgg");
    for (Terms.Bucket bucket : ageAgg1.getBuckets()) {
        String keyAsString = bucket.getKeyAsString();
        System.out.println("年龄:" + keyAsString + " ==> " + bucket.getDocCount());
    }
    Avg ageAvg1 = aggregations.get("ageAvg");
    System.out.println("平均年龄:" + ageAvg1.getValue());
    Avg balanceAvg1 = aggregations.get("balanceAvg");
    System.out.println("平均薪资:" + balanceAvg1.getValue());
}
可以尝试对比打印的条件和执行结果,和前面的ElasticSearch的检索语句和检索结果进行比较;
其他
1. kibana控制台命令
ctrl+home:回到文档首部;
ctrl+end:回到文档尾部。
商品上架
spu在es中的存储模型分析
如果每个sku都存储规格参数,会有冗余存储,因为同一个spu对应的各个sku的规格参数都一样;但是如果将规格参数单独建立索引,检索时会出现大量数据传输的问题,会阻塞网络。
因此我们选用第一种存储模型,以空间换时间。
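下面用一个极简的 Python 示意(数据为虚构)说明这种"以空间换时间"的冗余存储:spu 级别的可检索规格参数被复制到每个 sku 文档中,检索任意 sku 时无需再做关联查询:

```python
# spu 级别的规格参数(同一 spu 下的所有 sku 共享,数据为虚构示例)
spu_attrs = [{"attrName": "CPU", "attrValue": "骁龙865"},
             {"attrName": "屏幕", "attrValue": "6.5寸"}]

skus = [{"skuId": 1, "skuTitle": "华为xx 黑色"},
        {"skuId": 2, "skuTitle": "华为xx 白色"}]

# 构建写入 es 的文档:每个 sku 文档都冗余携带同一份 attrs(空间换时间)
docs = [{**sku, "attrs": spu_attrs} for sku in skus]

# 任意一个 sku 文档都能独立完成按规格参数的过滤,无需二次查询
hits = [d for d in docs
        if any(a["attrName"] == "CPU" and a["attrValue"] == "骁龙865"
               for a in d["attrs"])]
print([d["skuId"] for d in hits])  # 输出 [1, 2]
```

代价是每个 sku 文档都多存了一份相同的 attrs,换来的是单次检索即可完成过滤。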
向ES添加商品属性映射
PUT product
{
  "mappings": {
    "properties": {
      "skuId": { "type": "long" },
      "spuId": { "type": "keyword" },
      "skuTitle": { "type": "text", "analyzer": "ik_smart" },
      "skuPrice": { "type": "keyword" },
      "skuImg": { "type": "keyword", "index": false, "doc_values": false },
      "saleCount": { "type": "long" },
      "hasStock": { "type": "boolean" },
      "hotScore": { "type": "long" },
      "brandId": { "type": "long" },
      "catalogId": { "type": "long" },
      "brandName": { "type": "keyword", "index": false, "doc_values": false },
      "brandImg": { "type": "keyword", "index": false, "doc_values": false },
      "catalogName": { "type": "keyword", "index": false, "doc_values": false },
      "attrs": {
        "type": "nested",
        "properties": {
          "attrId": { "type": "long" },
          "attrName": { "type": "keyword", "index": false, "doc_values": false },
          "attrValue": { "type": "keyword" }
        }
      }
    }
  }
}
商品上架接口实现 在SpuInfoController中添加商品上架功能的方法
```java
@PostMapping("{spuId}/up")
public R spuUp(@PathVariable("spuId") Long spuId) {
    spuInfoService.up(spuId);
    return R.ok();
}
```
商品上架需要在es中保存spu信息并更新spu的状态信息,由于SpuInfoEntity与索引的数据模型并不对应,所以我们要建立专门的vo进行数据传输
```java
@Data
public class SkuEsModel {
    private Long skuId;
    private Long spuId;
    private String skuTitle;
    private BigDecimal skuPrice;
    private String skuImg;
    private Long saleCount;
    private boolean hasStock;
    private Long hotScore;
    private Long brandId;
    private Long catalogId;
    private String brandName;
    private String brandImg;
    private String catalogName;
    private List<Attrs> attrs;

    @Data
    public static class Attrs {
        private Long attrId;
        private String attrName;
        private String attrValue;
    }
}
```
编写商品上架的接口
由于每个spu对应的各个sku的规格参数相同,因此我们要将查询规格参数提前,只查询一次
```java
public void upSpuForSearch(Long spuId) {
    // 1. 查出当前spu对应的所有sku信息
    List<SkuInfoEntity> skuInfoEntities = skuInfoService.getSkusBySpuId(spuId);
    // 2. 查出当前spu的所有规格属性,筛选出可被检索的属性
    List<ProductAttrValueEntity> productAttrValueEntities = productAttrValueService.list(
            new QueryWrapper<ProductAttrValueEntity>().eq("spu_id", spuId));
    List<Long> attrIds = productAttrValueEntities.stream()
            .map(ProductAttrValueEntity::getAttrId)
            .collect(Collectors.toList());
    List<Long> searchIds = attrService.selectSearchAttrIds(attrIds);
    Set<Long> ids = new HashSet<>(searchIds);
    // 注意内部类名与SkuEsModel中定义的Attrs保持一致
    List<SkuEsModel.Attrs> searchAttrs = productAttrValueEntities.stream()
            .filter(entity -> ids.contains(entity.getAttrId()))
            .map(entity -> {
                SkuEsModel.Attrs attr = new SkuEsModel.Attrs();
                BeanUtils.copyProperties(entity, attr);
                return attr;
            }).collect(Collectors.toList());
    // 3. 远程调用库存服务,查询每个sku是否有库存
    Map<Long, Boolean> stockMap = null;
    try {
        List<Long> longList = skuInfoEntities.stream().map(SkuInfoEntity::getSkuId).collect(Collectors.toList());
        List<SkuHasStockVo> skuHasStocks = wareFeignService.getSkuHasStocks(longList);
        stockMap = skuHasStocks.stream()
                .collect(Collectors.toMap(SkuHasStockVo::getSkuId, SkuHasStockVo::getHasStock));
    } catch (Exception e) {
        log.error("远程调用库存服务失败,原因{}", e);
    }
    // 4. 封装每个sku的信息
    Map<Long, Boolean> finalStockMap = stockMap;
    List<SkuEsModel> skuEsModels = skuInfoEntities.stream().map(sku -> {
        SkuEsModel skuEsModel = new SkuEsModel();
        BeanUtils.copyProperties(sku, skuEsModel);
        skuEsModel.setSkuPrice(sku.getPrice());
        skuEsModel.setSkuImg(sku.getSkuDefaultImg());
        skuEsModel.setHotScore(0L);
        BrandEntity brandEntity = brandService.getById(sku.getBrandId());
        skuEsModel.setBrandName(brandEntity.getName());
        skuEsModel.setBrandImg(brandEntity.getLogo());
        CategoryEntity categoryEntity = categoryService.getById(sku.getCatalogId());
        skuEsModel.setCatalogName(categoryEntity.getName());
        skuEsModel.setAttrs(searchAttrs);
        // 远程调用失败时默认无库存
        skuEsModel.setHasStock(finalStockMap == null ? false : finalStockMap.get(sku.getSkuId()));
        return skuEsModel;
    }).collect(Collectors.toList());
    // 5. 远程调用检索服务,将数据保存到es,成功后更新spu状态
    R r = searchFeignService.saveProductAsIndices(skuEsModels);
    if (r.getCode() == 0) {
        this.baseMapper.upSpuStatus(spuId, ProductConstant.ProductStatusEnum.SPU_UP.getCode());
    } else {
        log.error("商品远程es保存失败");
    }
}
```
商城系统首页 导入依赖 前端使用了thymeleaf开发,因此要导入该依赖,并且为了改动页面实时生效导入devtools
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
```
渲染一级分类菜单 由于访问首页时就要加载一级目录,所以我们需要在加载首页时获取该数据
```java
@GetMapping({"/", "index.html"})
public String indexPage(Model model) {
    List<CategoryEntity> catagories = categoryService.getLevel1Catagories();
    model.addAttribute("catagories", catagories);
    return "index";
}
```
页面遍历菜单数据
```html
<li th:each="catagory:${catagories}">
    <a href="#" class="header_main_left_a" ctg-data="3" th:attr="ctg-data=${catagory.catId}">
        <b th:text="${catagory.name}"></b>
    </a>
</li>
```
渲染二级三级分类菜单 首先创建一个VO类表示二级和三级菜单
```java
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Catelog2Vo {
    // 一级父分类id
    private String catalog1Id;
    private String id;
    private String name;
    private List<Catelog3Vo> catalog3List;

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Catelog3Vo {
        // 二级父分类id
        private String catalog2Id;
        private String id;
        private String name;
    }
}
```
注意其中的catalog1Id属性和catalog2Id属性
```java
@GetMapping("index/catelog.json")
@ResponseBody
public Map<String, List<Catelog2Vo>> getCategoryJson() {
    return categoryService.getCategoryJson();
}
```
修改resources/static/index下的catalogLoader.js文件中的访问路径为index/catelog.json
```java
@Override
public Map<String, List<Catelog2Vo>> getCategoryJson() {
    // 一级分类
    List<CategoryEntity> level1Categories = getLevel1Catagories();
    Map<String, List<Catelog2Vo>> parent_cid = level1Categories.stream()
            .collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
                // 查询该一级分类下的二级分类
                List<CategoryEntity> categoryEntities = baseMapper.selectList(
                        new QueryWrapper<CategoryEntity>().eq("parent_cid", v.getCatId()));
                List<Catelog2Vo> catelog2Vos = null;
                if (categoryEntities != null) {
                    catelog2Vos = categoryEntities.stream().map(l2 -> {
                        Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(),
                                l2.getCatId().toString(), l2.getName(), null);
                        // 查询该二级分类下的三级分类
                        List<CategoryEntity> level3Catelog = baseMapper.selectList(
                                new QueryWrapper<CategoryEntity>().eq("parent_cid", l2.getCatId()));
                        if (level3Catelog != null) {
                            List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 ->
                                    new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(),
                                            l3.getCatId().toString(), l3.getName()))
                                    .collect(Collectors.toList());
                            catelog2Vo.setCatalog3List(collect);
                        }
                        return catelog2Vo;
                    }).collect(Collectors.toList());
                }
                return catelog2Vos;
            }));
    return parent_cid;
}
```
搭建域名访问环境 1. 正向代理与反向代理
nginx就是通过反向代理实现负载均衡
2. Nginx配置文件
nginx.conf
```nginx
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# event块
events {
    worker_connections  1024;
}

# http块
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
```
/etc/nginx/conf.d/default.conf
```nginx
# /etc/nginx/conf.d/default.conf 的server块
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
```
3. Nginx+Windows搭建域名访问环境
修改windows的hosts文件改变本地域名映射,将gulimall.com映射到虚拟机ip
修改nginx的根配置文件nginx.conf,将upstream映射到我们的网关服务

```nginx
upstream gulimall {
    server 192.168.56.1:88;
}
```
修改nginx的server块配置文件gulimall.conf,将以/开头的请求转发至我们配好的gulimall的upstream。由于nginx的转发会丢失Host头,所以我们添加头信息

```nginx
location / {
    proxy_pass http://gulimall;
    proxy_set_header Host $host;
}
```
配置网关服务,将域名匹配**.gulimall.com的请求转发至商品服务

```yaml
- id: gulimall_host
  uri: lb://gulimall-product
  predicates:
    - Host=**.gulimall.com
```
性能压测与优化 1. 压测工具与环境
注:简单业务仅返回一个字符串
| 压测内容 | 压测线程数 | 吞吐量/s | 90%响应时间(ms) | 99%响应时间(ms) |
| --- | --- | --- | --- | --- |
| Nginx | 50 | 6355 | 4 | 235 |
| Gateway | 50 | 14355 | 5 | 23 |
| 简单服务 | 50 | 27373 | 3 | 5 |
| 首页一级菜单渲染 | 50 | 252(db,thymeleaf) | 241 | 316 |
| 首页菜单渲染(开缓存) | 50 | 640 | 100 | 179 |
| 首页菜单渲染(开缓存、优化数据库、关日志) | 50 | 1204 | 50 | 85 |
| 三级分类数据获取 | 50 | 5(db) | 10132 | 10275 |
| 三级分类(加索引) | 50 | 15(加索引) | 3715 | 3871 |
| 三级分类(优化业务) | 50 | 285 | 205 | 313 |
| 三级分类(redis缓存) | 50 | 658 | 97 | 121 |
| 首页全量数据获取 | 50 | 2.5(静态资源) | 34096 | 35168 |
| 首页全量数据获取(动静分离) | 50 | 7 | 3977 | 5215 |
| Gateway+简单服务 | 50 | 6200 | 13 | 34 |
| 全链路(Nginx+Gateway+简单服务) | 50 | 1539 | 46 | 66 |
中间件越多,性能损失越大,大多都损失在网络交互了;
业务:
Db(MySQL 优化)
模板的渲染速度(缓存)
静态资源
2. 首页菜单渲染优化数据库 优化数据库前
```java
public List<CategoryEntity> getLevel1Catagories() {
    long start = System.currentTimeMillis();
    List<CategoryEntity> parent_cid = this.list(new QueryWrapper<CategoryEntity>().eq("parent_cid", 0));
    System.out.println("查询一级菜单时间:" + (System.currentTimeMillis() - start));
    return parent_cid;
}
```
给parent_cid字段添加索引后,单条查询变快,但整体业务响应和吞吐量并没有明显优化,可能是由于使用了远程数据库,网络通信耗时较长。
3. 三级分类(优化业务) 优化前
对二级菜单的每次遍历都需要查询数据库,浪费大量资源
优化后
仅查询一次数据库,剩下的数据通过遍历得到并封装
```java
@Override
public Map<String, List<Catelog2Vo>> getCategoryJson() {
    // 一次性查出所有分类,后续在内存中组装,避免多次查库
    List<CategoryEntity> selectList = baseMapper.selectList(null);
    List<CategoryEntity> level1Categories = getParent_cid(selectList, 0L);
    Map<String, List<Catelog2Vo>> parent_cid = level1Categories.stream()
            .collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
                List<CategoryEntity> categoryEntities = getParent_cid(selectList, v.getCatId());
                List<Catelog2Vo> catelog2Vos = null;
                if (categoryEntities != null) {
                    catelog2Vos = categoryEntities.stream().map(l2 -> {
                        Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(),
                                l2.getCatId().toString(), l2.getName(), null);
                        List<CategoryEntity> level3Catelog = getParent_cid(selectList, l2.getCatId());
                        if (level3Catelog != null) {
                            List<Catelog2Vo.Catelog3Vo> collect = level3Catelog.stream().map(l3 ->
                                    new Catelog2Vo.Catelog3Vo(l2.getCatId().toString(),
                                            l3.getCatId().toString(), l3.getName()))
                                    .collect(Collectors.toList());
                            catelog2Vo.setCatalog3List(collect);
                        }
                        return catelog2Vo;
                    }).collect(Collectors.toList());
                }
                return catelog2Vos;
            }));
    return parent_cid;
}

private List<CategoryEntity> getParent_cid(List<CategoryEntity> selectList, Long parent_cid) {
    // Long是包装类型,这里用equals比较而不是==,避免超出缓存范围的id比较失败
    return selectList.stream()
            .filter(item -> item.getParentCid().equals(parent_cid))
            .collect(Collectors.toList());
}
```
4. Nginx动静分离 由于动态资源和静态资源目前都处于服务端,为了减轻服务器压力,我们将js、css、img等静态资源放置在Nginx端,由Nginx直接响应
在nginx的html文件夹创建static文件夹,并将index/css等静态资源全部上传到该文件夹中
修改index.html的静态资源路径,使其全部带有static前缀,如src="/static/index/img/img_09.png"
修改nginx的配置文件/mydata/nginx/conf/conf.d/gulimall.conf:如果遇到以/static为前缀的请求,转发至html文件夹;其余请求继续转发至网关

```nginx
location /static {
    root /usr/share/nginx/html;
}

location / {
    proxy_pass http://gulimall;
    proxy_set_header Host $host;
}
```
缓存 1. 本地缓存 1) 使用hashmap本地缓存

```java
private Map<String, Object> cache = new HashMap<>();

public Map<String, List<Catalog2Vo>> getCategoryMap() {
    Map<String, List<Catalog2Vo>> catalogMap = (Map<String, List<Catalog2Vo>>) cache.get("catalogMap");
    if (catalogMap == null) {
        catalogMap = getCategoriesDb();
        cache.put("catalogMap", catalogMap);
    }
    return catalogMap;
}
```
2) 整合redis进行测试 导入依赖
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```
配置redis主机地址
```yaml
spring:
  redis:
    host:
    port: 6379
```
使用springboot自动配置的RedisTemplate优化菜单获取业务
```java
ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
String catalogJson = ops.get("catalogJson");
if (catalogJson == null) {
    // 缓存未命中:查询数据库并将结果序列化后写入缓存
    Map<String, List<Catalog2Vo>> categoriesDb = getCategoriesDb();
    String toJSONString = JSON.toJSONString(categoriesDb);
    ops.set("catalogJson", toJSONString);
    return categoriesDb;
}
// 缓存命中:反序列化后直接返回
Map<String, List<Catalog2Vo>> listMap = JSON.parseObject(catalogJson,
        new TypeReference<Map<String, List<Catalog2Vo>>>() {});
return listMap;
```
堆外内存溢出及解决办法
进行压力测试时,后期会出现堆外内存溢出OutOfDirectMemoryError
产生原因:
1)、springboot2.0以后默认使用lettuce作为操作redis的客户端,它使用netty进行网络通信
2)、lettuce的bug导致netty堆外内存溢出
解决方案:由于是lettuce的bug造成,不能直接使用-Dio.netty.maxDirectMemory去调大虚拟机堆外内存
1)、升级lettuce客户端。 2)、切换使用jedis
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
```
3) 高并发下缓存失效问题 缓存穿透
指查询一个一定不存在的数据。由于缓存不命中,请求将去查询数据库,但是数据库也无此记录;我们没有将这次查询的null写入缓存,这将导致这个不存在的数据每次请求都要到存储层去查询,失去了缓存的意义
风险: 利用不存在的数据进行攻击,数据库瞬时压力增大,最终导致崩溃
解决: null结果缓存,并加入短暂过期时间
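下面是"null结果缓存"思路的一个本地Java草图(假设性示例:用ConcurrentHashMap模拟缓存层,用哨兵对象标记"数据库中确实不存在";真实场景应写入redis并附带短暂过期时间):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullCacheDemo {
    // 用一个特殊对象区分"数据库中不存在"和"缓存未命中"
    private static final Object NULL_SENTINEL = new Object();
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private int dbQueryCount = 0; // 统计回源数据库的次数

    public Object get(String key) {
        Object cached = cache.get(key);
        if (cached != null) {
            return cached == NULL_SENTINEL ? null : cached;
        }
        Object dbValue = queryDb(key);
        // 即使查不到也缓存一个占位符;真实场景应附带较短的过期时间
        cache.put(key, dbValue == null ? NULL_SENTINEL : dbValue);
        return dbValue;
    }

    private Object queryDb(String key) {
        dbQueryCount++;
        return null; // 模拟数据库中不存在该记录
    }

    public int getDbQueryCount() {
        return dbQueryCount;
    }

    public static void main(String[] args) {
        NullCacheDemo demo = new NullCacheDemo();
        demo.get("no-such-id");
        demo.get("no-such-id");
        demo.get("no-such-id");
        // 三次请求只回源一次,不存在的key不再穿透到数据库
        System.out.println(demo.getDbQueryCount()); // 1
    }
}
```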
缓存雪崩
缓存雪崩是指在我们设置缓存时key采用了相同的过期时间,导致缓存在某一时刻同时失效,请求全部转发到DB,DB瞬时 压力过重雪崩。
解决: 原有的失效时间基础上增加一个随机值,比如1-5分钟随机,这样每一个缓存的过期时间的重复率就会降低,就很难引发集体失效的事件。
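"过期时间加随机值"的一个简单Java草图如下(假设性示例:基础TTL单位为秒,随机偏移取1-5分钟,与正文一致):

```java
import java.util.concurrent.ThreadLocalRandom;

public class RandomTtl {
    /**
     * 在基础过期时间上叠加一个随机偏移(60~300秒),
     * 避免同一批key在同一时刻集体失效引发缓存雪崩。
     */
    public static long ttlWithJitter(long baseSeconds) {
        long jitter = ThreadLocalRandom.current().nextLong(60, 301);
        return baseSeconds + jitter;
    }

    public static void main(String[] args) {
        // 同样的基础TTL,每个key实际的过期时间都略有不同
        for (int i = 0; i < 3; i++) {
            System.out.println(ttlWithJitter(3600));
        }
    }
}
```

写缓存时把得到的TTL传给redisTemplate.opsForValue().set(key, value, ttl, TimeUnit.SECONDS)即可。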
缓存击穿
指某一个设置了过期时间的热点key,在恰好失效的时刻被大量并发请求同时访问,这些请求会全部落到数据库。
解决: 加锁。大量并发只让一个去查,其他人等待,查到以后释放锁,其他人获取到锁,先查缓存,就会有数据,不用去db
4) 加锁解决缓存击穿问题 将查询db的方法加锁,这样在同一时间只有一个方法能查询数据库,就能解决缓存击穿的问题了
```java
@Override
public Map<String, List<Catelog2Vo>> getCategoryJson() {
    String catelogJson = redisTemplate.opsForValue().get("catelogJson");
    if (catelogJson == null) {
        System.out.println("缓存没有命中..进入getCategoryJsonFromDB方法");
        Map<String, List<Catelog2Vo>> categoriesDb = getCategoryJsonFromDB();
        String toJSONString = JSON.toJSONString(categoriesDb);
        redisTemplate.opsForValue().set("catelogJson", toJSONString);
        return categoriesDb;
    }
    System.out.println("缓存命中....");
    Map<String, List<Catelog2Vo>> listMap = JSON.parseObject(catelogJson,
            new TypeReference<Map<String, List<Catelog2Vo>>>() {});
    return listMap;
}

public Map<String, List<Catelog2Vo>> getCategoryJsonFromDB() {
    synchronized (this) {
        // 双重检查:拿到锁后先确认缓存,避免重复查库
        String catelogJson = redisTemplate.opsForValue().get("catelogJson");
        if (!StringUtils.isEmpty(catelogJson)) {
            Map<String, List<Catelog2Vo>> listMap = JSON.parseObject(catelogJson,
                    new TypeReference<Map<String, List<Catelog2Vo>>>() {});
            return listMap;
        }
        System.out.println("查询数据库........");
        ......
    }
}
```
5) 锁时序问题 在上述方法中,我们将业务逻辑中的"确认缓存没有"和"查数据库"放到了锁里,但是最终控制台却打印了两次"查询了数据库"。这是因为在将结果放入缓存的这段时间里,有其他线程确认缓存没有,又再次查询了数据库,因此我们要将"结果放入缓存"也纳入锁的范围
优化代码逻辑后
```java
public Map<String, List<Catelog2Vo>> getCategoryJsonFromDB() {
    synchronized (this) {
        String catelogJson = redisTemplate.opsForValue().get("catelogJson");
        if (!StringUtils.isEmpty(catelogJson)) {
            Map<String, List<Catelog2Vo>> listMap = JSON.parseObject(catelogJson,
                    new TypeReference<Map<String, List<Catelog2Vo>>>() {});
            return listMap;
        }
        System.out.println("查询数据库........");
        ......
        // 将查询结果放入缓存的操作也在锁内完成
        String toJSONString = JSON.toJSONString(parent_cid);
        redisTemplate.opsForValue().set("catelogJson", toJSONString);
        return parent_cid;
    }
}
```
优化后多线程访问时仅查询一次数据库
2. 分布式缓存 1) 本地缓存面临问题 当有多个服务存在时,每个服务的缓存仅能够为本服务使用,这样每个服务都要查询一次数据库,并且当数据更新时只会更新单个服务的缓存数据,就会造成数据不一致的问题
所有的服务都到同一个redis进行获取数据,就可以避免这个问题
2) 分布式锁 当分布式项目在高并发下也需要加锁,但本地锁只能锁住当前服务,这个时候就需要分布式锁
3) 分布式锁的演进 基本原理
我们可以同时去一个地方“占坑”,如果占到,就执行逻辑。否则就必须等待,直到释放锁。“占坑”可以去redis,可以去数据库,可以去任何大家都能访问的地方。等待可以自旋的方式。
阶段一
```java
public Map<String, List<Catalog2Vo>> getCategoryJsonFromDBWithRedisLock() {
    // setIfAbsent即setnx:只有key不存在时才能占位成功
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", "111");
    if (lock) {
        // 占锁成功,执行业务后删除锁
        Map<String, List<Catalog2Vo>> categoriesDb = getCategoryMap();
        redisTemplate.delete("lock");
        return categoriesDb;
    } else {
        // 占锁失败,休眠后自旋重试
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return getCategoryJsonFromDBWithRedisLock();
    }
}

public Map<String, List<Catalog2Vo>> getCategoryMap() {
    ValueOperations<String, String> ops = redisTemplate.opsForValue();
    String catalogJson = ops.get("catalogJson");
    if (StringUtils.isEmpty(catalogJson)) {
        System.out.println("缓存不命中,准备查询数据库。。。");
        Map<String, List<Catalog2Vo>> categoriesDb = getCategoryJsonFromDBWithRedisLock();
        String toJSONString = JSON.toJSONString(categoriesDb);
        ops.set("catalogJson", toJSONString);
        return categoriesDb;
    }
    System.out.println("缓存命中。。。。");
    Map<String, List<Catalog2Vo>> listMap = JSON.parseObject(catalogJson,
            new TypeReference<Map<String, List<Catalog2Vo>>>() {});
    return listMap;
}
```
问题: 1、setnx占好了位,业务代码异常或者程序在执行过程中宕机,没有执行删除锁的逻辑,这就造成了死锁
解决:设置锁的自动过期,即使没有删除,会自动删除
阶段二
```java
public Map<String, List<Catelog2Vo>> getCategoryJsonFromDBWithRedisLock() {
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", "111");
    if (lock) {
        // 占位成功后再设置过期时间(两步操作,不是原子的)
        redisTemplate.expire("lock", 30, TimeUnit.SECONDS);
        Map<String, List<Catelog2Vo>> dataFromDB = getDataFromDB();
        redisTemplate.delete("lock");
        return dataFromDB;
    } else {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return getCategoryJsonFromDBWithRedisLock();
    }
}
```
问题: 1、setnx设置好,正要去设置过期时间时宕机,又死锁了。 解决: 设置过期时间和占位必须是原子的。redis支持原子命令SET key value NX EX seconds(对应setIfAbsent的带过期时间重载)
阶段三
```java
public Map<String, List<Catelog2Vo>> getCategoryJsonFromDBWithRedisLock() {
    // 占位和过期时间在一条命令中原子完成,等价于 SET lock 111 NX EX 5
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", "111", 5, TimeUnit.SECONDS);
    if (lock) {
        Map<String, List<Catelog2Vo>> dataFromDB = getDataFromDB();
        redisTemplate.delete("lock");
        return dataFromDB;
    } else {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return getCategoryJsonFromDBWithRedisLock();
    }
}
```
问题: 1、删除锁直接删除??? 如果由于业务时间很长,锁自己过期了,我们直接删除,有可能把别人正在持有的锁删除了。 解决: 占锁的时候,值指定为uuid,每个人匹配是自己的锁才删除。
阶段四
```java
public Map<String, List<Catelog2Vo>> getCategoryJsonFromDBWithRedisLock() {
    String uuid = UUID.randomUUID().toString();
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", uuid, 5, TimeUnit.SECONDS);
    if (lock) {
        Map<String, List<Catelog2Vo>> dataFromDB = getDataFromDB();
        // 只有确认是自己占的锁才删除(取值与删除仍是两步,不是原子的)
        String lockValue = redisTemplate.opsForValue().get("lock");
        if (uuid.equals(lockValue)) {
            redisTemplate.delete("lock");
        }
        return dataFromDB;
    } else {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return getCategoryJsonFromDBWithRedisLock();
    }
}
```
问题: 1、如果正好判断是当前值,正要删除锁的时候,锁已经过期,别人已经设置到了新的值。那么我们删除的是别人的锁 解决: 删除锁必须保证原子性。使用redis+Lua脚本完成
阶段五-最终形态
```java
public Map<String, List<Catelog2Vo>> getCategoryJsonFromDBWithRedisLock() {
    String uuid = UUID.randomUUID().toString();
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", uuid, 5, TimeUnit.SECONDS);
    if (lock) {
        System.out.println("获取分布式锁成功...");
        Map<String, List<Catelog2Vo>> dataFromDB;
        try {
            dataFromDB = getDataFromDB();
        } finally {
            // 用Lua脚本保证"判断是自己的锁 + 删除"这两步的原子性
            String script = "if redis.call('get',KEYS[1]) == ARGV[1] then\n" +
                    "    return redis.call('del',KEYS[1])\n" +
                    "else\n" +
                    "    return 0\n" +
                    "end";
            redisTemplate.execute(new DefaultRedisScript<Long>(script, Long.class),
                    Arrays.asList("lock"), uuid);
        }
        return dataFromDB;
    } else {
        System.out.println("获取分布式锁失败...等待重试...");
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return getCategoryJsonFromDBWithRedisLock();
    }
}
```
至此保证了加锁【占位+过期时间】和删除锁【判断+删除】的原子性。更难的事情是锁的自动续期
4) Redisson Redisson是一个在Redis的基础上实现的Java驻内存数据网格(In-Memory Data Grid)。它不仅提供了一系列的分布式的Java常用对象,还提供了许多分布式服务,其中包括BitSet、Set、Multimap、SortedSet、Map、List、Queue、BlockingQueue、Deque、BlockingDeque、Semaphore、Lock、AtomicLong、CountDownLatch、Publish / Subscribe、Bloom filter、Remote service、Spring cache、Executor service、Live Object service、Scheduler service。Redisson提供了使用Redis的最简单和最便捷的方法。Redisson的宗旨是促进使用者对Redis的关注分离(Separation of Concern),从而让使用者能够将精力更集中地放在处理业务逻辑上。
本文我们仅关注分布式锁的实现,更多请参考官方文档
(1) 环境搭建 导入依赖
```xml
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.13.4</version>
</dependency>
```
开启配置
```java
@Configuration
public class RedissonConfig {
    @Bean
    public RedissonClient redissonClient() {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://192.168.56.10:6379");
        return Redisson.create(config);
    }
}
```
(2) 可重入锁(Reentrant Lock)

```java
@GetMapping("/hello")
@ResponseBody
public String hello() {
    RLock lock = redissonClient.getLock("my-lock");
    lock.lock(); // 阻塞式加锁
    try {
        System.out.println("加锁成功,执行业务..." + Thread.currentThread().getId());
        Thread.sleep(30000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        System.out.println("释放锁...");
        lock.unlock();
    }
    return "hello";
}
```
如果负责储存这个分布式锁的Redis节点宕机,而这个锁正好处于锁住的状态,这个锁就会出现锁死的状态。为了避免这种情况,就给锁设置了过期时间;但是如果业务执行时间过长,业务还未执行完锁就已经过期,就会出现解锁时解了其他线程的锁的情况。
所以Redisson内部提供了一个监控锁的看门狗,它的作用是在Redisson实例被关闭前,不断的延长锁的有效期。默认情况下,看门狗的检查锁的超时时间是30秒钟,也可以通过修改Config.lockWatchdogTimeout 来另行指定。
在本次测试中lock的初始过期时间TTL为30s,每到1/3看门狗超时时间(即10s)就会自动续期回30s
另外Redisson还在加锁方法中提供了leaseTime参数来指定加锁的时间。超过这个时间后锁便自动解开,且不会自动续期!
```java
// 加锁以后10秒钟自动解锁,无需调用unlock手动解锁
lock.lock(10, TimeUnit.SECONDS);

// 尝试加锁,最多等待100秒,上锁以后10秒自动解锁
boolean res = lock.tryLock(100, 10, TimeUnit.SECONDS);
if (res) {
    try {
        ...
    } finally {
        lock.unlock();
    }
}
```
(3) 读写锁(ReadWriteLock)

```java
@GetMapping("/write")
@ResponseBody
public String writeValue() {
    RReadWriteLock readWriteLock = redissonClient.getReadWriteLock("rw-lock");
    String uuid = "";
    RLock rLock = readWriteLock.writeLock();
    try {
        rLock.lock();
        uuid = UUID.randomUUID().toString();
        Thread.sleep(30000);
        redisTemplate.opsForValue().set("writeValue", uuid);
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        rLock.unlock();
    }
    return uuid;
}

@GetMapping("/read")
@ResponseBody
public String readValue() {
    RReadWriteLock readWriteLock = redissonClient.getReadWriteLock("rw-lock");
    RLock rLock = readWriteLock.readLock();
    String res = "";
    try {
        rLock.lock();
        res = redisTemplate.opsForValue().get("writeValue");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        rLock.unlock();
    }
    return res;
}
```
写锁会阻塞读锁和写锁,读锁会阻塞写锁,但读锁之间互不阻塞。
总之,含有"写"的组合都会互相阻塞,只有"读读"不会被阻塞。
上锁时在redis的状态
(4) 信号量(Semaphore) 信号量是存储在redis中的一个数字。调用release()方法会将数字加一;调用acquire()方法会将数字减一,但当数字已经为0时,acquire()会阻塞,直到有线程调用release()使数字重新大于0
```java
@GetMapping("/park")
@ResponseBody
public String park() {
    RSemaphore park = redissonClient.getSemaphore("park");
    try {
        park.acquire(2); // 占用2个车位,车位不足时阻塞
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return "停进2";
}

@GetMapping("/go")
@ResponseBody
public String go() {
    RSemaphore park = redissonClient.getSemaphore("park");
    park.release(2); // 释放2个车位
    return "开走2";
}
```
(5) 闭锁(CountDownLatch) 可以理解为门栓,使用若干个门栓将当前方法阻塞,只有当全部门栓都被放开时,当前方法才能继续执行。
以下代码中,只有offLatch()被调用5次后,setLatch()才能继续执行。
```java
@GetMapping("/setLatch")
@ResponseBody
public String setLatch() {
    RCountDownLatch latch = redissonClient.getCountDownLatch("CountDownLatch");
    try {
        latch.trySetCount(5);
        latch.await(); // 阻塞,直到计数归零
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return "门栓被放开";
}

@GetMapping("/offLatch")
@ResponseBody
public String offLatch() {
    RCountDownLatch latch = redissonClient.getCountDownLatch("CountDownLatch");
    latch.countDown(); // 计数减一
    return "门栓被放开1";
}
```
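Redisson的RCountDownLatch语义与JDK自带的java.util.concurrent.CountDownLatch一致,区别只是计数保存在redis中、可以跨服务调用。下面用一段本地JDK示例类比这一语义(仅为示意草图):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(5); // 相当于 trySetCount(5)
        for (int i = 1; i <= 5; i++) {
            final int no = i;
            new Thread(() -> {
                System.out.println(no + "号门栓被放开"); // 相当于每次调用 /offLatch
                latch.countDown();
            }).start();
        }
        latch.await(); // 相当于 setLatch 中的 await(),计数归零前一直阻塞
        System.out.println("全部门栓放开,继续执行");
    }
}
```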
3. 缓存数据的一致性 1) 双写模式 当数据更新时,更新数据库时同时更新缓存
存在问题
由于卡顿等原因,写缓存2发生在写缓存1之前,缓存中最终留下的是请求1的旧数据,就出现了不一致
这是暂时性的脏数据问题,但是在数据稳定,缓存过期以后,又能得到最新的正确数据
2) 失效模式 数据库更新时将缓存删除
存在问题
当两个请求同时修改数据库:请求1更新成功并删除缓存后,一个读请求进来,发现缓存无数据便去查数据库;在它把结果放入缓存之前,请求2又更新了数据库。这时缓存中留下的仍是请求1更新后的数据,与数据库不一致
解决方法
1、缓存的所有数据都有过期时间,数据过期下一次查询触发主动更新 2、读写数据的时候(并且写的不频繁),加上分布式的读写锁。
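上面第2点的"分布式读写锁"在Redisson中对应RReadWriteLock;它的互斥语义与JDK本地的ReentrantReadWriteLock一致(读读共享、含写互斥),可用下面这个本地草图类比理解(仅为示意,非分布式实现):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String data = "old";

    public String read() {
        rw.readLock().lock(); // 多个读线程可同时持有读锁
        try {
            return data;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(String value) {
        rw.writeLock().lock(); // 写锁独占,写期间读线程被阻塞
        try {
            data = value;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwLockDemo demo = new RwLockDemo();
        demo.write("new");
        System.out.println(demo.read());
    }
}
```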
3) 解决方案 无论是双写模式还是失效模式,都会导致缓存的不一致问题。即多个实例同时更新会出事。怎么办?
如果是用户维度数据(订单数据、用户数据),这种并发几率非常小,不用考虑这个问题,缓存数据加上过期时间,每隔一段时间触发读的主动更新即可
如果是菜单,商品介绍等基础数据,也可以去使用canal订阅binlog的方式。
缓存数据+过期时间也足够解决大部分业务对于缓存的要求。
通过加锁保证并发读写,写写的时候按顺序排好队。读读无所谓。所以适合使用读写锁。(业务不关心脏数据,允许临时脏数据可忽略);
总结:
我们能放入缓存的数据本就不应该是实时性、一致性要求超高的。所以缓存数据的时候加上过期时间,保证每天拿到当前最新数据即可。
我们不应该过度设计,增加系统的复杂性
遇到实时性、一致性要求高的数据,就应该查数据库,即使慢点。
4. SpringCache 1) 导入依赖

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
```
2) 自定义配置 指定缓存类型并在主配置类上加上注解@EnableCaching
在application.properties文件中可进行如下配置:
```properties
spring.cache.type=redis
# 缓存过期时间,单位毫秒
spring.cache.redis.time-to-live=3600000
# 是否给缓存key加前缀
spring.cache.redis.use-key-prefix=true
# 是否缓存空值,防止缓存穿透
spring.cache.redis.cache-null-values=true
```
默认使用jdk进行序列化,自定义序列化方式需要编写配置类
```java
@Configuration
public class MyCacheConfig {
    @Bean
    public org.springframework.data.redis.cache.RedisCacheConfiguration redisCacheConfiguration(
            CacheProperties cacheProperties) {
        CacheProperties.Redis redisProperties = cacheProperties.getRedis();
        org.springframework.data.redis.cache.RedisCacheConfiguration config =
                org.springframework.data.redis.cache.RedisCacheConfiguration.defaultCacheConfig();
        // 改为使用json序列化value
        config = config.serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
        // 让配置文件中的各项配置继续生效
        if (redisProperties.getTimeToLive() != null) {
            config = config.entryTtl(redisProperties.getTimeToLive());
        }
        if (redisProperties.getKeyPrefix() != null) {
            config = config.prefixKeysWith(redisProperties.getKeyPrefix());
        }
        if (!redisProperties.isCacheNullValues()) {
            config = config.disableCachingNullValues();
        }
        if (!redisProperties.isUseKeyPrefix()) {
            config = config.disableKeyPrefix();
        }
        return config;
    }
}
```
3) 自定义序列化原理 缓存使用
```java
// sync = true 给查询方法加本地锁,缓解缓存击穿
@Cacheable(value = {"category"}, key = "#root.method.name", sync = true)
public Map<String, List<Catalog2Vo>> getCatalogJsonDbWithSpringCache() {
    return getCategoriesDb();
}

// 更新分类时删除category分区下的所有缓存
@Override
@CacheEvict(value = {"category"}, allEntries = true)
public void updateCascade(CategoryEntity category) {
    this.updateById(category);
    if (!StringUtils.isEmpty(category.getName())) {
        categoryBrandRelationService.updateCategory(category);
    }
}
```
4) Spring-Cache的不足之处 1)、读模式
缓存穿透:查询一个null数据。解决方案:缓存空数据,可通过spring.cache.redis.cache-null-values=true
缓存击穿:大量并发同时查询一个正好过期的数据。解决方案:加锁;默认是不加锁的,可在@Cacheable上使用sync = true加本地锁来解决击穿问题
缓存雪崩:大量的key同时过期。解决:加上过期时间,并在其基础上叠加随机值
2)、写模式:(缓存与数据库一致)
a、读写加锁。
b、引入Canal,感知到MySQL的更新去更新Redis
c 、读多写多,直接去数据库查询就行
3)、总结:
常规数据(读多写少,即时性,一致性要求不高的数据,完全可以使用Spring-Cache):
写模式(只要缓存的数据有过期时间就足够了)
特殊数据:特殊设计
检索 1. 检索条件分析
完整查询参数:keyword=小米&sort=saleCount_desc/asc&hasStock=0/1&skuPrice=400_1900&brandId=1&catalog3Id=1&attrs=1_3G:4G:5G&attrs=2_骁龙845&attrs=4_高清屏
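其中attrs参数采用"attrId_值1:值2"的约定格式。下面是一个独立的Java解析草图(假设性示例,类名AttrParamParser为说明而起;实际解析逻辑内嵌在后文构建查询条件的方法中):

```java
import java.util.Arrays;
import java.util.List;

public class AttrParamParser {
    /** 解析形如 "1_3G:4G:5G" 的attrs参数:下划线前是attrId,下划线后是冒号分隔的属性值列表 */
    public static Object[] parse(String attrParam) {
        String[] split = attrParam.split("_");
        Long attrId = Long.parseLong(split[0]);
        List<String> values = Arrays.asList(split[1].split(":"));
        return new Object[]{attrId, values};
    }

    public static void main(String[] args) {
        Object[] parsed = parse("1_3G:4G:5G");
        System.out.println(parsed[0]); // 1
        System.out.println(parsed[1]); // [3G, 4G, 5G]
    }
}
```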
2. DSL分析

```json
GET gulimall_product/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "skuTitle": "华为" } }
      ],
      "filter": [
        { "term": { "catalogId": "225" } },
        { "terms": { "brandId": [ "2" ] } },
        { "term": { "hasStock": "false" } },
        {
          "range": {
            "skuPrice": { "gte": 1000, "lte": 7000 }
          }
        },
        {
          "nested": {
            "path": "attrs",
            "query": {
              "bool": {
                "must": [
                  { "term": { "attrs.attrId": { "value": "6" } } }
                ]
              }
            }
          }
        }
      ]
    }
  },
  "sort": [
    { "skuPrice": { "order": "desc" } }
  ],
  "from": 0,
  "size": 5,
  "highlight": {
    "fields": { "skuTitle": {} },
    "pre_tags": "<b style='color:red'>",
    "post_tags": "</b>"
  },
  "aggs": {
    "brandAgg": {
      "terms": { "field": "brandId", "size": 10 },
      "aggs": {
        "brandNameAgg": { "terms": { "field": "brandName", "size": 10 } },
        "brandImgAgg": { "terms": { "field": "brandImg", "size": 10 } }
      }
    },
    "catalogAgg": {
      "terms": { "field": "catalogId", "size": 10 },
      "aggs": {
        "catalogNameAgg": { "terms": { "field": "catalogName", "size": 10 } }
      }
    },
    "attrs": {
      "nested": { "path": "attrs" },
      "aggs": {
        "attrIdAgg": {
          "terms": { "field": "attrs.attrId", "size": 10 },
          "aggs": {
            "attrNameAgg": { "terms": { "field": "attrs.attrName", "size": 10 } }
          }
        }
      }
    }
  }
}
```
3. 检索代码编写 1) 请求参数和返回结果 请求参数的封装
```java
@Data
public class SearchParam {
    // 全文匹配关键字
    private String keyword;
    // 品牌id,可多选
    private List<Long> brandId;
    // 三级分类id
    private Long catalog3Id;
    // 排序条件,如 sort=saleCount_desc/asc
    private String sort;
    // 是否只显示有货 0/1
    private Integer hasStock;
    // 价格区间,如 skuPrice=400_1900
    private String skuPrice;
    // 属性筛选,如 attrs=1_3G:4G:5G
    private List<String> attrs;
    // 页码
    private Integer pageNum = 1;
    // 原生的所有查询条件
    private String _queryString;
}
```
返回结果
```java
@Data
public class SearchResult {
    // 查询到的所有商品信息
    private List<SkuEsModel> product;
    // 当前页码
    private Integer pageNum;
    // 总记录数
    private Long total;
    // 总页数
    private Integer totalPages;
    // 页码导航
    private List<Integer> pageNavs;
    // 当前查询结果涉及到的所有品牌
    private List<BrandVo> brands;
    // 当前查询结果涉及到的所有属性
    private List<AttrVo> attrs;
    // 当前查询结果涉及到的所有分类
    private List<CatalogVo> catalogs;
    // 导航数据
    private List<NavVo> navs;

    @Data
    public static class NavVo {
        private String navName;
        private String navValue;
        private String link;
    }

    @Data
    @AllArgsConstructor
    public static class BrandVo {
        private Long brandId;
        private String brandName;
        private String brandImg;
    }

    @Data
    @AllArgsConstructor
    public static class AttrVo {
        private Long attrId;
        private String attrName;
        private List<String> attrValue;
    }

    @Data
    @AllArgsConstructor
    public static class CatalogVo {
        private Long catalogId;
        private String catalogName;
    }
}
```
2) 主体逻辑 主要逻辑在service层进行:service层根据封装好的SearchParam构建查询条件,再将返回的结果封装成SearchResult
```java
@GetMapping(value = {"/search.html", "/"})
public String getSearchPage(SearchParam searchParam, Model model, HttpServletRequest request) {
    searchParam.set_queryString(request.getQueryString());
    SearchResult result = searchService.getSearchResult(searchParam);
    model.addAttribute("result", result);
    return "search";
}

public SearchResult getSearchResult(SearchParam searchParam) {
    SearchResult searchResult = null;
    // 1. 根据检索参数构建检索请求
    SearchRequest request = bulidSearchRequest(searchParam);
    try {
        // 2. 执行检索
        SearchResponse searchResponse = restHighLevelClient.search(request, GulimallElasticSearchConfig.COMMON_OPTIONS);
        // 3. 封装检索结果
        searchResult = bulidSearchResult(searchParam, searchResponse);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return searchResult;
}
```
3) 构建查询条件 这一部分就是对着前面分析的DSL,将每个条件封装进请求中
```java
private SearchRequest buildSearchRequest(SearchParam searchParam) {
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
    // 1. 关键字全文检索
    if (!StringUtils.isEmpty(searchParam.getKeyword())) {
        boolQueryBuilder.must(QueryBuilders.matchQuery("skuTitle", searchParam.getKeyword()));
    }
    // 2. 三级分类过滤
    if (searchParam.getCatalog3Id() != null) {
        boolQueryBuilder.filter(QueryBuilders.termQuery("catalogId", searchParam.getCatalog3Id()));
    }
    // 3. 品牌过滤(可多选)
    if (searchParam.getBrandId() != null && searchParam.getBrandId().size() > 0) {
        boolQueryBuilder.filter(QueryBuilders.termsQuery("brandId", searchParam.getBrandId()));
    }
    // 4. 是否有库存
    if (searchParam.getHasStock() != null) {
        boolQueryBuilder.filter(QueryBuilders.termQuery("hasStock", searchParam.getHasStock() == 1));
    }
    // 5. 价格区间,格式为 min_max、min_ 或 _max
    RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("skuPrice");
    if (!StringUtils.isEmpty(searchParam.getSkuPrice())) {
        String[] prices = searchParam.getSkuPrice().split("_");
        if (prices.length == 1) {
            if (searchParam.getSkuPrice().startsWith("_")) {
                rangeQueryBuilder.lte(Integer.parseInt(prices[0]));
            } else {
                rangeQueryBuilder.gte(Integer.parseInt(prices[0]));
            }
        } else if (prices.length == 2) {
            if (!prices[0].isEmpty()) {
                rangeQueryBuilder.gte(Integer.parseInt(prices[0]));
            }
            rangeQueryBuilder.lte(Integer.parseInt(prices[1]));
        }
        boolQueryBuilder.filter(rangeQueryBuilder);
    }
    // 6. 属性过滤,attrs 为 nested 类型,参数格式为 attrId_val1:val2
    List<String> attrs = searchParam.getAttrs();
    BoolQueryBuilder queryBuilder = new BoolQueryBuilder();
    if (attrs != null && attrs.size() > 0) {
        attrs.forEach(attr -> {
            String[] attrSplit = attr.split("_");
            queryBuilder.must(QueryBuilders.termQuery("attrs.attrId", attrSplit[0]));
            String[] attrValues = attrSplit[1].split(":");
            queryBuilder.must(QueryBuilders.termsQuery("attrs.attrValue", attrValues));
        });
    }
    NestedQueryBuilder nestedQueryBuilder = QueryBuilders.nestedQuery("attrs", queryBuilder, ScoreMode.None);
    boolQueryBuilder.filter(nestedQueryBuilder);
    searchSourceBuilder.query(boolQueryBuilder);
    // 7. 排序,参数格式为 field_asc/desc
    if (!StringUtils.isEmpty(searchParam.getSort())) {
        String[] sortSplit = searchParam.getSort().split("_");
        searchSourceBuilder.sort(sortSplit[0], sortSplit[1].equalsIgnoreCase("asc") ? SortOrder.ASC : SortOrder.DESC);
    }
    // 8. 分页
    searchSourceBuilder.from((searchParam.getPageNum() - 1) * EsConstant.PRODUCT_PAGESIZE);
    searchSourceBuilder.size(EsConstant.PRODUCT_PAGESIZE);
    // 9. 标题高亮
    if (!StringUtils.isEmpty(searchParam.getKeyword())) {
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("skuTitle");
        highlightBuilder.preTags("<b style='color:red'>");
        highlightBuilder.postTags("</b>");
        searchSourceBuilder.highlighter(highlightBuilder);
    }
    // 10. 品牌聚合(含品牌名、品牌图片子聚合)
    TermsAggregationBuilder brandAgg = AggregationBuilders.terms("brandAgg").field("brandId");
    TermsAggregationBuilder brandNameAgg = AggregationBuilders.terms("brandNameAgg").field("brandName");
    TermsAggregationBuilder brandImgAgg = AggregationBuilders.terms("brandImgAgg").field("brandImg");
    brandAgg.subAggregation(brandNameAgg);
    brandAgg.subAggregation(brandImgAgg);
    searchSourceBuilder.aggregation(brandAgg);
    // 11. 分类聚合(含分类名子聚合)
    TermsAggregationBuilder catalogAgg = AggregationBuilders.terms("catalogAgg").field("catalogId");
    TermsAggregationBuilder catalogNameAgg = AggregationBuilders.terms("catalogNameAgg").field("catalogName");
    catalogAgg.subAggregation(catalogNameAgg);
    searchSourceBuilder.aggregation(catalogAgg);
    // 12. 属性聚合(nested 聚合,含属性名、属性值子聚合)
    NestedAggregationBuilder nestedAggregationBuilder = new NestedAggregationBuilder("attrs", "attrs");
    TermsAggregationBuilder attrIdAgg = AggregationBuilders.terms("attrIdAgg").field("attrs.attrId");
    TermsAggregationBuilder attrNameAgg = AggregationBuilders.terms("attrNameAgg").field("attrs.attrName");
    TermsAggregationBuilder attrValueAgg = AggregationBuilders.terms("attrValueAgg").field("attrs.attrValue");
    attrIdAgg.subAggregation(attrNameAgg);
    attrIdAgg.subAggregation(attrValueAgg);
    nestedAggregationBuilder.subAggregation(attrIdAgg);
    searchSourceBuilder.aggregation(nestedAggregationBuilder);

    log.debug("构建的DSL语句 {}", searchSourceBuilder.toString());
    return new SearchRequest(new String[]{EsConstant.PRODUCT_INDEX}, searchSourceBuilder);
}
```
4) 封装响应结果

```java
private SearchResult buildSearchResult(SearchParam searchParam, SearchResponse searchResponse) {
    SearchResult result = new SearchResult();
    // 1. 封装查询到的商品
    SearchHits hits = searchResponse.getHits();
    if (hits.getHits() != null && hits.getHits().length > 0) {
        List<SkuEsModel> skuEsModels = new ArrayList<>();
        for (SearchHit hit : hits) {
            String sourceAsString = hit.getSourceAsString();
            SkuEsModel skuEsModel = JSON.parseObject(sourceAsString, SkuEsModel.class);
            // 有关键字检索时,将标题替换为高亮片段
            if (!StringUtils.isEmpty(searchParam.getKeyword())) {
                HighlightField skuTitle = hit.getHighlightFields().get("skuTitle");
                String highLight = skuTitle.getFragments()[0].string();
                skuEsModel.setSkuTitle(highLight);
            }
            skuEsModels.add(skuEsModel);
        }
        result.setProduct(skuEsModels);
    }
    // 2. 封装分页信息
    result.setPageNum(searchParam.getPageNum());
    long total = hits.getTotalHits().value;
    result.setTotal(total);
    Integer totalPages = (int) total % EsConstant.PRODUCT_PAGESIZE == 0 ?
            (int) total / EsConstant.PRODUCT_PAGESIZE : (int) total / EsConstant.PRODUCT_PAGESIZE + 1;
    result.setTotalPages(totalPages);
    List<Integer> pageNavs = new ArrayList<>();
    for (int i = 1; i <= totalPages; i++) {
        pageNavs.add(i);
    }
    result.setPageNavs(pageNavs);
    // 3. 解析品牌聚合
    List<SearchResult.BrandVo> brandVos = new ArrayList<>();
    Aggregations aggregations = searchResponse.getAggregations();
    ParsedLongTerms brandAgg = aggregations.get("brandAgg");
    for (Terms.Bucket bucket : brandAgg.getBuckets()) {
        Long brandId = bucket.getKeyAsNumber().longValue();
        Aggregations subBrandAggs = bucket.getAggregations();
        ParsedStringTerms brandImgAgg = subBrandAggs.get("brandImgAgg");
        String brandImg = brandImgAgg.getBuckets().get(0).getKeyAsString();
        Terms brandNameAgg = subBrandAggs.get("brandNameAgg");
        String brandName = brandNameAgg.getBuckets().get(0).getKeyAsString();
        SearchResult.BrandVo brandVo = new SearchResult.BrandVo(brandId, brandName, brandImg);
        brandVos.add(brandVo);
    }
    result.setBrands(brandVos);
    // 4. 解析分类聚合
    List<SearchResult.CatalogVo> catalogVos = new ArrayList<>();
    ParsedLongTerms catalogAgg = aggregations.get("catalogAgg");
    for (Terms.Bucket bucket : catalogAgg.getBuckets()) {
        Long catalogId = bucket.getKeyAsNumber().longValue();
        Aggregations subcatalogAggs = bucket.getAggregations();
        ParsedStringTerms catalogNameAgg = subcatalogAggs.get("catalogNameAgg");
        String catalogName = catalogNameAgg.getBuckets().get(0).getKeyAsString();
        SearchResult.CatalogVo catalogVo = new SearchResult.CatalogVo(catalogId, catalogName);
        catalogVos.add(catalogVo);
    }
    result.setCatalogs(catalogVos);
    // 5. 解析属性聚合(nested)
    List<SearchResult.AttrVo> attrVos = new ArrayList<>();
    ParsedNested parsedNested = aggregations.get("attrs");
    ParsedLongTerms attrIdAgg = parsedNested.getAggregations().get("attrIdAgg");
    for (Terms.Bucket bucket : attrIdAgg.getBuckets()) {
        Long attrId = bucket.getKeyAsNumber().longValue();
        Aggregations subAttrAgg = bucket.getAggregations();
        ParsedStringTerms attrNameAgg = subAttrAgg.get("attrNameAgg");
        String attrName = attrNameAgg.getBuckets().get(0).getKeyAsString();
        ParsedStringTerms attrValueAgg = subAttrAgg.get("attrValueAgg");
        List<String> attrValues = new ArrayList<>();
        for (Terms.Bucket attrValueAggBucket : attrValueAgg.getBuckets()) {
            attrValues.add(attrValueAggBucket.getKeyAsString());
        }
        SearchResult.AttrVo attrVo = new SearchResult.AttrVo(attrId, attrName, attrValues);
        attrVos.add(attrVo);
    }
    result.setAttrs(attrVos);
    // 6. 封装面包屑导航
    List<String> attrs = searchParam.getAttrs();
    if (attrs != null && attrs.size() > 0) {
        List<SearchResult.NavVo> navVos = attrs.stream().map(attr -> {
            String[] split = attr.split("_");
            SearchResult.NavVo navVo = new SearchResult.NavVo();
            navVo.setNavValue(split[1]);
            try {
                R r = productFeignService.info(Long.parseLong(split[0]));
                if (r.getCode() == 0) {
                    AttrResponseVo attrResponseVo = JSON.parseObject(JSON.toJSONString(r.get("attr")),
                            new TypeReference<AttrResponseVo>() {});
                    navVo.setNavName(attrResponseVo.getAttrName());
                }
            } catch (Exception e) {
                log.error("远程调用商品服务查询属性失败", e);
            }
            // 从查询串中去掉当前属性,生成取消该筛选条件的链接
            String queryString = searchParam.get_queryString();
            String replace = queryString.replace("&attrs=" + attr, "")
                    .replace("attrs=" + attr + "&", "").replace("attrs=" + attr, "");
            navVo.setLink("http://search.gulimall.com/search.html" + (replace.isEmpty() ? "" : "?" + replace));
            return navVo;
        }).collect(Collectors.toList());
        result.setNavs(navVos);
    }
    return result;
}
```
4. 页面效果 1) 基本数据渲染 将商品的基本属性渲染出来
```html
<div class="rig_tab">
    <div th:each="product : ${result.getProduct()}">
        <div class="ico">
            <i class="iconfont icon-weiguanzhu"></i>
            <a href="/static/search/#">关注</a>
        </div>
        <p class="da">
            <a th:href="|http://item.gulimall.com/${product.skuId}.html|">
                <img class="dim" th:src="${product.skuImg}">
            </a>
        </p>
        <ul class="tab_im">
            <li><a href="/static/search/#" title="黑色"><img th:src="${product.skuImg}"></a></li>
        </ul>
        <p class="tab_R">
            <span th:text="'¥' + ${product.skuPrice}">¥5199.00</span>
        </p>
        <p class="tab_JE">
            <a href="/static/search/#" th:utext="${product.skuTitle}">
                Apple iPhone 7 Plus (A1661) 32G 黑色 移动联通电信4G手机
            </a>
        </p>
        <p class="tab_PI">已有<span>11万+</span>热门评价
            <a href="/static/search/#">二手有售</a>
        </p>
        <p class="tab_CP">
            <a href="/static/search/#" title="谷粒商城Apple产品专营店">谷粒商城Apple产品...</a>
            <a href='#' title="联系供应商进行咨询"><img src="/static/search/img/xcxc.png"></a>
        </p>
        <div class="tab_FO">
            <div class="FO_one">
                <p>自营 <span>谷粒商城自营,品质保证</span></p>
                <p>满赠 <span>该商品参加满赠活动</span></p>
            </div>
        </div>
    </div>
</div>
```
2) 筛选条件渲染 将结果的品牌、分类、商品属性进行遍历显示,并且点击某个属性值时可以通过拼接url进行跳转
```html
<div class="JD_nav_logo">
    <div class="JD_nav_wrap">
        <div class="sl_key"><span>品牌:</span></div>
        <div class="sl_value">
            <div class="sl_value_logo">
                <ul>
                    <li th:each="brand: ${result.getBrands()}">
                        <a href="#" th:href="${'javascript:searchProducts(&quot;brandId&quot;,'+brand.brandId+')'}">
                            <img src="/static/search/img/598033b4nd6055897.jpg" alt="" th:src="${brand.brandImg}">
                            <div th:text="${brand.brandName}">华为(HUAWEI)</div>
                        </a>
                    </li>
                </ul>
            </div>
        </div>
        <div class="sl_ext">
            <a href="#">更多
                <i style='background: url("image/search.ele.png") no-repeat 3px 7px'></i>
                <b style='background: url("image/search.ele.png") no-repeat 3px -44px'></b>
            </a>
            <a href="#">多选 <i>+</i><span>+</span></a>
        </div>
    </div>
    <div class="JD_pre" th:each="catalog: ${result.getCatalogs()}">
        <div class="sl_key"><span>分类:</span></div>
        <div class="sl_value">
            <ul>
                <li>
                    <a href="#" th:text="${catalog.getCatalogName()}"
                       th:href="${'javascript:searchProducts(&quot;catalogId&quot;,'+catalog.catalogId+')'}">0-安卓(Android)</a>
                </li>
            </ul>
        </div>
    </div>
    <div class="JD_pre">
        <div class="sl_key"><span>价格:</span></div>
        <div class="sl_value">
            <ul>
                <li><a href="#">0-499</a></li>
                <li><a href="#">500-999</a></li>
                <li><a href="#">1000-1699</a></li>
                <li><a href="#">1700-2799</a></li>
                <li><a href="#">2800-4499</a></li>
                <li><a href="#">4500-11999</a></li>
                <li><a href="#">12000以上</a></li>
                <li class="sl_value_li">
                    <input type="text"><p>-</p><input type="text">
                    <a href="#">确定</a>
                </li>
            </ul>
        </div>
    </div>
    <div class="JD_pre" th:each="attr: ${result.getAttrs()}">
        <div class="sl_key"><span th:text="${attr.getAttrName()}">系统:</span></div>
        <div class="sl_value">
            <ul>
                <li th:each="val: ${attr.getAttrValue()}">
                    <a href="#" th:text="${val}"
                       th:href="${'javascript:searchProducts(&quot;attrs&quot;,&quot;'+attr.attrId+'_'+val+'&quot;)'}">0-安卓(Android)</a>
                </li>
            </ul>
        </div>
    </div>
</div>
```
```js
function searchProducts(name, value) {
    location.href = replaceParamVal(location.href, name, value, true);
}

// 替换(或追加)url中某个查询参数;forceAdd为true时允许同名参数重复追加(用于attrs多选)
function replaceParamVal(url, paramName, replaceVal, forceAdd) {
    var oUrl = url.toString();
    var nUrl;
    if (oUrl.indexOf(paramName) != -1) {
        if (forceAdd && oUrl.indexOf(paramName + "=" + replaceVal) == -1) {
            if (oUrl.indexOf("?") != -1) {
                nUrl = oUrl + "&" + paramName + "=" + replaceVal;
            } else {
                nUrl = oUrl + "?" + paramName + "=" + replaceVal;
            }
        } else {
            // 用 RegExp 构造正则替换旧值,避免使用 eval
            var re = new RegExp('(' + paramName + '=)([^&]*)', 'gi');
            nUrl = oUrl.replace(re, paramName + '=' + replaceVal);
        }
    } else {
        if (oUrl.indexOf("?") != -1) {
            nUrl = oUrl + "&" + paramName + "=" + replaceVal;
        } else {
            nUrl = oUrl + "?" + paramName + "=" + replaceVal;
        }
    }
    return nUrl;
}
```
3) 分页数据渲染 将页码绑定至属性pn,当点击某页码时,通过获取pn值进行url拼接跳转页面
```html
<div class="filter_page">
    <div class="page_wrap">
        <span class="page_span1">
            <a class="page_a" href="#" th:if="${result.pageNum > 1}"
               th:attr="pn=${result.getPageNum()-1}">< 上一页</a>
            <a href="#" class="page_a" th:each="page: ${result.pageNavs}" th:text="${page}"
               th:style="${page==result.pageNum ? 'border: 0;color:#ee2222;background: #fff' : ''}"
               th:attr="pn=${page}">1</a>
            <a href="#" class="page_a" th:if="${result.pageNum < result.totalPages}"
               th:attr="pn=${result.getPageNum()+1}">下一页 ></a>
        </span>
        <span class="page_span2">
            <em>共<b th:text="${result.totalPages}">169</b>页 到第</em>
            <input type="number" value="1" class="page_input">
            <em>页</em>
            <a href="#">确定</a>
        </span>
    </div>
</div>
```
```js
$(".page_a").click(function () {
    var pn = $(this).attr("pn");
    location.href = replaceParamVal(location.href, "pageNum", pn, false);
});
```
4) 页面排序和价格区间 页面排序功能需要保证,点击某个按钮时,样式会变红,并且其他的样式保持最初的样子;
点击某个排序时首先按升序显示,再次点击再变为降序,并且还会显示上升或下降箭头
页面排序跳转的思路是:点击某个按钮时向其`class`属性添加/去除`desc`,并根据该属性值进行url拼接
```html
<div class="filter_top">
    <div class="filter_top_left" th:with="p = ${param.sort}, priceRange = ${param.skuPrice}">
        <a sort="hotScore"
           th:class="${(!#strings.isEmpty(p) && #strings.startsWith(p,'hotScore') && #strings.endsWith(p,'desc')) ? 'sort_a desc' : 'sort_a'}"
           th:attr="style=${(#strings.isEmpty(p) || #strings.startsWith(p,'hotScore')) ? 'color: #fff; border-color: #e4393c; background: #e4393c;' : 'color: #333; border-color: #ccc; background: #fff;'}">
            综合排序[[${(!#strings.isEmpty(p) && #strings.startsWith(p,'hotScore') && #strings.endsWith(p,'desc')) ? '↓' : '↑'}]]</a>
        <a sort="saleCount"
           th:class="${(!#strings.isEmpty(p) && #strings.startsWith(p,'saleCount') && #strings.endsWith(p,'desc')) ? 'sort_a desc' : 'sort_a'}"
           th:attr="style=${(!#strings.isEmpty(p) && #strings.startsWith(p,'saleCount')) ? 'color: #fff; border-color: #e4393c; background: #e4393c;' : 'color: #333; border-color: #ccc; background: #fff;'}">
            销量[[${(!#strings.isEmpty(p) && #strings.startsWith(p,'saleCount') && #strings.endsWith(p,'desc')) ? '↓' : '↑'}]]</a>
        <a sort="skuPrice"
           th:class="${(!#strings.isEmpty(p) && #strings.startsWith(p,'skuPrice') && #strings.endsWith(p,'desc')) ? 'sort_a desc' : 'sort_a'}"
           th:attr="style=${(!#strings.isEmpty(p) && #strings.startsWith(p,'skuPrice')) ? 'color: #fff; border-color: #e4393c; background: #e4393c;' : 'color: #333; border-color: #ccc; background: #fff;'}">
            价格[[${(!#strings.isEmpty(p) && #strings.startsWith(p,'skuPrice') && #strings.endsWith(p,'desc')) ? '↓' : '↑'}]]</a>
        <a sort="hotScore" class="sort_a">评论分</a>
        <a sort="hotScore" class="sort_a">上架时间</a>
        <input id="skuPriceFrom" type="number"
               th:value="${#strings.isEmpty(priceRange) ? '' : #strings.substringBefore(priceRange,'_')}"
               style="width: 100px; margin-left: 30px"> -
        <input id="skuPriceTo" type="number"
               th:value="${#strings.isEmpty(priceRange) ? '' : #strings.substringAfter(priceRange,'_')}"
               style="width: 100px">
        <button id="skuPriceSearchBtn">确定</button>
    </div>
    <div class="filter_top_right">
        <span class="fp-text"><b>1</b><em>/</em><i>169</i></span>
        <a href="#" class="prev"><</a>
        <a href="#" class="next">></a>
    </div>
</div>
```
```js
$(".sort_a").click(function () {
    $(this).toggleClass("desc");
    let sort = $(this).attr("sort");
    sort = $(this).hasClass("desc") ? sort + "_desc" : sort + "_asc";
    location.href = replaceParamVal(location.href, "sort", sort, false);
    return false;
});
```
价格区间搜索函数
```js
$("#skuPriceSearchBtn").click(function () {
    var skuPriceFrom = $("#skuPriceFrom").val();
    var skuPriceTo = $("#skuPriceTo").val();
    location.href = replaceParamVal(location.href, "skuPrice", skuPriceFrom + "_" + skuPriceTo, false);
});
```
5) 面包屑导航 在封装结果时,将查询的属性值进行封装
```java
List<String> attrs = searchParam.getAttrs();
if (attrs != null && attrs.size() > 0) {
    List<SearchResult.NavVo> navVos = attrs.stream().map(attr -> {
        String[] split = attr.split("_");
        SearchResult.NavVo navVo = new SearchResult.NavVo();
        navVo.setNavValue(split[1]);
        try {
            R r = productFeignService.info(Long.parseLong(split[0]));
            if (r.getCode() == 0) {
                AttrResponseVo attrResponseVo = JSON.parseObject(JSON.toJSONString(r.get("attr")),
                        new TypeReference<AttrResponseVo>() {});
                navVo.setNavName(attrResponseVo.getAttrName());
            }
        } catch (Exception e) {
            log.error("远程调用商品服务查询属性失败", e);
        }
        String queryString = searchParam.get_queryString();
        String replace = queryString.replace("&attrs=" + attr, "")
                .replace("attrs=" + attr + "&", "").replace("attrs=" + attr, "");
        navVo.setLink("http://search.gulimall.com/search.html" + (replace.isEmpty() ? "" : "?" + replace));
        return navVo;
    }).collect(Collectors.toList());
    result.setNavs(navVos);
}
```
页面渲染
```html
<div class="JD_ipone_one c">
    <a th:href="${nav.link}" th:each="nav:${result.navs}">
        <span th:text="${nav.navName}"></span>:<span th:text="${nav.navValue}"></span> x
    </a>
</div>
```
6) 条件筛选联动 就是将品牌和分类也封装进面包屑数据中,并且在页面进行th:if的判断,当url有该属性的查询条件时就不进行显示了
异步

1. 线程

1) 初始化线程的 4 种方式

1)、继承 Thread
2)、实现 Runnable 接口
3)、实现 Callable 接口 + FutureTask(可以拿到返回结果,可以处理异常)
4)、线程池
方式 1 和方式 2:主进程无法获取线程的运算结果,不适合当前场景。 方式 3:主进程可以获取线程的运算结果,但是不利于控制服务器中的线程资源,可能导致服务器资源耗尽。
方式 4:通过如下两种方式初始化线程池
```java
Executors.newFixedThreadPool(3);
```
或者
```java
new ThreadPoolExecutor(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
```
使用线程池性能稳定,也可以获取执行结果,并捕获异常。但是,在业务复杂的情况下,一个异步调用可能会依赖于另一个异步调用的执行结果。
2) 线程池的七大参数
运行流程: 1、线程池创建,准备好 core 数量的核心线程,准备接受任务
2、新的任务进来,用 core 准备好的空闲线程执行。
(1)core 满了,就将再进来的任务放入阻塞队列中。空闲的 core 线程会自己去阻塞队列获取任务执行 (2)阻塞队列满了,就直接开新线程执行,最大只能开到 max 指定的数量 (3)任务都执行完后,max-core 数量的空闲线程会在 keepAliveTime 指定的时间后自动销毁,最终保持到 core 大小 (4)如果线程数已开到 max 的数量、队列也已满,还有新任务进来,就会使用 reject 指定的拒绝策略进行处理
3、所有的线程创建都是由指定的 factory 创建的。
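上面的七大参数可以用一个小例子直观对应(其中具体的参数取值只是演示用的假设值):

```java
import java.util.concurrent.*;

public class ThreadPoolDemo {
    // 按七大参数手动创建线程池(参数取值仅为演示的假设值)
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                // corePoolSize:核心线程数
                4,                                // maximumPoolSize:最大线程数
                10, TimeUnit.SECONDS,             // keepAliveTime + unit:空闲线程存活时间
                new LinkedBlockingQueue<>(100),   // workQueue:阻塞队列
                Executors.defaultThreadFactory(), // threadFactory:线程工厂
                new ThreadPoolExecutor.AbortPolicy()); // handler:队列满且达到max时的拒绝策略
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor executor = newPool();
        Future<Integer> future = executor.submit(() -> 1 + 1);
        System.out.println(future.get()); // 2
        executor.shutdown();
    }
}
```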
3) 常见的4 种线程池
newCachedThreadPool
创建一个可缓存线程池,如果线程池长度超过处理需要,可灵活回收空闲线程,若无可回收,则新建线程
newFixedThreadPool
创建一个定长线程池,可控制线程最大并发数,超出的线程会在队列中等待。
newScheduledThreadPool

创建一个定长线程池,支持定时以及周期性任务执行。
newSingleThreadExecutor
创建一个单线程化的线程池,它只会用唯一的工作线程来执行任务,保证所有任务按照指定顺序(FIFO, LIFO, 优先级)执行。
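这四种线程池都通过 Executors 的静态工厂方法创建,下面是一个简单的用法示意:

```java
import java.util.concurrent.*;

public class ExecutorsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService cached = Executors.newCachedThreadPool();       // 可缓存,按需创建、空闲回收
        ExecutorService fixed = Executors.newFixedThreadPool(3);        // 定长,固定 3 个线程
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // 支持定时/周期任务
        ExecutorService single = Executors.newSingleThreadExecutor();   // 单线程,任务按提交顺序执行

        System.out.println(fixed.submit(() -> "ok").get());
        // 延迟 100ms 后执行
        ScheduledFuture<String> sf = scheduled.schedule(() -> "delayed", 100, TimeUnit.MILLISECONDS);
        System.out.println(sf.get());

        cached.shutdown(); fixed.shutdown(); scheduled.shutdown(); single.shutdown();
    }
}
```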
4) 开发中为什么使用线程池
降低资源的消耗
通过重复利用已经创建好的线程降低线程的创建和销毁带来的损耗
提高响应速度
因为线程池中的线程数没有超过线程池的最大上限时,有的线程处于等待分配任务的状态,当任务来时无需创建新的线程就能执行
提高线程的可管理性
线程池会根据当前系统特点对池内的线程进行优化处理,减少创建和销毁线程带来的系统开销。无限的创建和销毁线程不仅消耗系统资源,还降低系统的稳定性,使用线程池进行统一分配
2.CompletableFuture组合式异步编程 (1) runAsync 和 supplyAsync方法 CompletableFuture 提供了四个静态方法来创建一个异步操作。
```java
public static CompletableFuture<Void> runAsync(Runnable runnable)
public static CompletableFuture<Void> runAsync(Runnable runnable, Executor executor)
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier)
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier, Executor executor)
```
没有指定Executor的方法会使用ForkJoinPool.commonPool() 作为它的线程池执行异步代码。如果指定线程池,则使用指定的线程池运行。以下所有的方法都类同。
runAsync方法不支持返回值。
supplyAsync可以支持返回值。
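二者的区别可以用一个最小示例体会:

```java
import java.util.concurrent.CompletableFuture;

public class CreateDemo {
    public static void main(String[] args) {
        // runAsync:执行任务,无返回值
        CompletableFuture<Void> run = CompletableFuture.runAsync(
                () -> System.out.println("runAsync 执行"));
        run.join();

        // supplyAsync:执行任务并拿到返回值
        CompletableFuture<Integer> supply = CompletableFuture.supplyAsync(() -> 10 / 2);
        System.out.println(supply.join()); // 5
    }
}
```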
(2) 计算结果完成时的回调方法 当CompletableFuture的计算结果完成,或者抛出异常的时候,可以执行特定的Action。主要是下面的方法:
```java
public CompletableFuture<T> whenComplete(BiConsumer<? super T, ? super Throwable> action)
public CompletableFuture<T> whenCompleteAsync(BiConsumer<? super T, ? super Throwable> action)
public CompletableFuture<T> whenCompleteAsync(BiConsumer<? super T, ? super Throwable> action, Executor executor)
public CompletableFuture<T> exceptionally(Function<Throwable, ? extends T> fn)
```
可以看到Action的类型是BiConsumer<? super T,? super Throwable>它可以处理正常的计算结果,或者异常情况。
whenComplete 可以处理正常和异常的计算结果,exceptionally 处理异常情况。 whenComplete 和 whenCompleteAsync 的区别:
whenComplete:由执行当前任务的线程继续执行 whenComplete 中的任务。

whenCompleteAsync:把 whenCompleteAsync 中的任务重新提交给线程池来执行。
方法不以 Async 结尾,意味着 Action 使用相同的线程执行,而 Async 可能会使用其他线程执行(如果是使用相同的线程池,也可能会被同一个线程选中执行)
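whenComplete 只能感知结果和异常、不能改变返回值,exceptionally 则可以在异常时给出兜底值,用一个除零的例子示意:

```java
import java.util.concurrent.CompletableFuture;

public class WhenCompleteDemo {
    static Integer run(int divisor) {
        return CompletableFuture.supplyAsync(() -> 10 / divisor)
                .whenComplete((res, ex) -> {
                    // whenComplete 能感知结果和异常,但不能修改返回值
                    if (ex != null) System.out.println("出现异常:" + ex);
                })
                .exceptionally(ex -> -1) // exceptionally 在异常时返回兜底值
                .join();
    }

    public static void main(String[] args) {
        System.out.println(run(2)); // 5
        System.out.println(run(0)); // -1(除零异常被兜底)
    }
}
```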
(3) handle 方法 handle 是执行任务完成时对结果的处理。 handle 方法和 thenApply 方法处理方式基本一样。不同的是 handle 是在任务完成后再执行,还可以处理异常的任务。thenApply 只可以执行正常的任务,任务出现异常则不执行 thenApply 方法。
```java
public <U> CompletionStage<U> handle(BiFunction<? super T, Throwable, ? extends U> fn);
public <U> CompletionStage<U> handleAsync(BiFunction<? super T, Throwable, ? extends U> fn);
public <U> CompletionStage<U> handleAsync(BiFunction<? super T, Throwable, ? extends U> fn, Executor executor);
```
和 complete 一样,可对结果做最后的处理(可处理异常),可改变返回值。
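handle 在正常和异常两条路径上都可以加工并改变返回值,示例:

```java
import java.util.concurrent.CompletableFuture;

public class HandleDemo {
    static Integer run(int divisor) {
        return CompletableFuture.supplyAsync(() -> 10 / divisor)
                .handle((res, ex) -> {
                    if (ex != null) return 0; // 异常时修改返回值
                    return res * 2;           // 正常结果也可以加工
                })
                .join();
    }

    public static void main(String[] args) {
        System.out.println(run(2)); // 10
        System.out.println(run(0)); // 0
    }
}
```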
(4) 线程串行化
thenApply 方法:当一个线程依赖另一个线程时,获取上一个任务返回的结果,并返回当前任务的返回值。
thenAccept 方法:消费处理结果。接收任务的处理结果,并消费处理,无返回结果。
thenRun 方法:只要上面的任务执行完成,就开始执行 thenRun,只是处理完任务后,执行thenRun 的后续操作
带有 Async 默认是异步执行的。同之前。
以上都要前置任务成功完成。 Function<? super T,? extends U>
T:上一个任务返回结果的类型
U:当前任务的返回值类型
thenRun:不能获取上一步的执行结果
thenAcceptAsync:能接受上一步结果,但是无返回值
thenApplyAsync:能接受上一步结果,有返回值
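三者的差别可以放在一条链里对比:

```java
import java.util.concurrent.CompletableFuture;

public class SerialDemo {
    static String chain() {
        return CompletableFuture.supplyAsync(() -> 5)
                .thenApply(res -> res * 2)          // thenApply:接收上一步结果,有返回值
                .thenApply(res -> "result=" + res)  // 继续串行加工
                .join();
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = CompletableFuture.completedFuture(chain());
        future.thenAccept(System.out::println);           // thenAccept:消费结果,无返回值
        future.thenRun(() -> System.out.println("done")); // thenRun:拿不到结果,只做后续动作
    }
}
```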
(5) 两任务组合 - 都要完成
两个任务必须都完成,触发该任务。
thenCombine:组合两个 future,获取两个 future 的返回结果,并返回当前任务的返回值thenAcceptBoth:组合两个 future,获取两个 future 任务的返回结果,然后处理任务,没有返回值。 runAfterBoth:组合两个 future,不需要获取 future 的结果,只需两个 future 处理完任务后,处理该任务。
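三个组合方法的最小示例:

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    static int combinedSum() {
        CompletableFuture<Integer> f1 = CompletableFuture.supplyAsync(() -> 1);
        CompletableFuture<Integer> f2 = CompletableFuture.supplyAsync(() -> 2);
        // thenCombine:两个都完成后,取两个结果并返回新值
        return f1.thenCombine(f2, Integer::sum).join();
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> f1 = CompletableFuture.supplyAsync(() -> 1);
        CompletableFuture<Integer> f2 = CompletableFuture.supplyAsync(() -> 2);
        System.out.println(combinedSum()); // 3
        // thenAcceptBoth:取两个结果并消费,无返回值
        f1.thenAcceptBoth(f2, (a, b) -> System.out.println(a + "+" + b)).join();
        // runAfterBoth:不取结果,两个都完成后执行
        f1.runAfterBoth(f2, () -> System.out.println("both done")).join();
    }
}
```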
(6) 两任务组合 - 一个完成
当两个任务中,任意一个 future 任务完成的时候,执行任务。
applyToEither:两个任务有一个执行完成,获取它的返回值,处理任务并有新的返回值。 acceptEither:两个任务有一个执行完成,获取它的返回值,处理任务,没有新的返回值。 runAfterEither:两个任务有一个执行完成,不需要获取 future 的结果,处理任务,也没有返回值。
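以 applyToEither 为例,为了结果确定,示例里让一个 future 直接完成、另一个故意睡眠:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class EitherDemo {
    static String firstResult() {
        CompletableFuture<String> fast = CompletableFuture.completedFuture("fast");
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(500); } catch (InterruptedException ignored) {}
            return "slow";
        });
        // applyToEither:谁先完成就取谁的结果,并产生新的返回值
        return fast.applyToEither(slow, res -> res + "-win").join();
    }

    public static void main(String[] args) {
        System.out.println(firstResult()); // fast-win
    }
}
```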
(7) 多任务组合
allOf:等待所有任务完成
anyOf:只要有一个任务完成
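allOf 返回的是 `CompletableFuture<Void>`,各任务结果需要再从各自的 future 中取;anyOf 返回最先完成任务的结果(类型为 Object)。示例:

```java
import java.util.concurrent.CompletableFuture;

public class AllAnyDemo {
    public static void main(String[] args) {
        CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> "基本信息");
        CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> "图片");
        CompletableFuture<String> f3 = CompletableFuture.supplyAsync(() -> "销售属性");

        // allOf:等全部完成,再分别取结果
        CompletableFuture.allOf(f1, f2, f3).join();
        System.out.println(f1.join() + "/" + f2.join() + "/" + f3.join());

        // anyOf:任意一个完成即返回它的结果
        Object any = CompletableFuture.anyOf(f1, f2, f3).join();
        System.out.println(any);
    }
}
```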
商品详情 1. 模型抽取 模仿京东商品详情页,如下图所示,包括sku基本信息,图片信息,销售属性,图片介绍和规格参数
因此建立以下vo
```java
@ToString
@Data
public class SkuItemVo {
    private SkuInfoEntity info;                   // sku基本信息
    private boolean hasStock = true;              // 是否有库存
    private List<SkuImagesEntity> images;         // 图片信息
    private List<SkuItemSaleAttrVo> saleAttr;     // 销售属性
    private SpuInfoDescEntity desc;               // 图片介绍
    private List<SpuItemAttrGroupVo> groupAttrs;  // 规格参数
}

@Data
@ToString
public class SkuItemSaleAttrVo {
    private Long attrId;
    private String attrName;
    private List<AttrValueWithSkuIdVo> attrValues;
}

@Data
@ToString
public class SpuItemAttrGroupVo {
    private String groupName;
    private List<Attr> attrs;
}
```
2. 封装商品属性

(1) 总体思路

```java
@GetMapping("/{skuId}.html")
public String skuItem(@PathVariable("skuId") Long skuId, Model model) {
    SkuItemVo skuItemVo = skuInfoService.item(skuId);
    model.addAttribute("item", skuItemVo);
    return "item";
}

@Override
public SkuItemVo item(Long skuId) {
    SkuItemVo skuItemVo = new SkuItemVo();
    // 1. sku基本信息
    SkuInfoEntity skuInfoEntity = this.getById(skuId);
    skuItemVo.setInfo(skuInfoEntity);
    Long spuId = skuInfoEntity.getSpuId();
    Long catalogId = skuInfoEntity.getCatalogId();
    // 2. sku图片信息
    List<SkuImagesEntity> skuImagesEntities = skuImagesService.list(
            new QueryWrapper<SkuImagesEntity>().eq("sku_id", skuId));
    skuItemVo.setImages(skuImagesEntities);
    // 3. spu销售属性
    List<SkuItemSaleAttrVo> saleAttrVos = skuSaleAttrValueService.listSaleAttrs(spuId);
    skuItemVo.setSaleAttr(saleAttrVos);
    // 4. spu介绍
    SpuInfoDescEntity byId = spuInfoDescService.getById(spuId);
    skuItemVo.setDesc(byId);
    // 5. spu规格参数
    List<SpuItemAttrGroupVo> spuItemAttrGroupVos =
            productAttrValueService.getProductGroupAttrsBySpuId(spuId, catalogId);
    skuItemVo.setGroupAttrs(spuItemAttrGroupVos);
    return skuItemVo;
}
```
(2) 获取spu的销售属性

由于我们需要获取该spu下所有sku的销售属性,因此需要先从`pms_sku_info`查出该`spuId`对应的`skuId`,再在`pms_sku_sale_attr_value`表中查出上述`skuId`对应的属性。因此需要使用连表查询,并通过分组将单个属性值对应的多个`skuId`组成集合,效果如下
==为什么要设计成这种模式呢?==

因为这样可以在页面切换销售属性时,快速得到对应的`skuId`:比如白色对应的`sku_ids`为30,29,8+128GB对应的`sku_ids`为29,31,27,那么销售属性为白色、8+128GB的商品的`skuId`即为二者的交集29
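上面取交集的思路可以用一小段Java示意(数据为上文举例的假设值):

```java
import java.util.*;

public class SkuIntersectDemo {
    // 对多个销售属性各自的 sku_ids(逗号分隔)取交集,定位目标 sku
    public static Set<String> intersect(List<String> skuIdGroups) {
        Set<String> result = null;
        for (String group : skuIdGroups) {
            Set<String> ids = new HashSet<>(Arrays.asList(group.split(",")));
            if (result == null) {
                result = ids;
            } else {
                result.retainAll(ids); // 求交集
            }
        }
        return result == null ? new HashSet<>() : result;
    }

    public static void main(String[] args) {
        // 白色 -> "30,29",8+128GB -> "29,31,27"
        System.out.println(intersect(Arrays.asList("30,29", "29,31,27"))); // [29]
    }
}
```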
```xml
<resultMap id="SkuItemSaleAttrMap" type="io.niceseason.gulimall.product.vo.SkuItemSaleAttrVo">
    <result property="attrId" column="attr_id"/>
    <result property="attrName" column="attr_name"/>
    <collection property="attrValues" ofType="io.niceseason.gulimall.product.vo.AttrValueWithSkuIdVo">
        <result property="attrValue" column="attr_value"/>
        <result property="skuIds" column="sku_ids"/>
    </collection>
</resultMap>

<select id="listSaleAttrs" resultMap="SkuItemSaleAttrMap">
    SELECT attr_id, attr_name, attr_value, GROUP_CONCAT(info.sku_id) sku_ids
    FROM pms_sku_info info
    LEFT JOIN pms_sku_sale_attr_value ssav ON info.sku_id = ssav.sku_id
    WHERE info.spu_id = #{spuId}
    GROUP BY ssav.attr_id, ssav.attr_name, ssav.attr_value
</select>
```
(3) 获取spu的规格参数信息

由于需要通过`spuId`和`catalogId`查询对应规格参数,所以我们需要:先通过`pms_attr_group`表由`catalogId`获得`attrGroupName`,然后通过`pms_attr_attrgroup_relation`获取分组对应的属性id,再到`pms_product_attr_value`查询`spuId`对应的属性。最终sql联表含有需要的所有属性
```java
@Mapper
public interface ProductAttrValueDao extends BaseMapper<ProductAttrValueEntity> {
    List<SpuItemAttrGroupVo> getProductGroupAttrsBySpuId(@Param("spuId") Long spuId,
                                                         @Param("catalogId") Long catalogId);
}
```
```xml
<resultMap id="ProductGroupAttrsMap" type="io.niceseason.gulimall.product.vo.SpuItemAttrGroupVo">
    <result property="groupName" column="attr_group_name"/>
    <collection property="attrs" ofType="io.niceseason.gulimall.product.vo.Attr">
        <result property="attrId" column="attr_id"/>
        <result property="attrName" column="attr_name"/>
        <result property="attrValue" column="attr_value"/>
    </collection>
</resultMap>

<select id="getProductGroupAttrsBySpuId" resultMap="ProductGroupAttrsMap">
    SELECT ag.attr_group_name, attr.attr_id, attr.attr_name, attr.attr_value
    FROM pms_attr_attrgroup_relation aar
    LEFT JOIN pms_attr_group ag ON aar.attr_group_id = ag.attr_group_id
    LEFT JOIN pms_product_attr_value attr ON aar.attr_id = attr.attr_id
    WHERE attr.spu_id = #{spuId} AND ag.catelog_id = #{catalogId}
</select>
```
3. 使用异步编排 为了使我们的任务进行的更快,我们可以让查询的各个子任务多线程执行,但是由于各个任务之间可能有相互依赖的关系,因此就涉及到了异步编排。
在这次查询中,spu的销售属性、介绍、规格参数都需要`spuId`,依赖sku基本信息(任务1)的查询结果,所以要让这些任务在任务1之后运行。因为需要任务1的结果且无需返回值,调用`thenAcceptAsync()`即可。最后调用`get()`方法阻塞等待,确保所有任务都执行完成
```java
public SkuItemVo item(Long skuId) {
    SkuItemVo skuItemVo = new SkuItemVo();
    // 任务1:sku基本信息,后续任务依赖其返回值
    CompletableFuture<SkuInfoEntity> infoFuture = CompletableFuture.supplyAsync(() -> {
        SkuInfoEntity skuInfoEntity = this.getById(skuId);
        skuItemVo.setInfo(skuInfoEntity);
        return skuInfoEntity;
    }, executor);
    // 任务2:sku图片信息,不依赖其他任务
    CompletableFuture<Void> imageFuture = CompletableFuture.runAsync(() -> {
        List<SkuImagesEntity> skuImagesEntities = skuImagesService.list(
                new QueryWrapper<SkuImagesEntity>().eq("sku_id", skuId));
        skuItemVo.setImages(skuImagesEntities);
    }, executor);
    // 任务3:spu销售属性,依赖任务1的spuId
    CompletableFuture<Void> saleFuture = infoFuture.thenAcceptAsync((info) -> {
        List<SkuItemSaleAttrVo> saleAttrVos = skuSaleAttrValueService.listSaleAttrs(info.getSpuId());
        skuItemVo.setSaleAttr(saleAttrVos);
    }, executor);
    // 任务4:spu介绍,依赖任务1
    CompletableFuture<Void> descFuture = infoFuture.thenAcceptAsync((info) -> {
        SpuInfoDescEntity byId = spuInfoDescService.getById(info.getSpuId());
        skuItemVo.setDesc(byId);
    }, executor);
    // 任务5:spu规格参数,依赖任务1
    CompletableFuture<Void> attrFuture = infoFuture.thenAcceptAsync((info) -> {
        List<SpuItemAttrGroupVo> spuItemAttrGroupVos =
                productAttrValueService.getProductGroupAttrsBySpuId(info.getSpuId(), info.getCatalogId());
        skuItemVo.setGroupAttrs(spuItemAttrGroupVos);
    }, executor);
    try {
        // 等待所有任务完成(任务3、4、5完成时任务1必然已完成)
        CompletableFuture.allOf(imageFuture, saleFuture, descFuture, attrFuture).get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
    return skuItemVo;
}
```
4. 页面的sku切换

通过控制class中是否包含`checked`来控制显示样式,因此要根据`skuId`判断
```html
<dd th:each="val : ${attr.attrValues}">
    <a th:attr="class=${#lists.contains(#strings.listSplit(val.skuIds,','), item.info.skuId.toString()) ? 'sku_attr_value checked' : 'sku_attr_value'},
                skus=${val.skuIds}">
        [[${val.attrValue}]]
    </a>
</dd>
```
```js
$(".sku_attr_value").click(function () {
    // 1. 当前属性组内互斥选中
    $(this).parent().parent().find(".sku_attr_value").removeClass("checked");
    $(this).addClass("checked");
    changeCheckedStyle();
    // 2. 收集所有选中属性值对应的sku_ids
    let skus = new Array();
    $("a[class='sku_attr_value checked']").each(function () {
        skus.push($(this).attr("skus").split(","));
    });
    // 3. 取各组sku_ids的交集,即目标skuId
    let filterEle = skus[0];
    for (let i = 1; i < skus.length; i++) {
        filterEle = $(filterEle).filter(skus[i])[0];
    }
    location.href = "http://item.gulimall.com/" + filterEle + ".html";
    return false;
});

function changeCheckedStyle() {
    $(".sku_attr_value").parent().css({"border": "solid 1px #ccc"});
    $("a[class='sku_attr_value checked']").parent().css({"border": "solid 1px red"});
}
```
认证服务

1. 环境搭建

创建`gulimall-auth-server`模块,导入依赖,引入`login.html`和`reg.html`,并把静态资源放到nginx的static目录下
2. 注册功能

(1) 验证码倒计时

```js
$("#sendCode").click(function () {
    if ($(this).hasClass("disabled")) {
        // 倒计时未结束,忽略点击
    } else {
        timeOutChangeStyle();
        var phone = $("#phoneNum").val();
        $.get("/sms/sendCode?phone=" + phone, function (data) {
            if (data.code != 0) {
                alert(data.msg);
            }
        });
    }
});

let time = 60;
function timeOutChangeStyle() {
    $("#sendCode").attr("class", "disabled");
    if (time == 0) {
        $("#sendCode").text("点击发送验证码");
        time = 60;
        $("#sendCode").attr("class", "");
    } else {
        $("#sendCode").text(time + "s后再次发送");
        time--;
        setTimeout(timeOutChangeStyle, 1000);
    }
}
```
(2) 整合短信服务

在阿里云网页购买试用的短信服务。在`gulimall-third-party`中编写发送短信组件,其中`host`、`path`、`appcode`可以在配置文件中使用前缀`spring.cloud.alicloud.sms`进行配置
```java
@Data
@Component
@ConfigurationProperties(prefix = "spring.cloud.alicloud.sms")
public class SmsComponent {

    private String host;
    private String path;
    private String appcode;

    public void sendCode(String phone, String code) {
        String method = "POST";
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("Authorization", "APPCODE " + appcode);
        Map<String, String> querys = new HashMap<String, String>();
        querys.put("mobile", phone);
        querys.put("param", "code:" + code);
        querys.put("tpl_id", "TP1711063");
        Map<String, String> bodys = new HashMap<String, String>();
        try {
            HttpResponse response = HttpUtils.doPost(host, path, method, headers, querys, bodys);
            System.out.println(response.toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
编写controller,给别的服务提供远程调用发送验证码的接口
```java
@Controller
@RequestMapping(value = "/sms")
public class SmsSendController {

    @Resource
    private SmsComponent smsComponent;

    @ResponseBody
    @GetMapping(value = "/sendCode")
    public R sendCode(@RequestParam("phone") String phone, @RequestParam("code") String code) {
        smsComponent.sendCode(phone, code);
        return R.ok();
    }
}
```
(3) 接口防刷

由于发送验证码的接口暴露,为了防止恶意攻击,我们不能随意让接口被调用:

- 在redis中以`phone-code`为键值对存储电话号码和验证码,并将存储时的时间戳与code拼在一起存储
- 如果调用时以当前`phone`取出的值不为空,且当前时间在存储时间的60s以内,说明60s内该号码已经调用过,返回错误信息
- 60s以后再次调用,需要删除之前存储的`phone-code`
- code存在过期时间,设置为10min,10min内验证该验证码有效
```java
@GetMapping("/sms/sendCode")
@ResponseBody
public R sendCode(@RequestParam("phone") String phone) {
    ValueOperations<String, String> ops = redisTemplate.opsForValue();
    String prePhone = AuthServerConstant.SMS_CODE_CACHE_PREFIX + phone;
    // 60s内重复调用直接返回错误
    String v = ops.get(prePhone);
    if (!StringUtils.isEmpty(v)) {
        long pre = Long.parseLong(v.split("_")[1]);
        if (System.currentTimeMillis() - pre < 60000) {
            return R.error(BizCodeEnum.SMS_CODE_EXCEPTION.getCode(), BizCodeEnum.SMS_CODE_EXCEPTION.getMsg());
        }
    }
    // 删除旧验证码,生成新验证码,以 code_时间戳 的形式存储,10分钟过期
    redisTemplate.delete(prePhone);
    String code = String.valueOf((int) ((Math.random() + 1) * 100000));
    ops.set(prePhone, code + "_" + System.currentTimeMillis(), 10, TimeUnit.MINUTES);
    thirdPartFeignService.sendCode(phone, code);
    return R.ok();
}
```
(4) 注册接口编写

在`gulimall-auth-server`服务中编写注册的主体逻辑:

- 若JSR303校验未通过,则通过`BindingResult`封装错误信息,并重定向至注册页面
- 若通过JSR303校验,则从`redis`中取值判断验证码是否正确,正确的话通过会员服务注册
- 会员服务调用成功则重定向至登录页,否则封装远程服务返回的错误信息返回至注册页面

注:`RedirectAttributes`可以通过session保存信息并在重定向的时候携带过去
```java
@PostMapping("/register")
public String register(@Valid UserRegisterVo registerVo, BindingResult result, RedirectAttributes attributes) {
    Map<String, String> errors = new HashMap<>();
    if (result.hasErrors()) {
        // JSR303校验失败,收集各字段错误信息
        result.getFieldErrors().forEach(item -> errors.put(item.getField(), item.getDefaultMessage()));
        attributes.addFlashAttribute("errors", errors);
        return "redirect:http://auth.gulimall.com/reg.html";
    } else {
        // 校验redis中的验证码(存储格式为 code_时间戳)
        String code = redisTemplate.opsForValue().get(AuthServerConstant.SMS_CODE_CACHE_PREFIX + registerVo.getPhone());
        if (!StringUtils.isEmpty(code) && registerVo.getCode().equals(code.split("_")[0])) {
            // 验证通过,删除验证码,远程调用会员服务注册
            redisTemplate.delete(AuthServerConstant.SMS_CODE_CACHE_PREFIX + registerVo.getPhone());
            R r = memberFeignService.register(registerVo);
            if (r.getCode() == 0) {
                return "redirect:http://auth.gulimall.com/login.html";
            } else {
                String msg = (String) r.get("msg");
                errors.put("msg", msg);
                attributes.addFlashAttribute("errors", errors);
                return "redirect:http://auth.gulimall.com/reg.html";
            }
        } else {
            errors.put("code", "验证码错误");
            attributes.addFlashAttribute("errors", errors);
            return "redirect:http://auth.gulimall.com/reg.html";
        }
    }
}
```
Registration logic in the gulimall-member service:

- Whether the user name or phone number is already taken is checked via an exception mechanism: if either exists, a corresponding custom exception is thrown and the matching error message is returned.
- Otherwise the submitted member information is saved, with the default member level and creation time filled in.
```java
@RequestMapping("/register")
public R register(@RequestBody MemberRegisterVo registerVo) {
    try {
        memberService.register(registerVo);
    } catch (UserExistException userException) {
        return R.error(BizCodeEnum.USER_EXIST_EXCEPTION.getCode(), BizCodeEnum.USER_EXIST_EXCEPTION.getMsg());
    } catch (PhoneNumExistException phoneException) {
        return R.error(BizCodeEnum.PHONE_EXIST_EXCEPTION.getCode(), BizCodeEnum.PHONE_EXIST_EXCEPTION.getMsg());
    }
    return R.ok();
}
```
```java
public void register(MemberRegisterVo registerVo) {
    checkPhoneUnique(registerVo.getPhone());
    checkUserNameUnique(registerVo.getUserName());

    MemberEntity entity = new MemberEntity();
    entity.setUsername(registerVo.getUserName());
    entity.setMobile(registerVo.getPhone());
    entity.setCreateTime(new Date());

    BCryptPasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
    String encodePassword = passwordEncoder.encode(registerVo.getPassword());
    entity.setPassword(encodePassword);

    MemberLevelEntity defaultLevel = memberLevelService.getOne(new QueryWrapper<MemberLevelEntity>().eq("default_status", 1));
    entity.setLevelId(defaultLevel.getId());

    this.save(entity);
}

private void checkUserNameUnique(String userName) {
    Integer count = baseMapper.selectCount(new QueryWrapper<MemberEntity>().eq("username", userName));
    if (count > 0) {
        throw new UserExistException();
    }
}

private void checkPhoneUnique(String phone) {
    Integer count = baseMapper.selectCount(new QueryWrapper<MemberEntity>().eq("mobile", phone));
    if (count > 0) {
        throw new PhoneNumExistException();
    }
}
```
3. Username/password login
The main logic lives in the gulimall-auth-server module:

- Call the member service's login endpoint remotely.
- On success, redirect to the home page.
- On failure, wrap the error message and redirect back to the login page with it.
```java
@RequestMapping("/login")
public String login(UserLoginVo vo, RedirectAttributes attributes) {
    R r = memberFeignService.login(vo);
    if (r.getCode() == 0) {
        return "redirect:http://gulimall.com/";
    } else {
        String msg = (String) r.get("msg");
        Map<String, String> errors = new HashMap<>();
        errors.put("msg", msg);
        attributes.addFlashAttribute("errors", errors);
        return "redirect:http://auth.gulimall.com/login.html";
    }
}
```
Login itself is completed in the gulimall-member module:

- If the database contains a record whose username or mobile number matches the login account and whose password matches, verification passes and the entity found is returned.
- Otherwise null is returned, and the controller responds with "wrong username or password".
```java
@RequestMapping("/login")
public R login(@RequestBody MemberLoginVo loginVo) {
    MemberEntity entity = memberService.login(loginVo);
    if (entity != null) {
        return R.ok();
    } else {
        return R.error(BizCodeEnum.LOGINACCT_PASSWORD_EXCEPTION.getCode(), BizCodeEnum.LOGINACCT_PASSWORD_EXCEPTION.getMsg());
    }
}

@Override
public MemberEntity login(MemberLoginVo loginVo) {
    String loginAccount = loginVo.getLoginAccount();
    MemberEntity entity = this.getOne(new QueryWrapper<MemberEntity>()
            .eq("username", loginAccount).or().eq("mobile", loginAccount));
    if (entity != null) {
        BCryptPasswordEncoder bCryptPasswordEncoder = new BCryptPasswordEncoder();
        boolean matches = bCryptPasswordEncoder.matches(loginVo.getPassword(), entity.getPassword());
        if (matches) {
            entity.setPassword("");
            return entity;
        }
    }
    return null;
}
```
4. Social login
(1) OAuth 2.0
(2) Create an application on the Weibo open platform.
(3) On the login page, direct the user to the authorization page:

```
GET https://api.weibo.com/oauth2/authorize?client_id=YOUR_CLIENT_ID&response_type=code&redirect_uri=YOUR_REGISTERED_REDIRECT_URI
```

- client_id: the app key assigned when the web application was created
- YOUR_REGISTERED_REDIRECT_URI: the redirect URL after authentication (must match the platform's advanced settings)

If the user grants authorization, the page is redirected to YOUR_REGISTERED_REDIRECT_URI/?code=CODE; the code parameter is what we exchange for the access token.
(4) Exchange the code for a token:

```
POST https://api.weibo.com/oauth2/access_token?client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&grant_type=authorization_code&redirect_uri=YOUR_REGISTERED_REDIRECT_URI&code=CODE
```

- client_id: the app key assigned when the web application was created
- client_secret: the app secret assigned when the web application was created
- YOUR_REGISTERED_REDIRECT_URI: the redirect URL after authentication (must match the platform's advanced settings)
- code: the authorization code being exchanged for the token

The response is a JSON object carrying the access token and related fields.
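The original note showed the response in a screenshot that is missing here. Based on the fields the later code reads into SocialUser (access_token, expires_in, uid), an illustrative response looks roughly like this (values are placeholders, not real data):

```json
{
    "access_token": "ACCESS_TOKEN",
    "expires_in": 157679999,
    "uid": "1404376560"
}
```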
(5) Get the user's profile: https://open.weibo.com/wiki/2/users/show — the result is returned as JSON.
(6) Implementation
Authorization endpoint:

- Send the token request via HttpUtils, then hand the token and related information to the member service for social login.
- If fetching the token or the remote call fails, wrap the error message and go back to the login page.
```java
@Controller
public class OauthController {

    @Autowired
    private MemberFeignService memberFeignService;

    @RequestMapping("/oauth2.0/weibo/success")
    public String authorize(String code, RedirectAttributes attributes) throws Exception {
        Map<String, String> query = new HashMap<>();
        query.put("client_id", "2144***074");
        query.put("client_secret", "ff63a0d8d5*****29a19492817316ab");
        query.put("grant_type", "authorization_code");
        query.put("redirect_uri", "http://auth.gulimall.com/oauth2.0/weibo/success");
        query.put("code", code);

        HttpResponse response = HttpUtils.doPost("https://api.weibo.com", "/oauth2/access_token", "post",
                new HashMap<String, String>(), query, new HashMap<String, String>());

        Map<String, String> errors = new HashMap<>();
        if (response.getStatusLine().getStatusCode() == 200) {
            String json = EntityUtils.toString(response.getEntity());
            SocialUser socialUser = JSON.parseObject(json, new TypeReference<SocialUser>() {});
            R login = memberFeignService.login(socialUser);
            if (login.getCode() == 0) {
                String jsonString = JSON.toJSONString(login.get("memberEntity"));
                MemberResponseVo memberResponseVo = JSON.parseObject(jsonString, new TypeReference<MemberResponseVo>() {});
                attributes.addFlashAttribute("user", memberResponseVo);
                return "redirect:http://gulimall.com";
            } else {
                errors.put("msg", "登录失败,请重试");
                attributes.addFlashAttribute("errors", errors);
                return "redirect:http://auth.gulimall.com/login.html";
            }
        } else {
            errors.put("msg", "获得第三方授权失败,请重试");
            attributes.addFlashAttribute("errors", errors);
            return "redirect:http://auth.gulimall.com/login.html";
        }
    }
}
```
Login endpoint:
Social "login" actually covers two flows, registration and login:

- If this social account has never logged in before, use the token to call the open API for the account's profile, register a new member, and return it.
- If it has logged in before, just update the stored token and return the member.
```java
@RequestMapping("/oauth2/login")
public R login(@RequestBody SocialUser socialUser) {
    MemberEntity entity = memberService.login(socialUser);
    if (entity != null) {
        return R.ok().put("memberEntity", entity);
    } else {
        return R.error();
    }
}

@Override
public MemberEntity login(SocialUser socialUser) {
    MemberEntity uid = this.getOne(new QueryWrapper<MemberEntity>().eq("uid", socialUser.getUid()));
    if (uid == null) {
        // First social login: fetch the profile from the open API, then register
        Map<String, String> query = new HashMap<>();
        query.put("access_token", socialUser.getAccess_token());
        query.put("uid", socialUser.getUid());
        String json = null;
        try {
            HttpResponse response = HttpUtils.doGet("https://api.weibo.com", "/2/users/show.json", "get",
                    new HashMap<>(), query);
            json = EntityUtils.toString(response.getEntity());
        } catch (Exception e) {
            e.printStackTrace();
        }
        JSONObject jsonObject = JSON.parseObject(json);
        String name = jsonObject.getString("name");
        String gender = jsonObject.getString("gender");
        String profile_image_url = jsonObject.getString("profile_image_url");

        uid = new MemberEntity();
        MemberLevelEntity defaultLevel = memberLevelService.getOne(new QueryWrapper<MemberLevelEntity>().eq("default_status", 1));
        uid.setLevelId(defaultLevel.getId());
        uid.setNickname(name);
        uid.setGender("m".equals(gender) ? 0 : 1);
        uid.setHeader(profile_image_url);
        uid.setAccessToken(socialUser.getAccess_token());
        uid.setUid(socialUser.getUid());
        uid.setExpiresIn(socialUser.getExpires_in());
        this.save(uid);
    } else {
        // Already registered via this account: refresh the stored token
        uid.setAccessToken(socialUser.getAccess_token());
        uid.setUid(socialUser.getUid());
        uid.setExpiresIn(socialUser.getExpires_in());
        this.updateById(uid);
    }
    return uid;
}
```
5. SpringSession
(1) How sessions work
The jsessionid cookie is like a bank card, and the session stored on the server is like the cash; each request presents the jsessionid to retrieve the stored data.
Problem: by default a session cannot cross domains — it has its own scope.
(2) The session-sharing problem in a distributed system
(3) Solutions
1) Session replication
2) Client-side storage
3) Consistent hashing
4) Unified storage
(4) Integrating SpringSession with Redis
SpringSession is used to change the session's scope and storage.
1) Environment setup
Add the dependencies:
```xml
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```
Edit the configuration:
```yaml
spring:
  redis:
    host: 192.168.56.10
  session:
    store-type: redis
```
Add the annotation to the application class:
```java
@EnableRedisHttpSession
public class GulimallAuthServerApplication {
```
2) Custom configuration
```java
@Configuration
public class GulimallSessionConfig {

    @Bean
    public RedisSerializer<Object> springSessionDefaultRedisSerializer() {
        return new GenericJackson2JsonRedisSerializer();
    }

    @Bean
    public CookieSerializer cookieSerializer() {
        DefaultCookieSerializer serializer = new DefaultCookieSerializer();
        serializer.setCookieName("GULISESSIONID");
        serializer.setDomainName("gulimall.com");
        return serializer;
    }
}
```
(5) SpringSession's core principle: the decorator pattern
Natively, the session is obtained through HttpServletRequest. SpringSession wraps the request and overrides getSession() on the wrapper.
```java
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
        throws ServletException, IOException {
    request.setAttribute(SESSION_REPOSITORY_ATTR, this.sessionRepository);

    SessionRepositoryRequestWrapper wrappedRequest = new SessionRepositoryRequestWrapper(
            request, response, this.servletContext);
    SessionRepositoryResponseWrapper wrappedResponse = new SessionRepositoryResponseWrapper(
            wrappedRequest, response);

    try {
        filterChain.doFilter(wrappedRequest, wrappedResponse);
    } finally {
        wrappedRequest.commitSession();
    }
}
```
Shopping cart
1. Data model analysis
(1) Data storage
The cart is a read-heavy and write-heavy scenario, so a relational database is a poor fit; yet cart data still needs to be persisted. We therefore store it in Redis.
(2) Data structure
A cart consists of cart items. Storing them in a Redis list is a poor choice, because finding one item in a list requires scanning every entry, which wastes a lot of time. To keep lookups fast, we store the cart as a Redis hash keyed by skuId.
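The difference can be illustrated in plain Java (a sketch only: HashMap stands in for a Redis hash, and the Item record is a made-up stand-in for the cart-item VO):

```java
import java.util.List;
import java.util.Map;

// Sketch: why the cart is a hash keyed by skuId rather than a list.
// A Redis hash behaves like the Map below: HGET cart:<user> <skuId> is a
// direct field lookup, while list storage forces deserializing and
// scanning every cart item to find one skuId.
public class CartLookup {
    record Item(long skuId, int count) {}

    // List storage: O(n) scan to find one item
    static Item findInList(List<Item> cart, long skuId) {
        for (Item it : cart) {
            if (it.skuId() == skuId) return it;
        }
        return null;
    }

    // Hash storage: O(1) lookup by field, like HGET
    static Item findInHash(Map<Long, Item> cart, long skuId) {
        return cart.get(skuId);
    }

    public static void main(String[] args) {
        List<Item> list = List.of(new Item(1L, 2), new Item(2L, 1));
        Map<Long, Item> hash = Map.of(1L, new Item(1L, 2), 2L, new Item(2L, 1));
        System.out.println(findInList(list, 2L));
        System.out.println(findInHash(hash, 2L));
    }
}
```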
(3) VO classes
Cart item VO:
```java
public class CartItemVo {
    private Long skuId;
    private Boolean check = true;
    private String title;
    private String image;
    private List<String> skuAttrValues;
    private BigDecimal price;
    private Integer count;
    private BigDecimal totalPrice;

    // The total price is always computed from price * count
    public BigDecimal getTotalPrice() {
        return price.multiply(new BigDecimal(count));
    }

    public void setTotalPrice(BigDecimal totalPrice) {
        this.totalPrice = totalPrice;
    }
}
```
Cart VO:
```java
public class CartVo {
    List<CartItemVo> items;
    private Integer countNum;
    private Integer countType;
    private BigDecimal totalAmount;
    private BigDecimal reduce = new BigDecimal("0.00");

    public List<CartItemVo> getItems() {
        return items;
    }

    public void setItems(List<CartItemVo> items) {
        this.items = items;
    }

    // Total number of items (sum of counts)
    public Integer getCountNum() {
        int count = 0;
        if (items != null && items.size() > 0) {
            for (CartItemVo item : items) {
                count += item.getCount();
            }
        }
        return count;
    }

    public void setCountNum(Integer countNum) {
        this.countNum = countNum;
    }

    // Number of distinct item types
    public Integer getCountType() {
        int count = 0;
        if (items != null && items.size() > 0) {
            for (CartItemVo item : items) {
                count += 1;
            }
        }
        return count;
    }

    public void setCountType(Integer countType) {
        this.countType = countType;
    }

    public BigDecimal getTotalAmount() {
        BigDecimal total = new BigDecimal(0);
        if (items != null && items.size() > 0) {
            for (CartItemVo item : items) {
                // BigDecimal is immutable: add()/subtract() return new values
                total = total.add(item.getTotalPrice());
            }
        }
        return total.subtract(reduce);
    }

    public void setTotalAmount(BigDecimal totalAmount) {
        this.totalAmount = totalAmount;
    }

    public BigDecimal getReduce() {
        return reduce;
    }

    public void setReduce(BigDecimal reduce) {
        this.reduce = reduce;
    }
}
```
2. Identifying the user with ThreadLocal
(1) How the user is identified
Following JD's approach: when the cart is opened, a temporary user gets a cookie named user-key with a one-month expiry. If the user-key cookie is cleared manually, the items in the temporary cart are lost too — user-key is what identifies and keys the temporary cart's data.
(2) Passing the identity with ThreadLocal
Before any cart endpoint runs, an interceptor checks the session to see whether the user is logged in, wraps the identity information accordingly, and puts the user-key in a cookie.
```java
@Configuration
public class GulimallWebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new CartInterceptor()).addPathPatterns("/**");
    }
}

public class CartInterceptor implements HandlerInterceptor {

    public static ThreadLocal<UserInfoTo> threadLocal = new ThreadLocal<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        HttpSession session = request.getSession();
        MemberResponseVo memberResponseVo = (MemberResponseVo) session.getAttribute(AuthServerConstant.LOGIN_USER);
        UserInfoTo userInfoTo = new UserInfoTo();
        if (memberResponseVo != null) {
            userInfoTo.setUserId(memberResponseVo.getId());
        }
        Cookie[] cookies = request.getCookies();
        if (cookies != null) { // may be null on the very first visit
            for (Cookie cookie : cookies) {
                if (cookie.getName().equals(CartConstant.TEMP_USER_COOKIE_NAME)) {
                    userInfoTo.setUserKey(cookie.getValue());
                    userInfoTo.setTempUser(true);
                }
            }
        }
        if (StringUtils.isEmpty(userInfoTo.getUserKey())) {
            String uuid = UUID.randomUUID().toString();
            userInfoTo.setUserKey(uuid);
        }
        threadLocal.set(userInfoTo);
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
                           ModelAndView modelAndView) throws Exception {
        UserInfoTo userInfoTo = threadLocal.get();
        if (!userInfoTo.getTempUser()) {
            // No user-key cookie yet: issue the temp-user cookie
            Cookie cookie = new Cookie(CartConstant.TEMP_USER_COOKIE_NAME, userInfoTo.getUserKey());
            cookie.setDomain("gulimall.com");
            cookie.setMaxAge(CartConstant.TEMP_USER_COOKIE_TIMEOUT);
            response.addCookie(cookie);
        }
    }
}
```
3. Adding an item to the cart

```java
@RequestMapping("/addCartItem")
public String addCartItem(@RequestParam("skuId") Long skuId, @RequestParam("num") Integer num,
                          RedirectAttributes attributes) {
    cartService.addCartItem(skuId, num);
    attributes.addAttribute("skuId", skuId);
    // Redirect so refreshing the success page does not add the item again
    return "redirect:http://cart.gulimall.com/addCartItemSuccess";
}

@RequestMapping("/addCartItemSuccess")
public String addCartItemSuccess(@RequestParam("skuId") Long skuId, Model model) {
    CartItemVo cartItemVo = cartService.getCartItem(skuId);
    model.addAttribute("cartItem", cartItemVo);
    return "success";
}
```
- If the item is already in the cart, just increase its count.
- Otherwise query the information the cart item needs and add a new entry to the cart.
```java
public CartItemVo addCartItem(Long skuId, Integer num) {
    BoundHashOperations<String, Object, Object> ops = getCartItemOps();
    String cartJson = (String) ops.get(skuId.toString());
    if (!StringUtils.isEmpty(cartJson)) {
        // Already in the cart: just bump the count
        CartItemVo cartItemVo = JSON.parseObject(cartJson, CartItemVo.class);
        cartItemVo.setCount(cartItemVo.getCount() + num);
        ops.put(skuId.toString(), JSON.toJSONString(cartItemVo));
        return cartItemVo;
    } else {
        CartItemVo cartItemVo = new CartItemVo();
        // Query sku info and sale attributes in parallel
        CompletableFuture<Void> future1 = CompletableFuture.runAsync(() -> {
            R info = productFeignService.info(skuId);
            SkuInfoVo skuInfo = info.getData("skuInfo", new TypeReference<SkuInfoVo>() {});
            cartItemVo.setCheck(true);
            cartItemVo.setCount(num);
            cartItemVo.setImage(skuInfo.getSkuDefaultImg());
            cartItemVo.setPrice(skuInfo.getPrice());
            cartItemVo.setSkuId(skuId);
            cartItemVo.setTitle(skuInfo.getSkuTitle());
        }, executor);
        CompletableFuture<Void> future2 = CompletableFuture.runAsync(() -> {
            List<String> attrValuesAsString = productFeignService.getSkuSaleAttrValuesAsString(skuId);
            cartItemVo.setSkuAttrValues(attrValuesAsString);
        }, executor);
        try {
            CompletableFuture.allOf(future1, future2).get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        ops.put(skuId.toString(), JSON.toJSONString(cartItemVo));
        return cartItemVo;
    }
}
```
4. Getting the cart

- If the user is not logged in, fetch the cart data directly by user-key.
- Otherwise fetch it by userId, merge the temporary cart (keyed by user-key) into the user's cart, and delete the temporary cart.
```java
@RequestMapping("/cart.html")
public String getCartList(Model model) {
    CartVo cartVo = cartService.getCart();
    model.addAttribute("cart", cartVo);
    return "cartList";
}

@Override
public CartVo getCart() {
    CartVo cartVo = new CartVo();
    UserInfoTo userInfoTo = CartInterceptor.threadLocal.get();
    List<CartItemVo> tempCart = getCartByKey(CartConstant.CART_PREFIX + userInfoTo.getUserKey());
    if (StringUtils.isEmpty(userInfoTo.getUserId())) {
        // Not logged in: return the temporary cart as-is
        cartVo.setItems(tempCart);
    } else {
        // Logged in: merge the temporary cart into the user's cart, then drop it
        List<CartItemVo> userCart = getCartByKey(CartConstant.CART_PREFIX + userInfoTo.getUserId());
        if (tempCart != null && tempCart.size() > 0) {
            for (CartItemVo cartItemVo : tempCart) {
                userCart.add(cartItemVo);
                addCartItem(cartItemVo.getSkuId(), cartItemVo.getCount());
            }
        }
        cartVo.setItems(userCart);
        redisTemplate.delete(CartConstant.CART_PREFIX + userInfoTo.getUserKey());
    }
    return cartVo;
}
```
5. Toggling a cart item's checked state

```java
@RequestMapping("/checkCart")
public String checkCart(@RequestParam("isChecked") Integer isChecked, @RequestParam("skuId") Long skuId) {
    cartService.checkCart(skuId, isChecked);
    return "redirect:http://cart.gulimall.com/cart.html";
}

@Override
public void checkCart(Long skuId, Integer isChecked) {
    BoundHashOperations<String, Object, Object> ops = getCartItemOps();
    String cartJson = (String) ops.get(skuId.toString());
    CartItemVo cartItemVo = JSON.parseObject(cartJson, CartItemVo.class);
    cartItemVo.setCheck(isChecked == 1);
    ops.put(skuId.toString(), JSON.toJSONString(cartItemVo));
}
```
6. Changing an item's quantity

```java
@RequestMapping("/countItem")
public String changeItemCount(@RequestParam("skuId") Long skuId, @RequestParam("num") Integer num) {
    cartService.changeItemCount(skuId, num);
    return "redirect:http://cart.gulimall.com/cart.html";
}

@Override
public void changeItemCount(Long skuId, Integer num) {
    BoundHashOperations<String, Object, Object> ops = getCartItemOps();
    String cartJson = (String) ops.get(skuId.toString());
    CartItemVo cartItemVo = JSON.parseObject(cartJson, CartItemVo.class);
    cartItemVo.setCount(num);
    ops.put(skuId.toString(), JSON.toJSONString(cartItemVo));
}
```
7. Deleting a cart item

```java
@RequestMapping("/deleteItem")
public String deleteItem(@RequestParam("skuId") Long skuId) {
    cartService.deleteItem(skuId);
    return "redirect:http://cart.gulimall.com/cart.html";
}

@Override
public void deleteItem(Long skuId) {
    BoundHashOperations<String, Object, Object> ops = getCartItemOps();
    ops.delete(skuId.toString());
}
```
Message queues
I. Introduction
Message broker specifications:

- JMS (Java Message Service): a JVM-based message-broker specification. ActiveMQ and HornetQ are JMS implementations.
- AMQP (Advanced Message Queuing Protocol): also a message-broker specification, but defined at the wire level, so it is language-agnostic rather than tied to the JVM. RabbitMQ is an AMQP implementation.

Purpose:

- Use messaging middleware to improve asynchronous communication and decoupling between systems.
- Once a producer sends a message, the broker takes over and guarantees delivery to the designated destination.
Use cases:

- Asynchronous processing: user registration and message handling run in parallel, improving response time.
- Decoupling: even if the inventory system is down at order time, placing an order still works — after the order is placed, the order system writes a message to the queue and no longer cares about the follow-up. This decouples the order system from the inventory system.
- Traffic shaping (peak shaving): incoming requests are first written to the message queue; if the queue exceeds its maximum length, further requests are rejected or redirected to an error page. The seckill business then processes the requests in the queue at its own pace.
II. RabbitMQ
RabbitMQ is an open-source implementation of AMQP (Advanced Message Queuing Protocol) written in Erlang.
1. Core concepts

- Message: a message is anonymous; it consists of a header and a body. The header holds attributes such as routing-key, priority (relative to other messages), and delivery-mode (whether the message may need persistent storage).
- Publisher: the producer, a client application that publishes messages to an exchange.
- Exchange: routes producer messages to queues on the server. Types are direct (the default), fanout, topic, and headers, each with a different forwarding strategy.
- Queue: holds messages until a consumer retrieves them.
- Binding: the association between an exchange and a queue (with a binding key) that drives routing.
- Connection: the network connection between the client and the broker.
- Consumer: a client application that retrieves messages from a queue.
- Virtual Host: a set of exchanges, queues, and related objects. The vhost is the basis of AMQP namespacing and must be specified when connecting; RabbitMQ's default vhost is /.
- Broker: the message-queue server entity itself.
2. How it works
Message routing: AMQP adds the Exchange and Binding roles; the Binding decides which queue an exchange's messages are sent to.
Exchange types:

- direct: point-to-point; if the message's routing key equals a Binding's binding key, the exchange delivers the message to the bound queue.
- fanout: broadcast; every message sent to a fanout exchange is delivered to all bound queues.
- topic: matches the routing key against a pattern the queue is bound with. Both the routing key and the binding key are split into dot-separated words; in the pattern, # matches zero or more words and * matches exactly one word.
III. Installing RabbitMQ with Docker

```shell
docker run -d --name rabbitmq \
  -p 5671:5671 -p 5672:5672 \
  -p 4369:4369 -p 25672:25672 \
  -p 15671:15671 -p 15672:15672 \
  rabbitmq:management
```

- 4369, 25672: Erlang discovery and clustering ports
- 5672, 5671: AMQP ports
- 15672: web management console
- 61613, 61614: STOMP ports
- 1883, 8883: MQTT ports

https://www.rabbitmq.com/networking.html
IV. RabbitMQ in Spring Boot
1. Environment
Install and run RabbitMQ in Docker:

```shell
# 5672 is the service port, 15672 the web console port
docker run -d -p 5672:5672 -p 15672:15672 38e57f281891
```
To change the account and password:

```shell
# 1. Enter the RabbitMQ container
docker exec -it rabbitmq01 bash
# 2. List users
rabbitmqctl list_users
# 3. Change a password
rabbitmqctl change_password userName newPassword
# 4. Add a new account if you don't want to keep guest
rabbitmqctl add_user userName newPassword
# 5. Delete guest if you prefer
rabbitmqctl delete_user guest
# 6. Don't forget to grant the new account administrator rights
rabbitmqctl set_user_tags userName administrator
```
Add the dependencies:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
```
Configuration:

```properties
spring.rabbitmq.host=192.168.138.*
```
2. RabbitMQ client API
RabbitAutoConfiguration has an inner class RabbitTemplateConfiguration, which registers RabbitTemplate and AmqpAdmin in the container. Inject both in a test class:

```java
@Autowired
private RabbitTemplate rabbitTemplate;

@Autowired
private AmqpAdmin amqpAdmin;
```
RabbitTemplate is the message-sending component; it can both send and receive messages:

```java
// Send a plain string
rabbitTemplate.convertAndSend("amq.direct", "ustc", "aaaa");

// Send an object (serialized by the configured message converter)
Book book = new Book();
book.setName("西游记");
book.setPrice(23.2f);
rabbitTemplate.convertAndSend("amq.direct", "ustc", book);

// Receive and convert one message from the queue
Object o = rabbitTemplate.receiveAndConvert("ustc");
System.out.println(o.getClass());
System.out.println(o);
```
The default converter is SimpleMessageConverter, which serializes objects with JDK serialization. To store objects as JSON instead, register a custom message converter:

```java
@Configuration
public class AmqpConfig {
    @Bean
    public MessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
```
AmqpAdmin can declare exchanges, queues, and bindings:

```java
amqpAdmin.declareExchange(new DirectExchange("admin.direct"));
amqpAdmin.declareQueue(new Queue("admin.test"));
amqpAdmin.declareBinding(new Binding("admin.test", Binding.DestinationType.QUEUE, "admin.direct", "admin.test", null));
```
Listening for messages:

- Annotate a callback method with @RabbitListener and set its queues attribute to register the queues to listen on; whenever a registered queue receives a message, the method is invoked.
- A message can be received as a Message or as the object it carries; if the parameter type is Object, what actually arrives is still a Message.
- If you know the concrete type of the incoming message, declare a parameter of that type and it will be bound directly.
```java
@Service
public class BookService {

    @RabbitListener(queues = {"admin.test"})
    public void receive1(Book book) {
        System.out.println("收到消息:" + book);
    }

    @RabbitListener(queues = {"admin.test"})
    public void receive1(Object object) {
        System.out.println("收到消息:" + object.getClass());
    }

    @RabbitListener(queues = {"admin.test"})
    public void receive2(Message message) {
        System.out.println("收到消息" + message.getHeaders() + "---" + message.getPayload());
    }

    @RabbitListener(queues = {"admin.test"})
    public void receive3(Message message, Book book) {
        System.out.println("3收到消息:book:" + book.getClass() + "\n" + "message:" + message.getClass());
    }
}
```
If a queue carries different kinds of objects, use @RabbitHandler methods to receive each type separately:

```java
@RabbitListener(queues = {"admin.test"})
@Service
public class BookService {

    @RabbitHandler
    public void receive4(Book book) {
        System.out.println("4收到消息:book:" + book);
    }

    @RabbitHandler
    public void receive5(Student student) {
        System.out.println("5收到消息:student:" + student);
    }
}
```
3. Reliable message delivery
To guarantee that messages are not lost, transactional messaging can be used, but it cuts throughput by roughly 250x; hence the confirmation mechanisms:

- publisher confirmCallback: confirm mode (the broker received the message)
- publisher returnCallback: return mode, triggered when the message could not be routed to a queue
- consumer ack (acknowledgement)

(1) confirmCallback
Set spring.rabbitmq.publisher-confirms=true (equivalently, enable PublisherConfirms(true) when the connectionFactory is created) to turn on the confirm callback.

- CorrelationData uniquely identifies the message; it is constructed when the message is sent.
- confirmCallback fires as soon as the broker receives the message; in cluster mode, all brokers must receive it before the callback fires.
- "Received by the broker" only means the message reached the server; it does not guarantee delivery to the target queue — that is what returnCallback is for.
```java
@Configuration
public class AmqpConfig {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Bean
    public MessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @PostConstruct
    public void initRabbitTemplate() {
        rabbitTemplate.setConfirmCallback(new RabbitTemplate.ConfirmCallback() {
            @Override
            public void confirm(CorrelationData correlationData, boolean ack, String cause) {
                System.out.println("confirm CorrelationData:" + correlationData + "===>ack:" + ack + "====>cause:" + cause);
            }
        });
    }
}
```
(2) returnCallback
Enable it with:

```properties
spring.rabbitmq.publisher-returns=true
spring.rabbitmq.template.mandatory=true
```
```java
rabbitTemplate.setReturnCallback(new RabbitTemplate.ReturnCallback() {
    @Override
    public void returnedMessage(Message message, int replyCode, String replyText, String exchange, String routingKey) {
        System.out.println("return callback...message:" + message + "===>replycode:" + replyCode
                + "===>replyText:" + replyText + "===>exchange:" + exchange + "===>routingKey:" + routingKey);
    }
});
```
```
return callback...message:(Body:'{"name":"水浒传","price":0.0}' MessageProperties [headers={spring_returned_message_correlation=8f5d080c-35c8-42db-ac3d-0bf7509906aa, __TypeId__=cn.edu.ustc.springboot.bean.Book}, contentType=application/json, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, deliveryTag=0])===>replycode:312===>replyText:NO_ROUTE===>exchange:admin.direct===>routingKey:admin.test0
confirm CorrelationData:CorrelationData [id=8f5d080c-35c8-42db-ac3d-0bf7509906aa]===>ack:true====>cause:null
return callback...message:(Body:'{"name":"mhs","age":1}' MessageProperties [headers={spring_returned_message_correlation=2961a45c-19ee-4b94-8281-03e00fbdceea, __TypeId__=cn.edu.ustc.springboot.bean.Student}, contentType=application/json, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, deliveryTag=0])===>replycode:312===>replyText:NO_ROUTE===>exchange:admin.direct===>routingKey:admin.test11
```
(3) Consumer ack
When a consumer receives and successfully processes a message, it can reply with an ack to the broker:

- basic.ack: positive acknowledgement; the broker removes the message.
- basic.nack: negative acknowledgement; can tell the broker whether to discard the message, and can batch.
- basic.reject: negative acknowledgement; same, but cannot batch.

By default acking is automatic: as soon as the consumer receives a message it is removed from the broker's queue. If a queue has no consumer, messages remain stored until one consumes them. When we cannot be sure a message was fully and successfully processed, we can switch to manual ack mode:

- Processed successfully: call ack(); the broker removes the message and delivers the next one.
- Processing failed: nack()/reject(); the message is redelivered to another consumer, or acked after local error handling.
- If neither ack nor nack is ever called, the broker considers the message in flight and will not deliver it to anyone else; if the client then disconnects, the message is not removed and will be redelivered to someone else.

With the default auto-ack, if the server crashes halfway through consuming a batch, the remaining messages are treated as acknowledged and are lost. Hence manual acknowledgement:
```properties
spring.rabbitmq.listener.simple.acknowledge-mode=manual
```
Until the broker is explicitly acked, the message is not considered delivered; acknowledge receipt with basicAck():
```java
@RabbitHandler
public void receive4(Book book, Message message, Channel channel) throws IOException {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("4收到消息:book:" + book);
    long deliveryTag = message.getMessageProperties().getDeliveryTag();
    System.out.println(deliveryTag);
    channel.basicAck(deliveryTag, false);
}
```
basicNack() and basicReject() can likewise be used to refuse a message.
Order service
1. Order flow
Order created -> order paid -> seller ships -> receipt confirmed -> transaction complete
2. Login interception for orders
The order system necessarily involves user information, so every request entering it must come from a logged-in user; an interceptor blocks order requests that are not logged in:
```java
public class LoginInterceptor implements HandlerInterceptor {

    public static ThreadLocal<MemberResponseVo> loginUser = new ThreadLocal<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        HttpSession session = request.getSession();
        MemberResponseVo memberResponseVo = (MemberResponseVo) session.getAttribute(AuthServerConstant.LOGIN_USER);
        if (memberResponseVo != null) {
            loginUser.set(memberResponseVo);
            return true;
        } else {
            session.setAttribute("msg", "请先登录");
            response.sendRedirect("http://auth.gulimall.com/login.html");
            return false;
        }
    }
}

@Configuration
public class GulimallWebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new LoginInterceptor()).addPathPatterns("/**");
    }
}
```
3. Order confirmation page
(1) Model
The data model carried when rendering the confirmation page:
```java
public class OrderConfirmVo {

    @Getter @Setter
    private List<MemberAddressVo> memberAddressVos;

    @Getter @Setter
    private List<OrderItemVo> items;

    @Getter @Setter
    private Integer integration;

    @Getter @Setter
    private String orderToken;

    @Getter @Setter
    Map<Long, Boolean> stocks;

    public Integer getCount() {
        Integer count = 0;
        if (items != null && items.size() > 0) {
            for (OrderItemVo item : items) {
                count += item.getCount();
            }
        }
        return count;
    }

    public BigDecimal getTotal() {
        BigDecimal totalNum = BigDecimal.ZERO;
        if (items != null && items.size() > 0) {
            for (OrderItemVo item : items) {
                BigDecimal itemPrice = item.getPrice().multiply(new BigDecimal(item.getCount().toString()));
                totalNum = totalNum.add(itemPrice);
            }
        }
        return totalNum;
    }

    public BigDecimal getPayPrice() {
        return getTotal();
    }
}
```
(2) Fetching the data

- The cart items, stock, and shipping addresses all come from remote services; calling them serially would waste a lot of time, so we orchestrate the calls asynchronously with CompletableFuture.
- Because of latency, the submit button may be clicked more than once. To prevent duplicate submission, when returning the confirmation page we generate a random token in Redis with a 30-minute expiry; the submitted order carries this token, which is verified on the order-submission handler.
```java
@RequestMapping("/toTrade")
public String toTrade(Model model) {
    OrderConfirmVo confirmVo = orderService.confirmOrder();
    model.addAttribute("confirmOrder", confirmVo);
    return "confirm";
}

@Override
public OrderConfirmVo confirmOrder() {
    MemberResponseVo memberResponseVo = LoginInterceptor.loginUser.get();
    OrderConfirmVo confirmVo = new OrderConfirmVo();
    // Capture the current request attributes so async threads can see them
    RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();

    CompletableFuture<Void> itemAndStockFuture = CompletableFuture.supplyAsync(() -> {
        RequestContextHolder.setRequestAttributes(requestAttributes);
        List<OrderItemVo> checkedItems = cartFeignService.getCheckedItems();
        confirmVo.setItems(checkedItems);
        return checkedItems;
    }, executor).thenAcceptAsync((items) -> {
        List<Long> skuIds = items.stream().map(OrderItemVo::getSkuId).collect(Collectors.toList());
        Map<Long, Boolean> hasStockMap = wareFeignService.getSkuHasStocks(skuIds).stream()
                .collect(Collectors.toMap(SkuHasStockVo::getSkuId, SkuHasStockVo::getHasStock));
        confirmVo.setStocks(hasStockMap);
    }, executor);

    CompletableFuture<Void> addressFuture = CompletableFuture.runAsync(() -> {
        List<MemberAddressVo> addressByUserId = memberFeignService.getAddressByUserId(memberResponseVo.getId());
        confirmVo.setMemberAddressVos(addressByUserId);
    }, executor);

    confirmVo.setIntegration(memberResponseVo.getIntegration());

    // Anti-resubmit token, verified when the order is submitted
    String token = UUID.randomUUID().toString().replace("-", "");
    redisTemplate.opsForValue().set(OrderConstant.USER_ORDER_TOKEN_PREFIX + memberResponseVo.getId(), token, 30, TimeUnit.MINUTES);
    confirmVo.setOrderToken(token);

    try {
        CompletableFuture.allOf(itemAndStockFuture, addressFuture).get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
    return confirmVo;
}
```
(3) Feign loses request headers on remote calls
The Feign request carries no JSESSIONID cookie, so the downstream service cannot access the session data; the cart service thinks nobody is logged in and no user information can be obtained.
```java
Request targetRequest(RequestTemplate template) {
    for (RequestInterceptor interceptor : requestInterceptors) {
        interceptor.apply(template);
    }
    return target.apply(template);
}
```
During a Feign call, every RequestInterceptor in the container is applied to the RequestTemplate, so we can register a custom RequestInterceptor that copies the Cookie header onto the outgoing request.
```java
@Configuration
public class GuliFeignConfig {

    @Bean
    public RequestInterceptor requestInterceptor() {
        return new RequestInterceptor() {
            @Override
            public void apply(RequestTemplate template) {
                ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
                if (requestAttributes != null) {
                    HttpServletRequest request = requestAttributes.getRequest();
                    if (request != null) {
                        // Copy the original Cookie header onto the Feign request
                        String cookie = request.getHeader("Cookie");
                        template.header("Cookie", cookie);
                    }
                }
            }
        };
    }
}
```
`RequestContextHolder` is Spring MVC's context for sharing `request` data; it is backed by `ThreadLocal`. After the `RequestInterceptor` runs, the outgoing request carries the original `Cookie` header.
(4) Feign loses the context in async calls

Because `RequestContextHolder` shares data via `ThreadLocal`, a thread spawned for an async task cannot see the original request, so the cookie cannot be propagated either.

In this case we must copy the original request's `RequestAttributes` into `RequestContextHolder` at the start of each async task.
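The loss and the fix can be reproduced with plain `ThreadLocal` and a worker thread, independent of Spring — a minimal sketch, where `ThreadLocalLossDemo`, `CONTEXT`, and the cookie value are all illustrative stand-ins for `RequestContextHolder` and the real request:

```java
// Demonstrates why a ThreadLocal-backed context is invisible to a new thread,
// and how re-installing a captured value (as the order service does with
// RequestContextHolder.setRequestAttributes) restores it.
public class ThreadLocalLossDemo {
    // stand-in for RequestContextHolder's ThreadLocal-backed storage
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static String readInAsyncTask(boolean propagate) {
        CONTEXT.set("JSESSIONID=abc123");   // set on the "request" thread
        String captured = CONTEXT.get();    // capture before going async
        final String[] seen = new String[1];
        Thread worker = new Thread(() -> {
            if (propagate) {
                CONTEXT.set(captured);      // re-install inside the async task
            }
            seen[0] = CONTEXT.get();        // what the async thread actually sees
        });
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return seen[0];
    }
}
```

Without the re-install the worker thread sees `null`; with it, the cookie value is visible again, which is exactly what setting `RequestAttributes` at the start of each `CompletableFuture` task achieves.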
(5) Fetching the shipping fee and receiver address — data wrapper
```java
@Data
public class FareVo {
    private MemberAddressVo address;
    private BigDecimal fare;
}
```
The page passes the id of the selected address with the request.
```java
@RequestMapping("/fare/{addrId}")
public FareVo getFare(@PathVariable("addrId") Long addrId) {
    return wareInfoService.getFare(addrId);
}

@Override
public FareVo getFare(Long addrId) {
    FareVo fareVo = new FareVo();
    R info = memberFeignService.info(addrId);
    if (info.getCode() == 0) {
        MemberAddressVo address = info.getData("memberReceiveAddress", new TypeReference<MemberAddressVo>() {
        });
        fareVo.setAddress(address);
        // mock fare: use the last two digits of the phone number
        String phone = address.getPhone();
        String fare = phone.substring(phone.length() - 2, phone.length());
        fareVo.setFare(new BigDecimal(fare));
    }
    return fareVo;
}
```
4. Order submission — (1) Extracting the models; data submitted by the page:
```java
@Data
public class OrderSubmitVo {
    private Long addrId;
    private Integer payType;
    private String orderToken;
    private BigDecimal payPrice;
    private String remarks;
}
```
Data carried when forwarding to the payment page on success:
```java
@Data
public class SubmitOrderResponseVo {
    private OrderEntity order;
    private Integer code;
}
```
(2) Submitting the order

If the order is submitted successfully, forward to the payment page with the returned data; if it fails, redirect to the confirm page with the error message.
```java
@RequestMapping("/submitOrder")
public String submitOrder(OrderSubmitVo submitVo, Model model, RedirectAttributes attributes) {
    try {
        SubmitOrderResponseVo responseVo = orderService.submitOrder(submitVo);
        Integer code = responseVo.getCode();
        if (code == 0) {
            model.addAttribute("order", responseVo.getOrder());
            return "pay";
        } else {
            String msg = "下单失败;";
            switch (code) {
                case 1: msg += "防重令牌校验失败"; break;
                case 2: msg += "商品价格发生变化"; break;
            }
            attributes.addFlashAttribute("msg", msg);
            return "redirect:http://order.gulimall.com/toTrade";
        }
    } catch (Exception e) {
        if (e instanceof NoStockException) {
            String msg = "下单失败，商品无库存";
            attributes.addFlashAttribute("msg", msg);
        }
        return "redirect:http://order.gulimall.com/toTrade";
    }
}
```
```java
@Transactional
@Override
public SubmitOrderResponseVo submitOrder(OrderSubmitVo submitVo) {
    SubmitOrderResponseVo responseVo = new SubmitOrderResponseVo();
    responseVo.setCode(0);
    MemberResponseVo memberResponseVo = LoginInterceptor.loginUser.get();
    // 1. verify the idempotency token (get-compare-delete in one atomic Lua script)
    String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
    Long execute = redisTemplate.execute(new DefaultRedisScript<>(script, Long.class),
            Arrays.asList(OrderConstant.USER_ORDER_TOKEN_PREFIX + memberResponseVo.getId()),
            submitVo.getOrderToken());
    if (execute == 0L) {
        // token check failed
        responseVo.setCode(1);
        return responseVo;
    } else {
        // 2. create the order and order items
        OrderCreateTo order = createOrderTo(memberResponseVo, submitVo);
        // 3. verify the price
        BigDecimal payAmount = order.getOrder().getPayAmount();
        BigDecimal payPrice = submitVo.getPayPrice();
        if (Math.abs(payAmount.subtract(payPrice).doubleValue()) < 0.01) {
            // 4. save the order
            saveOrder(order);
            // 5. lock the stock
            List<OrderItemVo> orderItemVos = order.getOrderItems().stream().map((item) -> {
                OrderItemVo orderItemVo = new OrderItemVo();
                orderItemVo.setSkuId(item.getSkuId());
                orderItemVo.setCount(item.getSkuQuantity());
                return orderItemVo;
            }).collect(Collectors.toList());
            R r = wareFeignService.orderLockStock(orderItemVos);
            if (r.getCode() == 0) {
                responseVo.setOrder(order.getOrder());
                responseVo.setCode(0);
                return responseVo;
            } else {
                String msg = (String) r.get("msg");
                throw new NoStockException(msg);
            }
        } else {
            // price changed
            responseVo.setCode(2);
            return responseVo;
        }
    }
}
```
1) Verifying the idempotency token — a failure between reading the token, comparing it, and deleting it would break the check, so the get-compare-delete must be executed atomically with a Lua script.
```java
MemberResponseVo memberResponseVo = LoginInterceptor.loginUser.get();
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
Long execute = redisTemplate.execute(new DefaultRedisScript<>(script, Long.class),
        Arrays.asList(OrderConstant.USER_ORDER_TOKEN_PREFIX + memberResponseVo.getId()),
        submitVo.getOrderToken());
if (execute == 0L) {
    responseVo.setCode(1);
    return responseVo;
}
```
2) Creating the order and order items — extracted model:
```java
@Data
public class OrderCreateTo {
    private OrderEntity order;
    private List<OrderItemEntity> orderItems;
    private BigDecimal payPrice;
    private BigDecimal fare;
}
```
Create the order and order items:
```java
OrderCreateTo order = createOrderTo(memberResponseVo, submitVo);

private OrderCreateTo createOrderTo(MemberResponseVo memberResponseVo, OrderSubmitVo submitVo) {
    String orderSn = IdWorker.getTimeId();
    OrderEntity entity = buildOrder(memberResponseVo, submitVo, orderSn);
    List<OrderItemEntity> orderItemEntities = buildOrderItems(orderSn);
    compute(entity, orderItemEntities);
    OrderCreateTo createTo = new OrderCreateTo();
    createTo.setOrder(entity);
    createTo.setOrderItems(orderItemEntities);
    return createTo;
}
```
Build the order:
```java
private OrderEntity buildOrder(MemberResponseVo memberResponseVo, OrderSubmitVo submitVo, String orderSn) {
    OrderEntity orderEntity = new OrderEntity();
    orderEntity.setOrderSn(orderSn);
    orderEntity.setMemberId(memberResponseVo.getId());
    orderEntity.setMemberUsername(memberResponseVo.getUsername());
    // fetch the fare and receiver address from the warehouse service
    FareVo fareVo = wareFeignService.getFare(submitVo.getAddrId());
    BigDecimal fare = fareVo.getFare();
    orderEntity.setFreightAmount(fare);
    MemberAddressVo address = fareVo.getAddress();
    orderEntity.setReceiverName(address.getName());
    orderEntity.setReceiverPhone(address.getPhone());
    orderEntity.setReceiverPostCode(address.getPostCode());
    orderEntity.setReceiverProvince(address.getProvince());
    orderEntity.setReceiverCity(address.getCity());
    orderEntity.setReceiverRegion(address.getRegion());
    orderEntity.setReceiverDetailAddress(address.getDetailAddress());
    orderEntity.setStatus(OrderStatusEnum.CREATE_NEW.getCode());
    orderEntity.setConfirmStatus(0);
    orderEntity.setAutoConfirmDay(7);
    return orderEntity;
}
```
Build the order items:
```java
private OrderItemEntity buildOrderItem(OrderItemVo item) {
    OrderItemEntity orderItemEntity = new OrderItemEntity();
    Long skuId = item.getSkuId();
    // sku info
    orderItemEntity.setSkuId(skuId);
    orderItemEntity.setSkuName(item.getTitle());
    orderItemEntity.setSkuAttrsVals(StringUtils.collectionToDelimitedString(item.getSkuAttrValues(), ";"));
    orderItemEntity.setSkuPic(item.getImage());
    orderItemEntity.setSkuPrice(item.getPrice());
    orderItemEntity.setSkuQuantity(item.getCount());
    // spu info, fetched from the product service
    R r = productFeignService.getSpuBySkuId(skuId);
    if (r.getCode() == 0) {
        SpuInfoTo spuInfo = r.getData(new TypeReference<SpuInfoTo>() {
        });
        orderItemEntity.setSpuId(spuInfo.getId());
        orderItemEntity.setSpuName(spuInfo.getSpuName());
        orderItemEntity.setSpuBrand(spuInfo.getBrandName());
        orderItemEntity.setCategoryId(spuInfo.getCatalogId());
    }
    // growth and integration points
    orderItemEntity.setGiftGrowth(item.getPrice().multiply(new BigDecimal(item.getCount())).intValue());
    orderItemEntity.setGiftIntegration(item.getPrice().multiply(new BigDecimal(item.getCount())).intValue());
    // per-item price breakdown
    orderItemEntity.setPromotionAmount(BigDecimal.ZERO);
    orderItemEntity.setCouponAmount(BigDecimal.ZERO);
    orderItemEntity.setIntegrationAmount(BigDecimal.ZERO);
    BigDecimal origin = orderItemEntity.getSkuPrice().multiply(new BigDecimal(orderItemEntity.getSkuQuantity()));
    BigDecimal realPrice = origin.subtract(orderItemEntity.getPromotionAmount())
            .subtract(orderItemEntity.getCouponAmount())
            .subtract(orderItemEntity.getIntegrationAmount());
    orderItemEntity.setRealAmount(realPrice);
    return orderItemEntity;
}
```
Compute the order price:
```java
private void compute(OrderEntity entity, List<OrderItemEntity> orderItemEntities) {
    BigDecimal total = BigDecimal.ZERO;
    BigDecimal promotion = new BigDecimal("0.0");
    BigDecimal integration = new BigDecimal("0.0");
    BigDecimal coupon = new BigDecimal("0.0");
    Integer integrationTotal = 0;
    Integer growthTotal = 0;
    for (OrderItemEntity orderItemEntity : orderItemEntities) {
        total = total.add(orderItemEntity.getRealAmount());
        promotion = promotion.add(orderItemEntity.getPromotionAmount());
        integration = integration.add(orderItemEntity.getIntegrationAmount());
        coupon = coupon.add(orderItemEntity.getCouponAmount());
        integrationTotal += orderItemEntity.getGiftIntegration();
        growthTotal += orderItemEntity.getGiftGrowth();
    }
    entity.setTotalAmount(total);
    entity.setPromotionAmount(promotion);
    entity.setIntegrationAmount(integration);
    entity.setCouponAmount(coupon);
    entity.setIntegration(integrationTotal);
    entity.setGrowth(growthTotal);
    entity.setPayAmount(entity.getFreightAmount().add(total));
    entity.setDeleteStatus(0);
}
```
3) Price verification — compare the price submitted by the page with the price computed on the server; if they differ, tell the user that the product price has changed.
```java
BigDecimal payAmount = order.getOrder().getPayAmount();
BigDecimal payPrice = submitVo.getPayPrice();
if (Math.abs(payAmount.subtract(payPrice).doubleValue()) < 0.01) {
    // prices match: continue with the submission
} else {
    responseVo.setCode(2);
    return responseVo;
}
```
4) Saving the order

```java
private void saveOrder(OrderCreateTo orderCreateTo) {
    OrderEntity order = orderCreateTo.getOrder();
    order.setCreateTime(new Date());
    order.setModifyTime(new Date());
    this.save(order);
    orderItemService.saveBatch(orderCreateTo.getOrderItems());
}
```
5) Locking the stock

```java
List<OrderItemVo> orderItemVos = order.getOrderItems().stream().map((item) -> {
    OrderItemVo orderItemVo = new OrderItemVo();
    orderItemVo.setSkuId(item.getSkuId());
    orderItemVo.setCount(item.getSkuQuantity());
    return orderItemVo;
}).collect(Collectors.toList());
R r = wareFeignService.orderLockStock(orderItemVos);
if (r.getCode() == 0) {
    responseVo.setOrder(order.getOrder());
    responseVo.setCode(0);
    return responseVo;
} else {
    String msg = (String) r.get("msg");
    throw new NoStockException(msg);
}
```
Find all warehouses whose stock covers the requested quantity, then iterate over them, trying to lock the stock in each one; stop as soon as one lock succeeds.
```java
@RequestMapping("/lock/order")
public R orderLockStock(@RequestBody List<OrderItemVo> itemVos) {
    try {
        Boolean lock = wareSkuService.orderLockStock(itemVos);
        return R.ok();
    } catch (NoStockException e) {
        return R.error(BizCodeEnum.NO_STOCK_EXCEPTION.getCode(), BizCodeEnum.NO_STOCK_EXCEPTION.getMsg());
    }
}

@Transactional
@Override
public Boolean orderLockStock(List<OrderItemVo> itemVos) {
    // find, for every sku, the warehouses that have enough stock
    List<SkuLockVo> lockVos = itemVos.stream().map((item) -> {
        SkuLockVo skuLockVo = new SkuLockVo();
        skuLockVo.setSkuId(item.getSkuId());
        skuLockVo.setNum(item.getCount());
        List<Long> wareIds = baseMapper.listWareIdsHasStock(item.getSkuId(), item.getCount());
        skuLockVo.setWareIds(wareIds);
        return skuLockVo;
    }).collect(Collectors.toList());
    // try each candidate warehouse until one lock succeeds
    for (SkuLockVo lockVo : lockVos) {
        boolean lock = true;
        Long skuId = lockVo.getSkuId();
        List<Long> wareIds = lockVo.getWareIds();
        if (wareIds == null || wareIds.size() == 0) {
            throw new NoStockException(skuId);
        } else {
            for (Long wareId : wareIds) {
                Long count = baseMapper.lockWareSku(skuId, lockVo.getNum(), wareId);
                if (count == 0) {
                    lock = false;
                } else {
                    lock = true;
                    break;
                }
            }
        }
        if (!lock) throw new NoStockException(skuId);
    }
    return true;
}
```
Transaction rollback is driven by exceptions here: if locking the stock fails, a `NoStockException` is thrown, and both the order service and the warehouse service roll back.
(3) Distributed transactions

Consensus algorithms such as Raft and Paxos achieve consistency in distributed systems (see http://thesecretlivesofdata.com/raft/ for a Raft walkthrough).

In a distributed setting, the local transactions of different services can become inconsistent, for example:

- a remote call spuriously "fails" (the remote commit actually succeeded, but the response was lost);
- a later step throws an exception after the remote service has already committed.

Solutions to distributed transactions:
1. The 2PC pattern

Database-supported 2PC (2-phase commit), also called XA Transactions. MySQL supports it from 5.5, SQL Server from 2005, Oracle from 7. XA is a two-phase commit protocol:

- Phase 1: the transaction coordinator asks every participating database to precommit the operation and report whether it can commit.
- Phase 2: the coordinator asks every database to commit. If any database vetoes the commit, all databases are asked to roll back their part of the transaction.

The XA protocol is simple, and once a commercial database implements it, the cost of using distributed transactions is low. However, XA performance is poor: order-placement paths in particular carry high concurrency that XA cannot satisfy. XA support is good in commercial databases but weak in MySQL, whose XA implementation does not log the prepare phase, so a primary/standby switch can leave the primary and the standby inconsistent. Many NoSQL stores do not support XA at all, which makes its applicability very narrow. There is also 3PC, which adds a timeout mechanism (both the coordinator and the participants take a fallback action if no response arrives in time).
2. Flexible transactions: TCC

Rigid transactions follow ACID and are strongly consistent. Flexible transactions follow the BASE theory and are eventually consistent: unlike rigid transactions, they allow the data on different nodes to be inconsistent for a bounded time, as long as it converges.

- Phase 1 prepare: run custom "Try" logic.
- Phase 2 commit: run custom "Confirm" logic.
- Phase 2 rollback: run custom "Cancel" logic.

The TCC pattern means bringing these custom branch transactions under the management of a global transaction.
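The Try/Confirm/Cancel contract can be sketched in a few lines — a hypothetical coordinator, not the Seata implementation; `TccBranch` and `TccCoordinator` are illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// Each branch supplies custom Try (reserve), Confirm (commit) and Cancel
// (rollback) logic; the coordinator confirms all branches only if every
// Try succeeded, otherwise it cancels the ones already reserved.
interface TccBranch {
    boolean tryReserve();   // phase 1: reserve resources
    void confirm();         // phase 2: make the reservation permanent
    void cancel();          // phase 2: release the reservation
}

public class TccCoordinator {
    public static boolean runGlobal(List<TccBranch> branches) {
        List<TccBranch> reserved = new ArrayList<>();
        for (TccBranch b : branches) {
            if (b.tryReserve()) {
                reserved.add(b);
            } else {
                // one branch failed to reserve: cancel the ones that succeeded
                reserved.forEach(TccBranch::cancel);
                return false;
            }
        }
        reserved.forEach(TccBranch::confirm);
        return true;
    }
}
```

The key property is that Try only *reserves* (e.g. moves money to a frozen field), so Cancel can always undo it without touching data other transactions may have seen.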
3. Flexible transactions: best-effort notification

Notify on a schedule without guaranteeing delivery, but expose a query interface for reconciliation. This scheme is mainly used when talking to third-party systems, e.g. the payment-result callbacks of WeChat Pay or Alipay. It is usually implemented with MQ: send the HTTP notification via MQ with a maximum retry count, and stop notifying once the limit is reached. Examples: bank notifications, merchant notifications on trading platforms (repeated notifications, query-based reconciliation, settlement files), and Alipay's asynchronous payment-success callback.
4. Flexible transactions: reliable message + eventual consistency (asynchronous assurance)

Implementation: before committing its local transaction, the business service asks the messaging service to *record* the message (without actually sending it); after the local transaction commits, it confirms the send, and only then does the messaging service really deliver the message. To prevent message loss:

1. Use proper confirmation mechanisms (publisher confirms, consumer manual ack).
2. Log every sent message in the database, and periodically re-send the ones that failed.
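The record-then-confirm-then-resend loop above can be sketched with an in-memory "outbox table" standing in for the database and a list standing in for the broker — all names here (`ReliableMessageService`, `brokerUp`, etc.) are illustrative, not from the project:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the reliable-message idea: log every message as PENDING before
// sending, mark it SENT only when the broker accepts it, and let a scheduled
// job re-deliver anything still PENDING.
public class ReliableMessageService {
    public enum Status { PENDING, SENT }

    private final Map<String, Status> outbox = new LinkedHashMap<>(); // message log "table"
    private final List<String> broker = new ArrayList<>();            // stand-in for MQ
    private boolean brokerUp = true;                                  // simulated broker failure

    public void send(String msgId) {
        outbox.put(msgId, Status.PENDING); // 1. record first (same local tx as business data)
        deliver(msgId);                    // 2. attempt delivery after commit
    }

    // scheduled job: re-deliver everything still pending
    public void redeliverPending() {
        outbox.entrySet().stream()
              .filter(e -> e.getValue() == Status.PENDING)
              .forEach(e -> deliver(e.getKey()));
    }

    private void deliver(String msgId) {
        if (brokerUp) {                    // only a broker ack flips the status to SENT
            broker.add(msgId);
            outbox.put(msgId, Status.SENT);
        }
    }

    public void setBrokerUp(boolean up) { brokerUp = up; }
    public List<String> delivered() { return broker; }
    public Status status(String msgId) { return outbox.get(msgId); }
}
```

If the broker is down when `send` runs, the message stays PENDING and is picked up by the next `redeliverPending` pass, so the business data and the message eventually agree.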
(4) Solving distributed transactions with Seata — import the dependency:
```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
```
Environment setup: download seata-server-0.7.1 and edit `registry.conf` to use Nacos as the registry.
```
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    serverAddr = "#:8848"
    namespace = "public"
    cluster = "default"
  }
}
```
Copy `registry.conf` and `file.conf` into the resources root of every service that joins the distributed transaction, and set `vgroup_mapping.${application.name}-fescar-service-group = "default"` in `file.conf`:
```
service {
  # vgroup->rgroup
  vgroup_mapping.gulimall-ware-fescar-service-group = "default"
  # only support single node
  default.grouplist = "127.0.0.1:8091"
  # degrade current not support
  enableDegrade = false
  # disable
  disable = false
  # unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}
```
Wrap the data source with Seata's proxy:
```java
@Configuration
public class MySeataConfig {

    @Autowired
    DataSourceProperties dataSourceProperties;

    @Bean
    public DataSource dataSource(DataSourceProperties dataSourceProperties) {
        HikariDataSource dataSource = dataSourceProperties.initializeDataSourceBuilder()
                .type(HikariDataSource.class).build();
        if (StringUtils.hasText(dataSourceProperties.getName())) {
            dataSource.setPoolName(dataSourceProperties.getName());
        }
        // hand the real data source to Seata's proxy
        return new DataSourceProxy(dataSource);
    }
}
```
Annotate the entry method of the global transaction with `@GlobalTransactional`, and each branch transaction with `@Transactional`.
```java
@GlobalTransactional
@Transactional
@Override
public SubmitOrderResponseVo submitOrder(OrderSubmitVo submitVo) {
}
```
5. Final consistency via message queues — (1) Delayed queues: definition and implementation

`x-dead-letter-routing-key`: when a message becomes a dead letter, it is re-published with the specified routing key.
With the queues above created for the order module, a message is sent to `order.delay.queue` when an order is created. After the TTL elapses the message becomes a dead letter and is routed with routing key `order.release.order` through the exchange to `order.release.order.queue`; listening on that queue is what drives the handling of expired orders.
(2) When to use a delayed queue

Why not a scheduled task? If an order is created just after a scan finishes, up to two full scan periods can pass before it is seen as expired, so timeliness cannot be guaranteed.
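The TTL + dead-letter behaviour — a message only becomes visible to the consumer once its delay has elapsed — has a JVM-local analogue in `java.util.concurrent.DelayQueue`. This is only a sketch of the semantics; the project uses RabbitMQ so the delay is durable and shared across services:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A DelayQueue element stays invisible to take() until its delay reaches zero,
// mirroring how a TTL'd RabbitMQ message only reaches the dead-letter queue
// after the TTL expires.
public class OrderTimeoutDemo {
    static class DelayedOrder implements Delayed {
        final String orderSn;
        final long expireAt;

        DelayedOrder(String orderSn, long ttlMillis) {
            this.orderSn = orderSn;
            this.expireAt = System.currentTimeMillis() + ttlMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(expireAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static String awaitExpiredOrder(long ttlMillis) {
        DelayQueue<DelayedOrder> queue = new DelayQueue<>();
        queue.put(new DelayedOrder("order-1", ttlMillis));
        try {
            return queue.take().orderSn;   // blocks until the TTL elapses
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Unlike a periodic scan, the consumer is woken exactly when the order expires, with no polling interval to add latency.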
(3) Main flow: scheduled order closing and stock unlocking
When an order is created, a message is sent to `order.delay.queue`; after the TTL elapses it becomes a dead letter and is routed with routing key `order.release.order` through the exchange to `order.release.order.queue`. A listener on that queue handles expired orders:

- If the order is already paid, nothing needs to be done.
- Otherwise the order has expired: update its status and send a message with routing key `order.release.other` to `stock.release.stock.queue` to unlock the stock.
After stock is locked, a message is sent with routing key `stock.locked` to the delayed queue `stock.delay.queue`. When the delay elapses, the dead letter is forwarded with routing key `stock.release` to `stock.release.stock.queue`; a listener on that queue checks the current order status to decide whether the stock should be unlocked.
Since both order closing and stock unlocking may be executed more than once, the business logic must be idempotent: re-query the current status before acting. Both the order-close path and the stock-release path perform stock unlocking, which guarantees the stock is reliably released whether the business failed with an exception or the order simply expired.
(4) Creating the business exchanges and queues
```java
@Configuration
public class MyRabbitmqConfig {

    @Bean
    public Exchange orderEventExchange() {
        return new TopicExchange("order-event-exchange", true, false);
    }

    @Bean
    public Queue orderDelayQueue() {
        HashMap<String, Object> arguments = new HashMap<>();
        arguments.put("x-dead-letter-exchange", "order-event-exchange");
        arguments.put("x-dead-letter-routing-key", "order.release.order");
        arguments.put("x-message-ttl", 60000);
        return new Queue("order.delay.queue", true, false, false, arguments);
    }

    @Bean
    public Queue orderReleaseQueue() {
        Queue queue = new Queue("order.release.order.queue", true, false, false);
        return queue;
    }

    @Bean
    public Binding orderCreateBinding() {
        return new Binding("order.delay.queue", Binding.DestinationType.QUEUE,
                "order-event-exchange", "order.create.order", null);
    }

    @Bean
    public Binding orderReleaseBinding() {
        return new Binding("order.release.order.queue", Binding.DestinationType.QUEUE,
                "order-event-exchange", "order.release.order", null);
    }

    @Bean
    public Binding orderReleaseOrderBinding() {
        return new Binding("stock.release.stock.queue", Binding.DestinationType.QUEUE,
                "order-event-exchange", "order.release.other.#", null);
    }
}
```
```java
@Configuration
public class MyRabbitmqConfig {

    @Bean
    public Exchange stockEventExchange() {
        return new TopicExchange("stock-event-exchange", true, false);
    }

    @Bean
    public Queue stockDelayQueue() {
        HashMap<String, Object> arguments = new HashMap<>();
        arguments.put("x-dead-letter-exchange", "stock-event-exchange");
        arguments.put("x-dead-letter-routing-key", "stock.release");
        arguments.put("x-message-ttl", 120000);
        return new Queue("stock.delay.queue", true, false, false, arguments);
    }

    @Bean
    public Queue stockReleaseStockQueue() {
        return new Queue("stock.release.stock.queue", true, false, false, null);
    }

    @Bean
    public Binding stockLockedBinding() {
        return new Binding("stock.delay.queue", Binding.DestinationType.QUEUE,
                "stock-event-exchange", "stock.locked", null);
    }

    @Bean
    public Binding stockReleaseBinding() {
        return new Binding("stock.release.stock.queue", Binding.DestinationType.QUEUE,
                "stock-event-exchange", "stock.release.#", null);
    }
}
```
(5) Automatic stock unlocking — 1) Locking the stock. The following logic is added when locking stock:

- Because the order may be rolled back, we persist a ware order task (work sheet) so the lock can be traced; it records the order info and the details of each lock (warehouse id, sku id, quantity locked, ...).
- After each successful lock, a message carrying the lock details is sent to the delayed queue.
```java
@Transactional
@Override
public Boolean orderLockStock(WareSkuLockVo wareSkuLockVo) {
    // persist the ware order task so the lock can be traced even if the order rolls back
    WareOrderTaskEntity taskEntity = new WareOrderTaskEntity();
    taskEntity.setOrderSn(wareSkuLockVo.getOrderSn());
    taskEntity.setCreateTime(new Date());
    wareOrderTaskService.save(taskEntity);

    List<OrderItemVo> itemVos = wareSkuLockVo.getLocks();
    List<SkuLockVo> lockVos = itemVos.stream().map((item) -> {
        SkuLockVo skuLockVo = new SkuLockVo();
        skuLockVo.setSkuId(item.getSkuId());
        skuLockVo.setNum(item.getCount());
        List<Long> wareIds = baseMapper.listWareIdsHasStock(item.getSkuId(), item.getCount());
        skuLockVo.setWareIds(wareIds);
        return skuLockVo;
    }).collect(Collectors.toList());

    for (SkuLockVo lockVo : lockVos) {
        boolean lock = true;
        Long skuId = lockVo.getSkuId();
        List<Long> wareIds = lockVo.getWareIds();
        if (wareIds == null || wareIds.size() == 0) {
            throw new NoStockException(skuId);
        } else {
            for (Long wareId : wareIds) {
                Long count = baseMapper.lockWareSku(skuId, lockVo.getNum(), wareId);
                if (count == 0) {
                    lock = false;
                } else {
                    // save the task detail and notify the delayed queue
                    WareOrderTaskDetailEntity detailEntity = WareOrderTaskDetailEntity.builder()
                            .skuId(skuId)
                            .skuName("")
                            .skuNum(lockVo.getNum())
                            .taskId(taskEntity.getId())
                            .wareId(wareId)
                            .lockStatus(1).build();
                    wareOrderTaskDetailService.save(detailEntity);
                    StockLockedTo lockedTo = new StockLockedTo();
                    lockedTo.setId(taskEntity.getId());
                    StockDetailTo detailTo = new StockDetailTo();
                    BeanUtils.copyProperties(detailEntity, detailTo);
                    lockedTo.setDetailTo(detailTo);
                    rabbitTemplate.convertAndSend("stock-event-exchange", "stock.locked", lockedTo);
                    lock = true;
                    break;
                }
            }
        }
        if (!lock) throw new NoStockException(skuId);
    }
    return true;
}
```
2) Listening on the queue

The delayed queue routes expired messages to `stock.release.stock.queue`, and listening on that queue drives the stock unlock. To guarantee reliable delivery we use manual acknowledgement: ack the message after a successful unlock, and requeue it if an exception occurs.
```java
@Slf4j
@Component
@RabbitListener(queues = {"stock.release.stock.queue"})
public class StockReleaseListener {

    @Autowired
    private WareSkuService wareSkuService;

    @RabbitHandler
    public void handleStockLockedRelease(StockLockedTo stockLockedTo, Message message, Channel channel) throws IOException {
        log.info("************************收到库存解锁的消息********************************");
        try {
            wareSkuService.unlock(stockLockedTo);
            channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
        } catch (Exception e) {
            // requeue the message so the unlock can be retried
            channel.basicReject(message.getMessageProperties().getDeliveryTag(), true);
        }
    }
}
```
3) Unlocking the stock

- If the task detail record exists, the stock was locked successfully. Query the latest order status: if the order does not exist (the submission failed and rolled back) or the order is in the cancelled state, release the locked stock.
- If the task detail record is empty, the stock was never locked, so there is nothing to unlock.
- To stay idempotent, both the order status and the task-detail status are checked: the stock is released only when the order is expired/cancelled *and* the detail record still shows the stock as locked.
```java
@Override
public void unlock(StockLockedTo stockLockedTo) {
    StockDetailTo detailTo = stockLockedTo.getDetailTo();
    WareOrderTaskDetailEntity detailEntity = wareOrderTaskDetailService.getById(detailTo.getId());
    // a detail record exists, so the stock was locked successfully
    if (detailEntity != null) {
        WareOrderTaskEntity taskEntity = wareOrderTaskService.getById(stockLockedTo.getId());
        R r = orderFeignService.infoByOrderSn(taskEntity.getOrderSn());
        if (r.getCode() == 0) {
            OrderTo order = r.getData("order", new TypeReference<OrderTo>() {
            });
            // the order was rolled back or cancelled: release the lock (idempotency check on lockStatus)
            if (order == null || order.getStatus() == OrderStatusEnum.CANCLED.getCode()) {
                if (detailEntity.getLockStatus() == WareTaskStatusEnum.Locked.getCode()) {
                    unlockStock(detailTo.getSkuId(), detailTo.getSkuNum(), detailTo.getWareId(), detailEntity.getId());
                }
            }
        } else {
            throw new RuntimeException("远程调用订单服务失败");
        }
    } else {
        // no detail record: the stock was never locked, nothing to unlock
    }
}
```
(6) Scheduled order closing — 1) Submitting the order

```java
@Transactional
@Override
public SubmitOrderResponseVo submitOrder(OrderSubmitVo submitVo) {
    // ... after the order is created, publish it to the delayed queue
    rabbitTemplate.convertAndSend("order-event-exchange", "order.create.order", order.getOrder());
}
```
2) Listening on the queue

The order-creation message goes through the delayed queue and eventually lands in `order.release.order.queue`, so we listen on that queue to close expired orders.
```java
@Component
@RabbitListener(queues = {"order.release.order.queue"})
public class OrderCloseListener {

    @Autowired
    private OrderService orderService;

    @RabbitHandler
    public void listener(OrderEntity orderEntity, Message message, Channel channel) throws IOException {
        System.out.println("收到过期的订单信息，准备关闭订单" + orderEntity.getOrderSn());
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        try {
            orderService.closeOrder(orderEntity);
            channel.basicAck(deliveryTag, false);
        } catch (Exception e) {
            channel.basicReject(deliveryTag, true);
        }
    }
}
```
3) Closing the order

- To stay idempotent, re-query the latest order status and decide whether the order still needs to be closed.
- Closing the order also requires unlocking its stock, so a message is sent so the warehouse and member services release what they hold.
```java
@Override
public void closeOrder(OrderEntity orderEntity) {
    // re-query the latest status to stay idempotent
    OrderEntity newOrderEntity = this.getById(orderEntity.getId());
    if (newOrderEntity.getStatus() == OrderStatusEnum.CREATE_NEW.getCode()) {
        OrderEntity updateOrder = new OrderEntity();
        updateOrder.setId(newOrderEntity.getId());
        updateOrder.setStatus(OrderStatusEnum.CANCLED.getCode());
        this.updateById(updateOrder);
        // notify the warehouse service to release the locked stock
        OrderTo orderTo = new OrderTo();
        BeanUtils.copyProperties(newOrderEntity, orderTo);
        rabbitTemplate.convertAndSend("order-event-exchange", "order.release.other", orderTo);
    }
}
```
4) 解锁库存 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 @Slf4j @Component @RabbitListener(queues = {"stock.release.stock.queue"}) public class StockReleaseListener { @Autowired private WareSkuService wareSkuService; @RabbitHandler public void handleStockLockedRelease (StockLockedTo stockLockedTo, Message message, Channel channel) throws IOException { log.info("************************收到库存解锁的消息********************************" ); try { wareSkuService.unlock(stockLockedTo); channel.basicAck(message.getMessageProperties().getDeliveryTag(), false ); } catch (Exception e) { channel.basicReject(message.getMessageProperties().getDeliveryTag(),true ); } } @RabbitHandler public void handleStockLockedRelease (OrderTo orderTo, Message message, Channel channel) throws IOException { log.info("************************从订单模块收到库存解锁的消息********************************" ); try { wareSkuService.unlock(orderTo); channel.basicAck(message.getMessageProperties().getDeliveryTag(), false ); } catch (Exception e) { channel.basicReject(message.getMessageProperties().getDeliveryTag(),true ); } } }
```java
@Override
public void unlock(OrderTo orderTo) {
    String orderSn = orderTo.getOrderSn();
    WareOrderTaskEntity taskEntity = wareOrderTaskService.getBaseMapper()
            .selectOne(new QueryWrapper<WareOrderTaskEntity>().eq("order_sn", orderSn));
    // release only the details that are still in the Locked state (idempotent)
    List<WareOrderTaskDetailEntity> lockDetails = wareOrderTaskDetailService.list(
            new QueryWrapper<WareOrderTaskDetailEntity>()
                    .eq("task_id", taskEntity.getId())
                    .eq("lock_status", WareTaskStatusEnum.Locked.getCode()));
    for (WareOrderTaskDetailEntity lockDetail : lockDetails) {
        unlockStock(lockDetail.getSkuId(), lockDetail.getSkuNum(), lockDetail.getWareId(), lockDetail.getId());
    }
}
```
6. Payment — (1) How Alipay signing works

Alipay uses RSA asymmetric cryptography with two key pairs, one on the merchant side and one on Alipay's side.

When sending order data, the plaintext is transmitted directly, but a signature created with the merchant's private key is attached; Alipay verifies the signature with the merchant's public key, and the data is accepted only when the plaintext and the signature match.

After a successful payment, Alipay sends the payment-success data together with a signature created with Alipay's private key; the merchant verifies it with Alipay's public key, and trusts the result only after verification succeeds.
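The sign-with-private-key / verify-with-public-key handshake can be demonstrated with the JDK's built-in RSA support — a minimal sketch (the real integration goes through the Alipay SDK's `AlipaySignature`; class and method names here are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sign a payload with the "merchant" private key and verify with the matching
// public key; tampering with the payload makes verification fail.
public class RsaSignDemo {
    public static boolean signAndVerify(String payload, boolean tamper) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair merchantKeys = gen.generateKeyPair();   // merchant's key pair

            // merchant signs the plaintext with its private key
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(merchantKeys.getPrivate());
            signer.update(payload.getBytes(StandardCharsets.UTF_8));
            byte[] signature = signer.sign();

            // the receiver verifies with the merchant's public key
            String received = tamper ? payload + "&total_amount=0.01" : payload;
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(merchantKeys.getPublic());
            verifier.update(received.getBytes(StandardCharsets.UTF_8));
            return verifier.verify(signature);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The same mechanism in the opposite direction (Alipay's private key signs, the merchant verifies with Alipay's public key) protects the payment-success callback.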
(2) Configuring the Alipay sandbox environment

(3) Environment setup — import the Alipay SDK:
```xml
<dependency>
    <groupId>com.alipay.sdk</groupId>
    <artifactId>alipay-sdk-java</artifactId>
    <version>4.9.28.ALL</version>
</dependency>
```
Extract a payment helper class and configure it. A successful call to this API returns the HTML of the payment page, so the controller later returns it with `@ResponseBody`.
```java
@ConfigurationProperties(prefix = "alipay")
@Component
@Data
public class AlipayTemplate {

    private String app_id = "2016102600763190";
    // keys abbreviated as in the source notes
    private String merchant_private_key = "MjXN6Hnj8k2GAriRFt0BS9gjihbl9Rt38VMNbBi3Vt3Cy6TOwANLLJ/DfnYjRqwCG81fkyKlDqdsamdfCiTysCa0gQKBgQDYQ45LSRxAOTyM5NliBmtev0lbpDa7FqXL0UFgBel5VgA1Ysp0+6ex2n73NBHbaVPEXgNMnTdzU3WF9uHF4Gj0mfUzbVMbj/YkkHDOZHBggAjEHCB87IKowq/uAH/++Qes2GipHHCTJlG6yejdxhOsMZXdCRnidNx5yv9+2JI37QKBgQCw0xn7ZeRBIOXxW7xFJw1WecUV7yaL9OWqKRHat3lFtf1Qo/87cLl+KeObvQjjXuUe07UkrS05h6ijWyCFlBo2V7Cdb3qjq4atUwScKfTJONnrF+fwTX0L5QgyQeDX5a4yYp4pLmt6HKh34sI5S/RSWxDm7kpj+/MjCZgp6Xc51g==";
    private String alipay_public_key = "MIIBIjA74UKxt2F8VMIRKrRAAAuIMuawIsl4Ye+G12LK8P1ZLYy7ZJpgZ+Wv5nOs3DdoEazgCERj/ON8lM1KBHZOAV+TkrIcyi7cD1gfv4a1usikrUqm8/qhFvoiUfyHJFv1ymT7C4BI6aHzQ2zcUlSQPGoPl4C11tgnSkm3DlH2JZKgaIMcCOnNH+qctjNh9yIV9zat2qUiXbxmrCTtxAmiI3I+eVsUNwvwIDAQAB";
    private String notify_url = "http://**.natappfree.cc/payed/notify";
    private String return_url = "http://order.gulimall.com/memberOrder.html";
    private String sign_type = "RSA2";
    private String charset = "utf-8";
    private String gatewayUrl = "https://openapi.alipaydev.com/gateway.do";

    public String pay(PayVo vo) throws AlipayApiException {
        AlipayClient alipayClient = new DefaultAlipayClient(gatewayUrl, app_id, merchant_private_key,
                "json", charset, alipay_public_key, sign_type);
        AlipayTradePagePayRequest alipayRequest = new AlipayTradePagePayRequest();
        alipayRequest.setReturnUrl(return_url);
        alipayRequest.setNotifyUrl(notify_url);
        String out_trade_no = vo.getOut_trade_no();
        String total_amount = vo.getTotal_amount();
        String subject = vo.getSubject();
        String body = vo.getBody();
        alipayRequest.setBizContent("{\"out_trade_no\":\"" + out_trade_no + "\","
                + "\"total_amount\":\"" + total_amount + "\","
                + "\"subject\":\"" + subject + "\","
                + "\"body\":\"" + body + "\","
                + "\"product_code\":\"FAST_INSTANT_TRADE_PAY\"}");
        String result = alipayClient.pageExecute(alipayRequest).getBody();
        System.out.println("支付宝的响应:" + result);
        return result;
    }
}
```
(4) Order payment and the synchronous return — clicking Pay hits the payment endpoint:
```java
@ResponseBody
@GetMapping(value = "/aliPayOrder", produces = "text/html")
public String aliPayOrder(@RequestParam("orderSn") String orderSn) throws AlipayApiException {
    System.out.println("接收到订单信息orderSn:" + orderSn);
    PayVo payVo = orderService.getOrderPay(orderSn);
    String pay = alipayTemplate.pay(payVo);
    return pay;
}

@Override
public PayVo getOrderPay(String orderSn) {
    OrderEntity orderEntity = this.getOne(new QueryWrapper<OrderEntity>().eq("order_sn", orderSn));
    PayVo payVo = new PayVo();
    payVo.setOut_trade_no(orderSn);
    BigDecimal payAmount = orderEntity.getPayAmount().setScale(2, BigDecimal.ROUND_UP);
    payVo.setTotal_amount(payAmount.toString());
    List<OrderItemEntity> orderItemEntities = orderItemService.list(
            new QueryWrapper<OrderItemEntity>().eq("order_sn", orderSn));
    OrderItemEntity orderItemEntity = orderItemEntities.get(0);
    payVo.setSubject(orderItemEntity.getSkuName());
    payVo.setBody(orderItemEntity.getSkuAttrsVals());
    return payVo;
}
```
Set the synchronous return address to the order list page:
```java
private String return_url = "http://order.gulimall.com/memberOrder.html";

@RequestMapping("/memberOrder.html")
public String memberOrder(@RequestParam(value = "pageNum", required = false, defaultValue = "0") Integer pageNum, Model model) {
    Map<String, Object> params = new HashMap<>();
    params.put("page", pageNum.toString());
    PageUtils page = orderService.getMemberOrderPage(params);
    model.addAttribute("pageUtil", page);
    return "list";
}
```
(5) Asynchronous notification

After the payment succeeds, Alipay calls back a merchant endpoint, and the order status must be updated there. The synchronous redirect may fail because of network problems, so the asynchronous notification is the authoritative signal. Alipay uses the best-effort notification scheme to guarantee consistency: it keeps notifying the merchant at intervals until the merchant responds with `success`.
1) Setting the notification address with an intranet tunnel

Map the public tunnel URL to the local `order.gulimall.com:80`. Because the callback's `Host` header is not `order.gulimall.com`, nginx would forward it to the gateway without a matching service, so nginx must be configured to forward `/payed/notify` notifications to the order service. Then set the asynchronous notification address:
```java
private String notify_url = "http://****.natappfree.cc/payed/notify";
```
2) Verifying the signature

```java
@PostMapping("/payed/notify")
public String handlerAlipay(HttpServletRequest request, PayAsyncVo payAsyncVo) throws AlipayApiException {
    System.out.println("收到支付宝异步通知******************");
    // flatten the request parameters into a Map for signature verification
    Map<String, String> params = new HashMap<>();
    Map<String, String[]> requestParams = request.getParameterMap();
    for (String name : requestParams.keySet()) {
        String[] values = requestParams.get(name);
        String valueStr = "";
        for (int i = 0; i < values.length; i++) {
            valueStr = (i == values.length - 1) ? valueStr + values[i] : valueStr + values[i] + ",";
        }
        params.put(name, valueStr);
    }
    boolean signVerified = AlipaySignature.rsaCheckV1(params, alipayTemplate.getAlipay_public_key(),
            alipayTemplate.getCharset(), alipayTemplate.getSign_type());
    if (signVerified) {
        System.out.println("支付宝异步通知验签成功");
        orderService.handlerPayResult(payAsyncVo);
        return "success";
    } else {
        System.out.println("支付宝异步通知验签失败");
        return "error";
    }
}
```
3) Updating the order status and saving the transaction record

```java
@Override
public void handlerPayResult(PayAsyncVo payAsyncVo) {
    // save the payment transaction record
    PaymentInfoEntity infoEntity = new PaymentInfoEntity();
    String orderSn = payAsyncVo.getOut_trade_no();
    infoEntity.setOrderSn(orderSn);
    infoEntity.setAlipayTradeNo(payAsyncVo.getTrade_no());
    infoEntity.setSubject(payAsyncVo.getSubject());
    String trade_status = payAsyncVo.getTrade_status();
    infoEntity.setPaymentStatus(trade_status);
    infoEntity.setCreateTime(new Date());
    infoEntity.setCallbackTime(payAsyncVo.getNotify_time());
    paymentInfoService.save(infoEntity);
    // mark the order as paid on success
    if (trade_status.equals("TRADE_SUCCESS") || trade_status.equals("TRADE_FINISHED")) {
        baseMapper.updateOrderStatus(orderSn, OrderStatusEnum.PAYED.getCode(), PayConstant.ALIPAY);
    }
}
```
4) Parameters of the asynchronous notification

```java
@PostMapping("/payed/notify")
public String handlerAlipay(HttpServletRequest request) {
    System.out.println("收到支付宝异步通知******************");
    Map<String, String[]> parameterMap = request.getParameterMap();
    for (String key : parameterMap.keySet()) {
        String value = request.getParameter(key);
        System.out.println("key:" + key + "===========>value:" + value);
    }
    return "success";
}
```
```
收到支付宝异步通知******************
key:gmt_create===========>value:2020-10-18 09:13:26
key:charset===========>value:utf-8
key:gmt_payment===========>value:2020-10-18 09:13:34
key:notify_time===========>value:2020-10-18 09:13:35
key:subject===========>value:华为
key:sign===========>value:aqhKWzgzTLE84Scy5d8i3f+t9f7t7IE5tK/s5iHf3SdFQXPnTt6MEVtbr15ZXmITEo015nCbSXaUFJvLiAhWpvkNEd6ysraa+2dMgotuHPIHnIUFwvdk+U4Ez+2A4DBTJgmwtc5Ay8mYLpHLNR9ASuEmkxxK2F3Ov6MO0d+1DOjw9c/CCRRBWR8NHSJePAy/UxMzULLtpMELQ1KUVHLgZC5yym5TYSuRmltYpLHOuoJhJw8vGkh2+4FngvjtS7SBhEhR1GvJCYm1iXRFTNgP9Fmflw+EjxrDafCIA+r69ZqoJJ2Sk1hb4cBsXgNrFXR2Uj4+rQ1Ec74bIjT98f1KpA==
key:buyer_id===========>value:2088622954825223
key:body===========>value:上市年份:2020;内存:64G
key:invoice_amount===========>value:6300.00
key:version===========>value:1.0
key:notify_id===========>value:2020101800222091334025220507700182
key:fund_bill_list===========>value:[{"amount":"6300.00","fundChannel":"ALIPAYACCOUNT"}]
key:notify_type===========>value:trade_status_sync
key:out_trade_no===========>value:12345523123
key:total_amount===========>value:6300.00
key:trade_status===========>value:TRADE_SUCCESS
key:trade_no===========>value:2020101822001425220501264292
key:auth_app_id===========>value:2016102600763190
key:receipt_amount===========>value:6300.00
key:point_amount===========>value:0.00
key:app_id===========>value:2016102600763190
key:buyer_pay_amount===========>value:6300.00
key:sign_type===========>value:RSA2
key:seller_id===========>value:2088102181115314
```
For the detailed meaning of each parameter, see the Alipay open platform documentation on asynchronous notifications.
(6) Closing the trade
An order may expire and have its stock unlocked, yet a late payment could still succeed and flip the order status afterwards. To avoid this, a payment validity period is set so that payment is only possible within it.
```java
alipayRequest.setBizContent("{\"out_trade_no\":\"" + out_trade_no + "\","
        + "\"total_amount\":\"" + total_amount + "\","
        + "\"subject\":\"" + subject + "\","
        + "\"body\":\"" + body + "\","
        + "\"timeout_express\":\"1m\","   // payment must complete within 1 minute
        + "\"product_code\":\"FAST_INSTANT_TRADE_PAY\"}");
```
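Hand-concatenating the bizContent JSON is error-prone. The same string can be assembled by a small helper that keeps the quoting in one place (this helper is hypothetical, not part of the project; the values in `main` are placeholders taken from the sample notification):

```java
// Builds the Alipay bizContent JSON the same way the concatenation above does.
class BizContentBuilder {
    static String build(String outTradeNo, String totalAmount, String subject,
                        String body, String timeoutExpress) {
        return "{\"out_trade_no\":\"" + outTradeNo + "\","
             + "\"total_amount\":\"" + totalAmount + "\","
             + "\"subject\":\"" + subject + "\","
             + "\"body\":\"" + body + "\","
             + "\"timeout_express\":\"" + timeoutExpress + "\","
             + "\"product_code\":\"FAST_INSTANT_TRADE_PAY\"}";
    }

    public static void main(String[] args) {
        // "1m" limits payment to one minute, matching the order-expiry scenario above
        System.out.println(build("12345523123", "6300.00", "华为", "内存:64G", "1m"));
    }
}
```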
Order display after the timeout
Seckill Service
1. Concerns of a seckill (high-concurrency) system
2. Seckill architecture design
(1) Seckill architecture diagram
The seckill module is deployed independently as gulimall-seckill.
A scheduled task lists the latest seckill products at 3 AM every day, reducing pressure during peak hours.
Seckill links are protected: each seckill product gets a unique random code, and the interface is exposed only once the seckill starts.
Stock pre-warming: part of the stock is deducted from the database ahead of time and stored in Redis as a Redisson semaphore.
Queue-based peak shaving: a successful seckill returns immediately, and the order is created asynchronously by sending a message.
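Conceptually, the Redisson semaphore used for stock pre-warming behaves like `java.util.concurrent.Semaphore`, only distributed over Redis. A minimal local sketch of the deduction idea (the class and method names here are illustrative, not from the project):

```java
import java.util.concurrent.Semaphore;

// Local stand-in for the Redis-backed stock semaphore: permits = pre-warmed stock.
class SeckillStock {
    private final Semaphore permits;

    SeckillStock(int prewarmedCount) {
        this.permits = new Semaphore(prewarmedCount);
    }

    // Non-blocking deduction: succeeds only while pre-warmed stock remains,
    // so excess requests fail fast instead of hitting the database.
    boolean tryDeduct() {
        return permits.tryAcquire();
    }

    int remaining() {
        return permits.availablePermits();
    }

    public static void main(String[] args) {
        SeckillStock stock = new SeckillStock(2);
        System.out.println(stock.tryDeduct()); // true
        System.out.println(stock.tryDeduct()); // true
        System.out.println(stock.tryDeduct()); // false, stock exhausted
    }
}
```

Redisson's `RSemaphore` offers the same acquire/release semantics across JVMs, which is why it fits the distributed stock counter role.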
(2) Storage model design
The List stored for each seckill session holds "sessionId-skuId" entries, each of which can be used as a hash key to look up the corresponding product data under SECKILL_CHARE_PREFIX.
```java
// Prefix of the List that stores "sessionId-skuId" members for each seckill session
private final String SESSION_CACHE_PREFIX = "seckill:sessions:";
// Hash that stores each seckill sku's JSON, keyed by "sessionId-skuId"
private final String SECKILL_CHARE_PREFIX = "seckill:skus";
// Prefix of the semaphore guarding each sku's seckill stock, keyed by random code
private final String SKU_STOCK_SEMAPHORE = "seckill:stock:";
```
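How these three prefixes combine into concrete Redis keys can be sketched with plain string concatenation (the ids and timestamps below are made up for illustration; the helper class is not part of the project):

```java
// Illustrates the Redis key layout used by the seckill module.
class SeckillKeys {
    static final String SESSION_CACHE_PREFIX = "seckill:sessions:";
    static final String SECKILL_CHARE_PREFIX = "seckill:skus";
    static final String SKU_STOCK_SEMAPHORE = "seckill:stock:";

    // List key: one list per session, named after its time window in epoch millis
    static String sessionKey(long startMillis, long endMillis) {
        return SESSION_CACHE_PREFIX + startMillis + "_" + endMillis;
    }

    // Field inside the "seckill:skus" hash: "sessionId-skuId"
    static String skuHashKey(long sessionId, long skuId) {
        return sessionId + "-" + skuId;
    }

    // Semaphore key: the prefix plus the per-sku random code
    static String stockKey(String randomCode) {
        return SKU_STOCK_SEMAPHORE + randomCode;
    }

    public static void main(String[] args) {
        System.out.println(sessionKey(1603010000000L, 1603017200000L));
        System.out.println(SECKILL_CHARE_PREFIX + " -> field " + skuHashKey(1L, 49L));
        System.out.println(stockKey("1c9e8d"));
    }
}
```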
The result after storage
The TO (transfer object) used for storage
```java
@Data
public class SeckillSkuRedisTo {
    private Long id;
    private Long promotionId;
    private Long promotionSessionId;
    private Long skuId;
    private BigDecimal seckillPrice;
    private Integer seckillCount;
    private Integer seckillLimit;
    private Integer seckillSort;
    private SkuInfoVo skuInfoVo;
    private Long startTime;
    private Long endTime;
    private String randomCode;
}
```
3. Listing the products
(1) Scheduled listing
```java
private final String upload_lock = "seckill:upload:lock";

// Runs at 3 AM every day; the distributed lock keeps multiple instances from uploading twice
@Async
@Scheduled(cron = "0 0 3 * * ?")
public void uploadSeckillSkuLatest3Days() {
    RLock lock = redissonClient.getLock(upload_lock);
    try {
        lock.lock(10, TimeUnit.SECONDS);
        secKillService.uploadSeckillSkuLatest3Days();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        lock.unlock();
    }
}

@Override
public void uploadSeckillSkuLatest3Days() {
    R r = couponFeignService.getSeckillSessionsIn3Days();
    if (r.getCode() == 0) {
        List<SeckillSessionWithSkusVo> sessions =
                r.getData(new TypeReference<List<SeckillSessionWithSkusVo>>() {});
        saveSecKillSession(sessions);
        saveSecKillSku(sessions);
    }
}
```
(2) Get the seckill info for the next three days
Fetch the seckill session info for the next three days, then query the corresponding product info by session id.
```java
@Override
public List<SeckillSessionEntity> getSeckillSessionsIn3Days() {
    QueryWrapper<SeckillSessionEntity> queryWrapper = new QueryWrapper<SeckillSessionEntity>()
            .between("start_time", getStartTime(), getEndTime());
    List<SeckillSessionEntity> seckillSessionEntities = this.list(queryWrapper);
    List<SeckillSessionEntity> list = seckillSessionEntities.stream().map(session -> {
        List<SeckillSkuRelationEntity> skuRelationEntities = seckillSkuRelationService.list(
                new QueryWrapper<SeckillSkuRelationEntity>().eq("promotion_session_id", session.getId()));
        session.setRelations(skuRelationEntities);
        return session;
    }).collect(Collectors.toList());
    return list;
}

// 00:00:00 today
private String getStartTime() {
    LocalDate now = LocalDate.now();
    LocalDateTime time = now.atTime(LocalTime.MIN);
    return time.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
}

// 23:59:59 the day after tomorrow, so the window covers three calendar days
private String getEndTime() {
    LocalDate now = LocalDate.now();
    LocalDateTime time = now.plusDays(2).atTime(LocalTime.MAX);
    return time.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
}
```
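The two helpers above always produce a three-calendar-day window: 00:00:00 today through 23:59:59 the day after tomorrow. A self-contained variant that takes the reference date as a parameter, so the window can be verified deterministically (class and method names are illustrative):

```java
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

// Computes the three-day seckill query window for a given reference date.
class SeckillWindow {
    private static final DateTimeFormatter F = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Start of the window: midnight of the reference day
    static String startTime(LocalDate day) {
        return day.atTime(LocalTime.MIN).format(F);
    }

    // End of the window: last second of day + 2, i.e. three calendar days in total
    static String endTime(LocalDate day) {
        return day.plusDays(2).atTime(LocalTime.MAX).format(F);
    }

    public static void main(String[] args) {
        LocalDate day = LocalDate.of(2020, 10, 18);
        System.out.println(startTime(day)); // 2020-10-18 00:00:00
        System.out.println(endTime(day));   // 2020-10-20 23:59:59
    }
}
```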
(3) Save the seckill session info in Redis

```java
private void saveSecKillSession(List<SeckillSessionWithSkusVo> sessions) {
    sessions.stream().forEach(session -> {
        String key = SESSION_CACHE_PREFIX + session.getStartTime().getTime()
                + "_" + session.getEndTime().getTime();
        // only upload each session once
        if (!redisTemplate.hasKey(key)) {
            List<String> values = session.getRelations().stream()
                    .map(sku -> sku.getPromotionSessionId() + "-" + sku.getSkuId())
                    .collect(Collectors.toList());
            redisTemplate.opsForList().leftPushAll(key, values);
        }
    });
}
```
(4) Save the seckill product info in Redis

```java
private void saveSecKillSku(List<SeckillSessionWithSkusVo> sessions) {
    BoundHashOperations<String, Object, Object> ops = redisTemplate.boundHashOps(SECKILL_CHARE_PREFIX);
    sessions.stream().forEach(session -> {
        session.getRelations().stream().forEach(sku -> {
            String key = sku.getPromotionSessionId() + "-" + sku.getSkuId();
            if (!ops.hasKey(key)) {
                SeckillSkuRedisTo redisTo = new SeckillSkuRedisTo();
                BeanUtils.copyProperties(sku, redisTo);
                redisTo.setStartTime(session.getStartTime().getTime());
                redisTo.setEndTime(session.getEndTime().getTime());
                // query the sku details through the product service
                R r = productFeignService.info(sku.getSkuId());
                if (r.getCode() == 0) {
                    SkuInfoVo skuInfo = r.getData("skuInfo", new TypeReference<SkuInfoVo>() {});
                    redisTo.setSkuInfoVo(skuInfo);
                }
                // random code: the seckill URL is only valid when this token is presented
                String token = UUID.randomUUID().toString().replace("-", "");
                redisTo.setRandomCode(token);
                String jsonString = JSON.toJSONString(redisTo);
                ops.put(key, jsonString);
                // pre-warm the stock as a distributed semaphore
                RSemaphore semaphore = redissonClient.getSemaphore(SKU_STOCK_SEMAPHORE + token);
                semaphore.trySetPermits(sku.getSeckillCount());
            }
        });
    });
}
```
4. Get the products in the current seckill session

```java
@GetMapping(value = "/getCurrentSeckillSkus")
@ResponseBody
public R getCurrentSeckillSkus() {
    List<SeckillSkuRedisTo> vos = secKillService.getCurrentSeckillSkus();
    return R.ok().setData(vos);
}

@Override
public List<SeckillSkuRedisTo> getCurrentSeckillSkus() {
    // scan all session keys and pick the one whose time window covers now
    Set<String> keys = redisTemplate.keys(SESSION_CACHE_PREFIX + "*");
    long currentTime = System.currentTimeMillis();
    for (String key : keys) {
        String replace = key.replace(SESSION_CACHE_PREFIX, "");
        String[] split = replace.split("_");
        long startTime = Long.parseLong(split[0]);
        long endTime = Long.parseLong(split[1]);
        if (currentTime > startTime && currentTime < endTime) {
            List<String> range = redisTemplate.opsForList().range(key, -100, 100);
            BoundHashOperations<String, Object, Object> ops = redisTemplate.boundHashOps(SECKILL_CHARE_PREFIX);
            List<SeckillSkuRedisTo> collect = range.stream().map(s -> {
                String json = (String) ops.get(s);
                return JSON.parseObject(json, SeckillSkuRedisTo.class);
            }).collect(Collectors.toList());
            return collect;
        }
    }
    return null;
}
```
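The session selection above hinges on parsing the time window back out of the key name. Isolated as a small helper (hypothetical, for illustration only), the check looks like this:

```java
// Decides whether a session key's encoded window covers the given instant.
class SessionKeyParser {
    static final String PREFIX = "seckill:sessions:";

    // Keys look like "seckill:sessions:<startMillis>_<endMillis>"
    static boolean isCurrent(String key, long now) {
        String[] parts = key.replace(PREFIX, "").split("_");
        long start = Long.parseLong(parts[0]);
        long end = Long.parseLong(parts[1]);
        return now > start && now < end;
    }

    public static void main(String[] args) {
        String key = "seckill:sessions:100_200";
        System.out.println(isCurrent(key, 150L)); // true, inside the window
        System.out.println(isCurrent(key, 250L)); // false, already ended
    }
}
```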
The home page fetches and assembles the data
```html
<div class="swiper-slide">
    <ul id="seckillSkuContent"></ul>
</div>
<script type="text/javascript">
    $.get("http://seckill.gulimall.com/getCurrentSeckillSkus", function (res) {
        if (res.data.length > 0) {
            res.data.forEach(function (item) {
                $("<li onclick='toDetail(" + item.skuId + ")'></li>")
                    .append($("<img style='width: 130px; height: 130px' src='" + item.skuInfoVo.skuDefaultImg + "' />"))
                    .append($("<p>" + item.skuInfoVo.skuTitle + "</p>"))
                    .append($("<span>" + item.seckillPrice + "</span>"))
                    .append($("<s>" + item.skuInfoVo.price + "</s>"))
                    .appendTo("#seckillSkuContent");
            })
        }
    })

    function toDetail(skuId) {
        location.href = "http://item.gulimall.com/" + skuId + ".html";
    }
</script>
```
Home page display
5. Get the seckill info for a given product

```java
@ResponseBody
@GetMapping(value = "/getSeckillSkuInfo/{skuId}")
public R getSeckillSkuInfo(@PathVariable("skuId") Long skuId) {
    SeckillSkuRedisTo to = secKillService.getSeckillSkuInfo(skuId);
    return R.ok().setData(to);
}

@Override
public SeckillSkuRedisTo getSeckillSkuInfo(Long skuId) {
    BoundHashOperations<String, String, String> ops = redisTemplate.boundHashOps(SECKILL_CHARE_PREFIX);
    Set<String> keys = ops.keys();
    for (String key : keys) {
        // hash keys look like "sessionId-skuId"
        if (Pattern.matches("\\d-" + skuId, key)) {
            String v = ops.get(key);
            SeckillSkuRedisTo redisTo = JSON.parseObject(v, SeckillSkuRedisTo.class);
            if (redisTo != null) {
                long current = System.currentTimeMillis();
                // expose the random code only while the seckill is running
                if (redisTo.getStartTime() < current && redisTo.getEndTime() > current) {
                    return redisTo;
                }
                redisTo.setRandomCode(null);
                return redisTo;
            }
        }
    }
    return null;
}
```
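One caveat with the lookup above: `Pattern.matches("\\d-" + skuId, key)` matches exactly one digit before the dash, so it stops finding products once session ids exceed 9; `\\d+-` covers ids of any length. A quick demonstration (the helper class is for illustration only):

```java
import java.util.regex.Pattern;

// Compares the original single-digit session-id pattern with a multi-digit variant.
class SeckillKeyMatch {
    // Original pattern: exactly one digit before "-skuId"
    static boolean matchesSingleDigit(String key, long skuId) {
        return Pattern.matches("\\d-" + skuId, key);
    }

    // Safer variant: one or more digits for the session id
    static boolean matchesMultiDigit(String key, long skuId) {
        return Pattern.matches("\\d+-" + skuId, key);
    }

    public static void main(String[] args) {
        System.out.println(matchesSingleDigit("1-49", 49L));  // true
        System.out.println(matchesSingleDigit("12-49", 49L)); // false: session 12 is missed
        System.out.println(matchesMultiDigit("12-49", 49L));  // true
    }
}
```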
Query the corresponding seckill info in the product detail page interface.
Adjust what the product detail page displays.
```html
<li style="color: red" th:if="${item.seckillSkuVo != null}">
    <span th:if="${#dates.createNow().getTime() < item.seckillSkuVo.startTime}">
        商品将会在[[${#dates.format(new java.util.Date(item.seckillSkuVo.startTime),"yyyy-MM-dd HH:mm:ss")}]]进行秒杀
    </span>
    <span th:if="${#dates.createNow().getTime() >= item.seckillSkuVo.startTime && #dates.createNow().getTime() <= item.seckillSkuVo.endTime}">
        秒杀价 [[${#numbers.formatDecimal(item.seckillSkuVo.seckillPrice,1,2)}]]
    </span>
</li>

<div class="box-btns-two" th:if="${item.seckillSkuVo == null}">
    <a class="addToCart" href="http://cart.gulimall.com/addToCart" th:attr="skuId=${item.info.skuId}">
        加入购物车
    </a>
</div>
<div class="box-btns-two" th:if="${item.seckillSkuVo != null && (#dates.createNow().getTime() >= item.seckillSkuVo.startTime && #dates.createNow().getTime() <= item.seckillSkuVo.endTime)}">
    <a class="seckill" href="#" th:attr="skuId=${item.info.skuId},sessionId=${item.seckillSkuVo.promotionSessionId},code=${item.seckillSkuVo.randomCode}">
        立即抢购
    </a>
</div>
```
Page display
6. Seckill
(1) Seckill interface
Page redirect effect
Sentinel: Flow Control, Circuit Breaking and Degradation
For Sentinel basics, see the official documentation.
1. Environment setup
Import the dependencies
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>2.1.8.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>
```
Basic configuration
```yaml
spring:
  cloud:
    sentinel:
      transport:
        dashboard: localhost:8080

management:
  endpoints:
    web:
      exposure:
        include: '*'
```
Flow-control rule settings
Effect when flow control is triggered
2. Customizing the flow-control response

```java
@Component
public class GulimallSentinelConfig implements UrlBlockHandler {
    @Override
    public void blocked(HttpServletRequest request, HttpServletResponse response, BlockException ex) throws IOException {
        R r = R.error(BizCodeEnum.SECKILL_EXCEPTION.getCode(), BizCodeEnum.SECKILL_EXCEPTION.getMsg());
        response.setContentType("application/json;charset=utf-8");
        response.getWriter().write(JSON.toJSONString(r));
    }
}
```
3. Gateway flow control
Applying flow control at the gateway layer keeps blocked requests from ever reaching the business services, reducing their load.
```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-alibaba-sentinel-gateway</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>
```
4. Flow control and degradation for Feign
By default, Sentinel does not monitor Feign calls; this has to be enabled in the configuration.
```yaml
feign:
  sentinel:
    enabled: true
```
Effect after enabling
Feign degradation
Set the fallback attribute on the @FeignClient annotation.
```java
@FeignClient(value = "gulimall-seckill", fallback = SeckillFallbackService.class)
public interface SeckillFeignService {
    @ResponseBody
    @GetMapping(value = "/getSeckillSkuInfo/{skuId}")
    R getSeckillSkuInfo(@PathVariable("skuId") Long skuId);
}
```
Implement the corresponding Feign interface in the fallback class and override the methods to be degraded.
```java
@Component
public class SeckillFallbackService implements SeckillFeignService {
    @Override
    public R getSeckillSkuInfo(Long skuId) {
        return R.error(BizCodeEnum.READ_TIME_OUT_EXCEPTION.getCode(), BizCodeEnum.READ_TIME_OUT_EXCEPTION.getMsg());
    }
}
```
Degradation effect
When the remote service is rate-limited or unavailable, the fallback is triggered, as shown below.
Zipkin Distributed Tracing
A microservice project has many modules with complex call relationships between them, so Zipkin is used to trace the call chains and analyze how requests flow through the system.
1. Environment setup
Download the jar and run it
https://dl.bintray.com/openzipkin/maven/io/zipkin/java/zipkin-server/
Import the dependency
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
```
Configuration
```yaml
spring:
  zipkin:
    base-url: http://localhost:9411
    sender:
      type: web                       # report spans over HTTP
    discovery-client-enabled: false   # base-url is a plain URL, not a service name
  sleuth:
    sampler:
      probability: 1                  # sample 100% of requests
```
2. Querying a call chain
It shows the request method, timing, whether the call was asynchronous, and other details.
3. Querying dependencies