Cloud Native: Hive on k8s Environment Deployment

1. Overview
Hive is a data warehouse (DW) built on top of Hadoop. It maps structured data files to database tables and provides an SQL-like query language, making it a system for storing, analyzing, and reporting on data. This article covers deployment only; for the underlying concepts, see my earlier article: 大数据Hadoop之——数据仓库Hive (Big Data Hadoop: The Hive Data Warehouse).
[Figure: Hive architecture]
[Figure: Hive client architecture]
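The file-to-table mapping is easiest to see with a tiny example. The sketch below (made-up paths and data; runnable once the cluster deployed below is up) puts a plain CSV into HDFS and exposes it as a queryable table:

# Upload a small CSV to HDFS (hypothetical path and data)
hdfs dfs -mkdir -p /tmp/demo
echo -e "1,alice\n2,bob" | hdfs dfs -put - /tmp/demo/users.csv

# Map the directory to a table and query it with SQL
beeline -u jdbc:hive2://localhost:10000 -n admin -e "
CREATE EXTERNAL TABLE IF NOT EXISTS demo_users (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/tmp/demo';
SELECT * FROM demo_users;"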
2. Deployment
Because Hive depends on Hadoop, the Hive components here are folded into the Hadoop HA on k8s orchestration. For more background, see: 【云原生】Hadoop HA on k8s 环境部署 (Cloud Native: Hadoop HA on k8s Environment Deployment).

1) Build the image
Dockerfile

FROM myharbor.com/bigdata/centos:7.9.2009

RUN rm -f /etc/localtime && ln -sv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone

RUN export LANG=zh_CN.UTF-8

# Create the user and group; the UID must match
# spec.template.spec.containers.securityContext.runAsUser: 9999 in the YAML orchestration
RUN groupadd --system --gid=9999 admin && useradd --system --home-dir /home/admin --uid=9999 --gid=admin admin

# Install sudo
RUN yum -y install sudo ; chmod 640 /etc/sudoers

# Grant admin passwordless sudo
RUN echo "admin ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

RUN yum -y install net-tools telnet wget

RUN mkdir /opt/apache/

ADD jdk-8u212-linux-x64.tar.gz /opt/apache/

ENV JAVA_HOME=/opt/apache/jdk1.8.0_212
ENV PATH=$JAVA_HOME/bin:$PATH

ENV HADOOP_VERSION 3.3.2
ENV HADOOP_HOME=/opt/apache/hadoop

ENV HADOOP_COMMON_HOME=${HADOOP_HOME} \
    HADOOP_HDFS_HOME=${HADOOP_HOME} \
    HADOOP_MAPRED_HOME=${HADOOP_HOME} \
    HADOOP_YARN_HOME=${HADOOP_HOME} \
    HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop \
    PATH=${PATH}:${HADOOP_HOME}/bin

#RUN curl --silent --output /tmp/hadoop.tgz https://ftp-stud.hs-esslingen.de/pub/Mirrors/ftp.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz && tar --directory /opt/apache -xzf /tmp/hadoop.tgz && rm /tmp/hadoop.tgz
ADD hadoop-${HADOOP_VERSION}.tar.gz /opt/apache
RUN ln -s /opt/apache/hadoop-${HADOOP_VERSION} ${HADOOP_HOME}

ENV HIVE_VERSION 3.1.2
ADD hive-${HIVE_VERSION}.tar.gz /opt/apache/
ENV HIVE_HOME=/opt/apache/hive
ENV PATH=$HIVE_HOME/bin:$PATH
RUN ln -s /opt/apache/hive-${HIVE_VERSION} ${HIVE_HOME}

RUN chown -R admin:admin /opt/apache

WORKDIR /opt/apache

# HDFS ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000

# MapReduce ports
EXPOSE 19888

# YARN ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088

# Other ports
EXPOSE 49707 2122
Build the image:

docker build -t myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2 . --no-cache

### Option reference
# -t: name (and tag) of the resulting image
# . : build context; the Dockerfile sits in the current directory
# -f: path to an alternative Dockerfile
# --no-cache: do not use the layer cache

docker push myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2
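Before pushing, it can be worth a quick smoke test that the JDK, Hadoop, and Hive all landed where the ENV variables in the Dockerfile expect them (the tag is the one built above):

docker run --rm myharbor.com/bigdata/hadoop-hive:v3.3.2-3.1.2 \
  sh -c "java -version && hadoop version && hive --version"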
2) Add the Metastore service orchestration

1. Configuration

hadoop/templates/hive/hive-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "hadoop.fullname" . }}-hive
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
data:
  hive-site.xml: |
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive_remote/warehouse</value>
      </property>
      <property>
        <name>hive.metastore.local</name>
        <value>false</value>
      </property>
      <!-- remote MySQL address -->
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://mysql-primary-headless.mysql:3306/hive_metastore?createDatabaseIfNotExist=true&amp;useSSL=false&amp;serverTimezone=Asia/Shanghai</value>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.cj.jdbc.Driver</value>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>WyfORdvwVm</value>
      </property>
      <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
      </property>
      <property>
        <name>system:user.name</name>
        <value>root</value>
        <description>user name</description>
      </property>
      <property>
        <name>hive.metastore.uris</name>
        <value>thrift://{{ include "hadoop.fullname" . }}-hive-metastore.{{ .Release.Namespace }}.svc.cluster.local:9083</value>
      </property>
      <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>0.0.0.0</value>
        <description>Bind host on which to run the HiveServer2 Thrift service.</description>
      </property>
      <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
      </property>
      <property>
        <name>hive.server2.active.passive.ha.enable</name>
        <value>true</value>
      </property>
    </configuration>
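Before installing, you can confirm the ConfigMap renders as expected. A quick check, assuming the chart directory and release name used later in this article:

helm template hadoop-ha ./hadoop -n hadoop-ha -s templates/hive/hive-configmap.yaml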
2. Controller

hadoop/templates/hive/metastore-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-metastore
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/hadoop-configmap.yaml") . | sha256sum }}
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
spec:
  serviceName: {{ include "hadoop.fullname" . }}-hive-metastore
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hadoop.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: hive-metastore
  replicas: {{ .Values.hive.metastore.replicas }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hadoop.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: hive-metastore
    spec:
      affinity:
        podAntiAffinity:
        {{- if eq .Values.antiAffinity "hard" }}
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: {{ include "hadoop.name" . }}
                app.kubernetes.io/instance: {{ .Release.Name }}
                app.kubernetes.io/component: hive-metastore
        {{- else if eq .Values.antiAffinity "soft" }}
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 5
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: {{ include "hadoop.name" . }}
                  app.kubernetes.io/instance: {{ .Release.Name }}
                  app.kubernetes.io/component: hive-metastore
        {{- end }}
      terminationGracePeriodSeconds: 0
      initContainers:
      - name: wait-nn
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ["sh", "-c", "until curl -m 3 -sI http://{{ include "hadoop.fullname" . }}-hdfs-nn-{{ sub .Values.hdfs.nameNode.replicas 1 }}.{{ include "hadoop.fullname" . }}-hdfs-nn.{{ .Release.Namespace }}.svc.cluster.local:9870 | egrep --silent 'HTTP/1.1 200 OK|HTTP/1.1 302 Found'; do echo waiting for nn; sleep 1; done"]
      containers:
      - name: hive-metastore
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - "/bin/bash"
        - "/opt/apache/tmp/hadoop-config/bootstrap.sh"
        - "-d"
        resources:
{{ toYaml .Values.hive.metastore.resources | indent 10 }}
        readinessProbe:
          tcpSocket:
            port: 9083
          initialDelaySeconds: 10
          timeoutSeconds: 2
        livenessProbe:
          tcpSocket:
            port: 9083
          initialDelaySeconds: 10
          timeoutSeconds: 2
        volumeMounts:
        - name: hadoop-config
          mountPath: /opt/apache/tmp/hadoop-config
        - name: hive-config
          mountPath: /opt/apache/hive/conf
        securityContext:
          runAsUser: {{ .Values.securityContext.runAsUser }}
          privileged: {{ .Values.securityContext.privileged }}
      volumes:
      - name: hadoop-config
        configMap:
          name: {{ include "hadoop.fullname" . }}
      - name: hive-config
        configMap:
          name: {{ include "hadoop.fullname" . }}-hive
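Once the metastore pod is running, one way to confirm it can reach MySQL and that the schema was initialized is Hive's schematool. A sketch, assuming the pod name this chart generates and the install paths from the Dockerfile above:

kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hive-metastore-0 -- \
  /opt/apache/hive/bin/schematool -dbType mysql -info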
3. Service

hadoop/templates/hive/metastore-svc.yaml

# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-metastore
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
spec:
  ports:
  - name: metastore
    port: {{ .Values.service.hive.metastore.port }}
    nodePort: {{ .Values.service.hive.metastore.nodePort }}
  type: {{ .Values.service.hive.metastore.type }}
  selector:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-metastore
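Since hive-site.xml points clients at the metastore by its service DNS name, it can help to verify the record resolves inside the cluster. A sketch with a throwaway busybox pod (the service name assumes the release and namespace used in this article):

kubectl run -n hadoop-ha dns-check --rm -it --restart=Never --image=busybox:1.35 -- \
  nslookup hadoop-ha-hadoop-hive-metastore.hadoop-ha.svc.cluster.local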
3) Add the HiveServer2 service orchestration

1. Controller

hadoop/templates/hive/hiveserver2-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/hadoop-configmap.yaml") . | sha256sum }}
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
spec:
  serviceName: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hadoop.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: hive-hiveserver2
  replicas: {{ .Values.hive.hiveserver2.replicas }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hadoop.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: hive-hiveserver2
    spec:
      affinity:
        podAntiAffinity:
        {{- if eq .Values.antiAffinity "hard" }}
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: {{ include "hadoop.name" . }}
                app.kubernetes.io/instance: {{ .Release.Name }}
                app.kubernetes.io/component: hive-hiveserver2
        {{- else if eq .Values.antiAffinity "soft" }}
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 5
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: {{ include "hadoop.name" . }}
                  app.kubernetes.io/instance: {{ .Release.Name }}
                  app.kubernetes.io/component: hive-hiveserver2
        {{- end }}
      terminationGracePeriodSeconds: 0
      initContainers:
      - name: wait-hive-metastore
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ["sh", "-c", "until (echo 'q') | telnet -e 'q' {{ include "hadoop.fullname" . }}-hive-metastore.{{ .Release.Namespace }}.svc.cluster.local {{ .Values.service.hive.metastore.port }} >/dev/null 2>&1; do echo waiting for hive metastore; sleep 1; done"]
      containers:
      - name: hive-hiveserver2
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - "/bin/bash"
        - "/opt/apache/tmp/hadoop-config/bootstrap.sh"
        - "-d"
        resources:
{{ toYaml .Values.hive.hiveserver2.resources | indent 10 }}
        readinessProbe:
          tcpSocket:
            port: 10000
          initialDelaySeconds: 10
          timeoutSeconds: 2
        livenessProbe:
          tcpSocket:
            port: 10000
          initialDelaySeconds: 10
          timeoutSeconds: 2
        volumeMounts:
        - name: hadoop-config
          mountPath: /opt/apache/tmp/hadoop-config
        - name: hive-config
          mountPath: /opt/apache/hive/conf
        securityContext:
          runAsUser: {{ .Values.securityContext.runAsUser }}
          privileged: {{ .Values.securityContext.privileged }}
      volumes:
      - name: hadoop-config
        configMap:
          name: {{ include "hadoop.fullname" . }}
      - name: hive-config
        configMap:
          name: {{ include "hadoop.fullname" . }}-hive
2. Service

hadoop/templates/hive/hiveserver2-svc.yaml

# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hadoop.fullname" . }}-hive-hiveserver2
  labels:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    helm.sh/chart: {{ include "hadoop.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
spec:
  ports:
  - name: hiveserver2
    port: {{ .Values.service.hive.hiveserver2.port }}
    nodePort: {{ .Values.service.hive.hiveserver2.nodePort }}
  type: {{ .Values.service.hive.hiveserver2.type }}
  selector:
    app.kubernetes.io/name: {{ include "hadoop.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: hive-hiveserver2
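With the Service in place, HiveServer2 is reachable through the NodePort (30000 in the values.yaml below) or via a local port-forward. A quick connectivity check, assuming the generated service name:

kubectl port-forward -n hadoop-ha svc/hadoop-ha-hadoop-hive-hiveserver2 10000:10000 &
beeline -u jdbc:hive2://localhost:10000 -n admin -e "show databases;"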
4) Update values.yaml

hadoop/values.yaml

image:
  repository: myharbor.com/bigdata/hadoop-hive
  tag: v3.3.2-3.1.2
  pullPolicy: IfNotPresent

# The version of the hadoop libraries being used in the image.
hadoopVersion: 3.3.2
logLevel: INFO

# Select antiAffinity as either hard or soft, default is soft
antiAffinity: "soft"

hdfs:
  nameNode:
    replicas: 2
    pdbMinAvailable: 1
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  dataNode:
    # Will be used as dfs.datanode.hostname
    # You still need to set up services + ingress for every DN
    # Datanodes will expect to
    externalHostname: example.com
    externalDataPortRangeStart: 9866
    externalHTTPPortRangeStart: 9864
    replicas: 3
    pdbMinAvailable: 1
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  webhdfs:
    enabled: true

  jounralNode:
    replicas: 3
    pdbMinAvailable: 1
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

hive:
  metastore:
    replicas: 1
    pdbMinAvailable: 1
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

  hiveserver2:
    replicas: 1
    pdbMinAvailable: 1
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "1024Mi"
        cpu: "500m"

yarn:
  resourceManager:
    pdbMinAvailable: 1
    replicas: 2
    resources:
      requests:
        memory: "256Mi"
        cpu: "10m"
      limits:
        memory: "2048Mi"
        cpu: "2000m"

  nodeManager:
    pdbMinAvailable: 1
    # The number of YARN NodeManager instances.
    replicas: 1
    # Create statefulsets in parallel (K8S 1.7+)
    parallelCreate: false
    # CPU and memory resources allocated to each node manager pod.
    # This should be tuned to fit your workload.
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
      limits:
        memory: "2048Mi"
        cpu: "1000m"

persistence:
  nameNode:
    enabled: true
    storageClass: "hadoop-ha-nn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-nn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/nn/data/data1"
    - name: hadoop-ha-nn-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/nn/data/data1"

  dataNode:
    enabled: true
    enabledStorageClass: false
    storageClass: "hadoop-ha-dn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-dn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-1
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-2
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    - name: hadoop-ha-dn-3
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-4
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-5
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    - name: hadoop-ha-dn-6
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data1"
    - name: hadoop-ha-dn-7
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data2"
    - name: hadoop-ha-dn-8
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/dn/data/data3"
    volumes:
    - name: dfs1
      mountPath: /opt/apache/hdfs/datanode1
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data1
    - name: dfs2
      mountPath: /opt/apache/hdfs/datanode2
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data2
    - name: dfs3
      mountPath: /opt/apache/hdfs/datanode3
      hostPath: /opt/bigdata/servers/hadoop-ha/dn/data/data3

  journalNode:
    enabled: true
    storageClass: "hadoop-ha-jn-local-storage"
    accessMode: ReadWriteOnce
    size: 1Gi
    local:
    - name: hadoop-ha-jn-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    - name: hadoop-ha-jn-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    - name: hadoop-ha-jn-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/hadoop-ha/jn/data/data1"
    volumes:
    - name: jn
      mountPath: /opt/apache/hdfs/journalnode

service:
  nameNode:
    type: NodePort
    ports:
      dfs: 9000
      webhdfs: 9870
    nodePorts:
      dfs: 30900
      webhdfs: 30870
  nameNode1:
    type: NodePort
    ports:
      webhdfs: 9870
    nodePorts:
      webhdfs: 31870
  nameNode2:
    type: NodePort
    ports:
      webhdfs: 9870
    nodePorts:
      webhdfs: 31871
  dataNode:
    type: NodePort
    ports:
      webhdfs: 9864
    nodePorts:
      webhdfs: 30864
  resourceManager:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 30088
  resourceManager1:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 31088
  resourceManager2:
    type: NodePort
    ports:
      web: 8088
    nodePorts:
      web: 31089
  journalNode:
    type: ClusterIP
    ports:
      jn: 8485
    nodePorts:
      jn: ""
  hive:
    metastore:
      type: NodePort
      port: 9083
      nodePort: 31183
    hiveserver2:
      type: NodePort
      port: 10000
      nodePort: 30000

securityContext:
  runAsUser: 9999
  privileged: true

5) Deploy

# Fresh install
helm install hadoop-ha ./hadoop -n hadoop-ha --create-namespace
# Upgrade an existing release
helm upgrade hadoop-ha ./hadoop -n hadoop-ha
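Individual values can also be overridden at upgrade time without editing the file; the flag path mirrors the values.yaml structure above. For example, to run a second HiveServer2 instance:

helm upgrade hadoop-ha ./hadoop -n hadoop-ha --set hive.hiveserver2.replicas=2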
NOTES

NAME: hadoop-ha
LAST DEPLOYED: Thu Sep 29 23:42:02 2022
NAMESPACE: hadoop-ha
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. You can check the status of HDFS by running this command:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-hdfs-nn-0 -- /opt/hadoop/bin/hdfs dfsadmin -report

2. You can list the yarn nodes by running this command:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-rm-0 -- /opt/hadoop/bin/yarn node -list

3. Create a port-forward to the yarn resource manager UI:
   kubectl port-forward -n hadoop-ha hadoop-ha-hadoop-yarn-rm-0 8088:8088

   Then open the ui in your browser:

   open http://localhost:8088

4. You can run included hadoop tests like this:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-nm-0 -- /opt/hadoop/bin/hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.2-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 128MB -resFile /tmp/TestDFSIOwrite.txt

5. You can list the mapreduce jobs like this:
   kubectl exec -n hadoop-ha -it hadoop-ha-hadoop-yarn-rm-0 -- /opt/hadoop/bin/mapred job -list

6. This chart can also be used with the zeppelin chart
   helm install --namespace hadoop-ha --set hadoop.useConfigMap=true,hadoop.configMapName=hadoop-ha-hadoop stable/zeppelin

7. You can scale the number of yarn nodes like this:
   helm upgrade hadoop-ha --set yarn.nodeManager.replicas=4 stable/hadoop

   Make sure to update the values.yaml if you want to make this permanent.
6) Test and verify
Check the pods and services:

kubectl get pods,svc -n hadoop-ha -owide
Test with beeline:

beeline -u jdbc:hive2://localhost:10000 -n admin

create database test;

CREATE TABLE IF NOT EXISTS test.person_1 (
  id INT COMMENT "ID",
  name STRING COMMENT "name",
  age INT COMMENT "age",
  likes ARRAY<STRING> COMMENT "hobbies",
  address MAP<STRING,STRING> COMMENT "address"
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ","
COLLECTION ITEMS TERMINATED BY "-"
MAP KEYS TERMINATED BY ":"
LINES TERMINATED BY "\n";
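To verify the table end to end, you can load a delimited file that matches the separators declared above and read it back. A sketch with made-up sample rows, run on the HiveServer2 host (LOAD DATA LOCAL resolves paths on the server side):

# Fields ",", collection items "-", map keys ":"
cat > /tmp/person.txt <<EOF
1,xiaoming,16,reading-swimming,home:beijing-school:shanghai
2,xiaohong,18,music-dance,home:guangzhou
EOF

beeline -u jdbc:hive2://localhost:10000 -n admin -e "
LOAD DATA LOCAL INPATH '/tmp/person.txt' OVERWRITE INTO TABLE test.person_1;
SELECT * FROM test.person_1;"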
7) Uninstall

helm uninstall hadoop-ha -n hadoop-ha

kubectl delete pod -n hadoop-ha `kubectl get pod -n hadoop-ha | awk 'NR>1{print $1}'` --force
kubectl patch ns hadoop-ha -p '{"metadata":{"finalizers":null}}'
kubectl delete ns hadoop-ha --force

rm -fr /opt/bigdata/servers/hadoop-ha/{nn,dn,jn}/data/data{1..3}/*
Git repository: https://gitee.com/hadoop-bigdata/hadoop-ha-on-k8s
Only the Hive-related parts of the orchestration are shown here; if anything is unclear, feel free to leave me a comment. The corresponding changes have also been pushed to the Git repository above, so grab them if you need them. That wraps up the Hive orchestration and deployment; more cloud-native + big data tutorials will follow, so stay tuned.
