Configuration
Collector Scaling
By default, only a single Collector instance is deployed. When the existing Collector resources become insufficient to handle the monitoring workload, scaling is required.
Prior to scaling, ensure that the cluster has sufficient free resources to meet the minimum requirements for the additional Collectors: each Collector requires 2 CPU cores and 8 GB of memory.
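As a quick capacity check, the minimum headroom for N Collectors can be computed from the per-instance requirement above (the replica count here is an example):

```shell
#!/bin/sh
# Minimum free resources needed before scaling to N Collectors,
# at 2 CPU cores and 8 GiB of memory per instance (example: N=3).
replicas=3
cpu_needed=$((replicas * 2))
mem_needed=$((replicas * 8))
echo "${cpu_needed} CPU cores, ${mem_needed} GiB memory"
```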
Recommended Method
Scale the Collector by modifying the replicas count for tingyun-collector in the tingyunagent.yaml file used for Agent installation.
---
# Required - collector
# collector StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tingyun-collector
  namespace: tingyun
spec:
  replicas: 1 # Modify this value
  serviceName: tingyun-collector
Then, apply the updated YAML file.
kubectl apply -f tingyunagent.yaml
Command Line Method
If the original installation YAML file is no longer available, use the following command to scale the Collector.
kubectl edit StatefulSet tingyun-collector -n tingyun
After entering the above command, you will see a screen displaying the following content. Locate and modify the replicas value for tingyun-collector, then save and exit.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tingyun-collector
  namespace: tingyun
  resourceVersion: "29435"
  uid: e078cf79-ee84-43cd-a3fb-a9d6efe9c878
spec:
  podManagementPolicy: OrderedReady
  replicas: 1 # Modify this value
Enabling APM Aggressive Mode
On the Deployment Status page, navigate to the UniAgents Operation tab and click Create. In the configuration page that appears, select Kubernetes as the deployment environment. Within the Deploy section, click Set Custom Options to expand additional settings. Then, click the APM Aggressive Mode button to enable APM aggressive mode.
Enabling Embedding Mode with Custom Matching Rules
By default, you must first label the Namespace and then label the Pods to embed the Agent into the target Pods.
However, in some customized Kubernetes management platforms, there is no means to label specific Pods. In such cases, you can instrument specific Pods by customizing the Webhook's object selector.
First, follow the steps for Enabling APM Aggressive Mode by setting apm_aggressive to true in the tingyun-common-config within the tingyunagent.yaml file.
Then, modify the objectSelector of the tingyun-agent-webhook. The matching rules for the objectSelector can be adjusted based on your actual requirements.
---
# APM Agent - Webhook, requires cluster administrator privileges to install
# register webhook
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: tingyun-agent-webhook
  labels:
    app: tingyun-webhook
webhooks:
  - name: tingyun-webhook-service.tingyun.svc
    namespaceSelector:
      matchLabels:
        tingyun-injection: enabled
    objectSelector: # Added part
      matchExpressions: # Added part
        - key: app # Modify here
          operator: In # Added part
          values: # Added part
            - "Match Value 1" # Modify here
            - "Match Value 2" # Modify here
Then, apply the updated YAML file.
kubectl apply -f tingyunagent.yaml
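The objectSelector admits a Pod for injection when its app label is In the listed values. A minimal local sketch of that matching rule (the label values are the placeholders from the YAML above, not real application names):

```shell
#!/bin/sh
# Sketch of the objectSelector "In" operator: a Pod matches when its
# "app" label equals one of the configured values.
matches() {
  for v in "Match Value 1" "Match Value 2"; do
    [ "$1" = "$v" ] && return 0
  done
  return 1
}
matches "Match Value 1" && result1=selected
matches "some-other-app" || result2=skipped
echo "$result1 $result2"
```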
Modifying Agent Configuration Files in Pods
By default, Agent configuration files for all Pods are the same, as these files are copied from the original Agent image. During Pod initialization, the Agent provides a shell script interface. You can modify Agent configuration files by editing the pod-init.sh script in the ConfigMap.
Configuration changes take effect dynamically but with a certain delay. This delay depends on the kubelet startup parameter --sync-frequency, which defaults to 1 minute. Therefore, after updating the ConfigMap content, you need to wait approximately 2 minutes.
Recommended Method
Modify the script in the YAML file used during Agent installation: edit pod-init.sh within tingyun-injector-config in the tingyunagent.yaml file.
# APM Agent - Agent configuration file
# configmap.yaml
# tingyun-agent configuration items
apiVersion: v1
kind: ConfigMap
metadata:
  name: tingyun-injector-config
  namespace: tingyun
data:
  pod-init.sh: |
    #!/bin/sh
    modifyConfig() {
      sed -i -e "/$2=/d" "$1"
      echo "$2=$3" >> "$1"
      grep "$2=" "$1"
    }
    echo "TINGYUN_APP_NAME=${TINGYUN_APP_NAME}"
    if [ "${TINGYUN_APP_NAME}" = "www.tomcat.com(test)" ]; then
      modifyConfig "${JAVA_AGENT_CONF}" "agent_log_level" "debug"
    fi
Then, apply the updated YAML file.
kubectl apply -f tingyunagent.yaml
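The modifyConfig helper in pod-init.sh removes any existing line for the key and appends the new value. Its behavior can be checked locally against a scratch file standing in for the real agent configuration file:

```shell
#!/bin/sh
# Same helper as in pod-init.sh: delete any existing "key=..." line,
# append the new setting, then print it for the init-container log.
modifyConfig() {
  sed -i -e "/$2=/d" "$1"
  echo "$2=$3" >> "$1"
  grep "$2=" "$1"
}

conf=$(mktemp)
printf 'agent_log_level=info\nagent_enabled=true\n' > "$conf"
modifyConfig "$conf" "agent_log_level" "debug"
```

Running this prints agent_log_level=debug, and the other settings in the file are left untouched.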
Command Line Method
If the original installation YAML file is no longer available, use the following command to modify the script.
kubectl edit configmap tingyun-injector-config -n tingyun
This opens the ConfigMap in an editor, allowing you to modify the pod-init.sh script.
Available environment variables in the script include:
| Variable Name | Corresponding File |
|---|---|
| ONEAGENT_CONF | oneagent.conf |
| JAVA_AGENT_CONF | java.conf |
| PHP_AGENT_CONF | php.conf |
| NETCORE_AGENT_CONF | netcore.conf |
| NGINX_AGENT_CONF | nginx.conf |
| PYTHON_AGENT_CONF | python.conf |
| NODEJS_AGENT_CONF | nodejs.conf |
| GO_AGENT_CONF | go.conf |
| BLACK_LIST_CONF | blacklist.txt |
| INTERCEPTOR_CONF | interceptor.conf |
For example, to enable debug logs for Java Agent in a Pod with Namespace name "test" and Deployment name "mysite", configure as follows:
pod-init.sh: |
  #!/bin/sh
  modifyConfig() {
    sed -i -e "/$2=/d" "$1"
    echo "$2=$3" >> "$1"
    grep "$2=" "$1"
  }
  if [ "${TINGYUN_APP_NAME}" = "mysite(test)" ]; then
    modifyConfig "${JAVA_AGENT_CONF}" "agent_log_level" "debug"
  fi
For example, to enable debug logs for Java Agent in all Pods, configure as follows:
pod-init.sh: |
  #!/bin/sh
  modifyConfig() {
    sed -i -e "/$2=/d" "$1"
    echo "$2=$3" >> "$1"
    grep "$2=" "$1"
  }
  modifyConfig "${JAVA_AGENT_CONF}" "agent_log_level" "debug"
For example, the nginx Agent names applications as server_name:port by default. To change the nginx Agent application name to the same workload(namespace) format used by Java applications, configure as follows:
pod-init.sh: |
  #!/bin/sh
  modifyConfig() {
    sed -i -e "/$2=/d" "$1"
    echo "$2=$3" >> "$1"
    grep "$2=" "$1"
  }
  modifyConfig "${NGINX_AGENT_CONF}" "naming_mode" "5"
  modifyConfig "${NGINX_AGENT_CONF}" "host_ip" "${TINGYUN_APP_NAME}"
Viewing pod-init.sh Logs During Pod Startup
The pod-init.sh logs are located within the logs of the Pod's init container.
kubectl logs [pod name] -c tingyun-oneagent -n [application namespace]
How Does the Agent's pod-init.sh Script Ensure It Does Not Damage Customer Application Pods?
The container where pod-init.sh runs is not the container where the customer application runs; it is an independently started container. Whatever destructive commands a script executes, even rm -rf /, they run inside that independent container, which acts as a security sandbox, so the customer application is never affected.
The pod-init.sh script is the only entry point for modifying the Agent. After the script runs, the Agent and configuration files are copied to the customer application Pod.
Limiting the Number of Pod Injections within a Single Workload
The limiting feature is only supported in Kubernetes 1.15 and above.
By default, if injection is enabled for a specific Workload, all Pods under that Workload will be injected. The Agent provides an optional injection quantity limiting feature. When this feature is enabled, only a specified number of Pods will have the Agent injected.
The feature works by reading the Pod list from the API Server and checking Pod labels to determine whether injection has been performed. Therefore, injection time and resource consumption on the Webhook will be higher when the limit is enabled compared to when there is no limit.
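In other words, for each new Pod the Webhook compares the number of already-injected Pods in the Workload against the applicable max. A sketch of that decision (the real count comes from the API Server; here it is a plain variable):

```shell
#!/bin/sh
# Sketch of the injection-limit decision: inject only while the count of
# already-injected Pods in the Workload is below the rule's max.
decide() {
  already_injected=$1
  rule_max=$2
  if [ "$already_injected" -lt "$rule_max" ]; then
    echo inject
  else
    echo skip
  fi
}
first=$(decide 0 1)
second=$(decide 1 1)
echo "$first $second"
```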
Configuration changes take effect dynamically but with a certain delay. This delay depends on the kubelet startup parameter --sync-frequency, which defaults to 1 minute. Therefore, after updating the ConfigMap content, you need to wait approximately 2 minutes.
Recommended Method
Enable by modifying the YAML file used during Agent installation.
Modify policy.yaml within tingyun-injector-config in the tingyunagent.yaml file, and change limit: disabled to limit: enabled.
maxcountPerWorkload limits the maximum number of injections per Workload. If you need to customize the injection quantity for specific Workloads, modify the limiting rules under rules, whose fields are defined as follows:
- namespace: The Namespace where the Workload is deployed.
- kind: The type of Workload. Common types: deployment, daemonset, statefulset.
- name: The name of the Workload.
- max: The maximum number of Pod injections in the Workload.
For example:
# APM Agent - Agent configuration file
# configmap.yaml
# tingyun-agent configuration items
apiVersion: v1
kind: ConfigMap
metadata:
  name: tingyun-injector-config
  namespace: tingyun
data:
  policy.yaml: |
    limit: enabled
    maxcountPerWorkload: 99
    rules:
      - namespace: test
        kind: deployment
        name: www.tomcat.com
        max: 1
      - namespace: test
        kind: daemonset
        name: www.tomcat.com
        max: 1
      - namespace: test
        kind: statefulset
        name: www.tomcat.com
        max: 1
Then, apply the updated YAML file.
kubectl apply -f tingyunagent.yaml
Command Line Method
If the original installation YAML file is no longer available, use the following command to enable the feature.
kubectl edit configmap tingyun-injector-config -n tingyun
You will see an interface similar to the following. Change limit: disabled to limit: enabled in policy.yaml, add your limitation rules, then save and exit.
apiVersion: v1
data:
  policy.yaml: |
    limit: enabled
    maxcountPerWorkload: 99
    rules:
      - namespace: test
        kind: deployment
        name: www.tomcat.com
        max: 1
      - namespace: test
        kind: daemonset
        name: www.tomcat.com
        max: 1
      - namespace: test
        kind: statefulset
        name: www.tomcat.com
        max: 1
Pulling Specific Version Agent Images for Specific Workloads
The feature of pulling specific version Agents is only supported in Kubernetes Agent 2.4.1.0 (injector image version 2.4.3.0) and above.
By default, the Agent version embedded in Workloads is the image version specified in the initContainers section of tingyun-injector-config within the tingyunagent.yaml file.
For example, in the following configuration, the default Agent version pulled is 2.3.0.0, from the repository goinline.cn/tingyunagent/oneagent:2.3.0.0.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tingyun-injector-config
  namespace: tingyun
data:
  mutate.yaml: |
    initContainers:
      - name: tingyun-oneagent
        image: goinline.cn/tingyunagent/oneagent:2.3.0.0
        imagePullPolicy: IfNotPresent
When you need to pull specific version Agent images for specific Workloads, modify the labels in the Workload YAML and set tingyun-oneagent-version to the specified version number.
Note: If pulling a specific version from a private image repository, make sure this version exists in the image repository. Otherwise, the Workload will fail to start.
Taking a Deployment deployed with Tomcat as an example, configure as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo1
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo1
  template:
    metadata:
      labels:
        app: demo1 # Must match the selector above
        tingyun-agent-injected: "true"
        tingyun-oneagent-version: "2.3.2.0" # Specify Agent version as 2.3.2.0
    spec:
      containers:
        - name: demo1
          image: tomcat
With tingyun-oneagent-version set to 2.3.2.0, this Workload will pull the Agent image goinline.cn/tingyunagent/oneagent:2.3.2.0.
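The label only changes the image tag; the repository still comes from the initContainers entry in mutate.yaml. A sketch of how the final image reference is assembled, assuming simple repository:tag concatenation:

```shell
#!/bin/sh
# Sketch: repository from mutate.yaml plus the tag taken from the Pod's
# tingyun-oneagent-version label.
repo='goinline.cn/tingyunagent/oneagent'
version_label='2.3.2.0'
image="${repo}:${version_label}"
echo "$image"
```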
Injector image version 2.4.6.0 and above supports specifying Agent images for specific Workloads in the Agent's policy.yaml. The configuration method is as follows:
Modify the ConfigMap named tingyun-injector-config in the Kubernetes Agent, and specify specific Workloads to load corresponding Agent images in the agentVersions list in policy.yaml.
The appName format is workload name (Namespace name), and oneagentTag is the Tag name of the oneagent image.
For example, with the following configuration, the Workload named busybox will pull the Agent image oneagent:2.4.3.0.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tingyun-injector-config
  namespace: ${NameSpace}
data:
  policy.yaml: |
    agentVersions:
      - appName: busybox(test)
        oneagentTag: 2.4.3.0
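The appName key the injector matches against follows the workload(Namespace) format described above; for the busybox Workload in the test Namespace:

```shell
#!/bin/sh
# Sketch of the appName matching key: "workload name(Namespace name)".
workload='busybox'
namespace='test'
app_name="${workload}(${namespace})"
echo "$app_name"
```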
Note: If pulling a specific version image from a private image repository, make sure this version exists in the image repository. Otherwise, the Workload will fail to start.
Obtaining Application Names from Business Pod Labels
Only supported in Kubernetes Agent 2.4.1.0 (injector image version 2.4.3.0) and above.
By default, the application name is the Workload name. However, on some Kubernetes-based customized platforms, when deploying through web pages, the Workload name will be appended with version numbers, dates, etc., causing the application name to change every time it is redeployed. In this case, it might be necessary to use a specific label from the Pod as the application name. The configuration method is as follows:
Modify the YAML of tingyun-agent and add the environment variable NAMING_BY_LABEL with the value being the specific label name in the Pod.
For example:
Suppose describing a business Pod produces the following output, and you want to use the applicationName label as the application name:
kubectl describe pod boe-fcc-expense-v1-1439-6349f45-lzdx6
Start Time:   Wed, 12 Apr 2023 10:22:03 +0800
Labels:       applicationName=boe-fcc-expense
              versionName=boe-fcc-expense-v1-1439
              pod-template-hash=856c4586d5
The configuration method is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tingyun-agent
  namespace: tingyun
  labels:
    app: tingyun-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tingyun-agent
  template:
    metadata:
      labels:
        app: tingyun-agent
    spec:
      containers:
        - name: tingyun-agent
          image: tingyunagent/injector:2.4.3.0
          imagePullPolicy: Always
          args:
            - -commonCfgFile=/etc/tingyun/conf/tingyun-common.yaml
            - -mutateCfgFile=/etc/tingyun/injector/mutate.yaml
            - -podInitFile=/etc/tingyun/injector/pod-init.sh
            - -policyFile=/etc/tingyun/injector/policy.yaml
            - -tlsCertFile=/etc/tingyun/certs/cert.pem
            - -tlsKeyFile=/etc/tingyun/certs/key.pem
            - -version=v1
          env:
            - name: SECURITY_CONTEXT_ENABLED
              value: 'false'
            - name: NAMING_BY_LABEL # Add environment variable NAMING_BY_LABEL
              value: 'applicationName' # Add environment variable value as applicationName
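Conceptually, the injector looks up the configured label key on the business Pod and uses its value as the application name. A local sketch of that lookup against the example Pod's labels (the lookup mechanism here is illustrative, not the injector's actual code):

```shell
#!/bin/sh
# Sketch: resolve the app name from a Pod label, keyed by NAMING_BY_LABEL.
naming_by_label='applicationName'
labels='applicationName=boe-fcc-expense
versionName=boe-fcc-expense-v1-1439
pod-template-hash=856c4586d5'
app_name=$(printf '%s\n' "$labels" | sed -n "s/^${naming_by_label}=//p")
echo "$app_name"
```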
Modifying Application Name Format
Only supported in Kubernetes Agent 2.5.3.0 (injector image version 2.6.0.0) and above.
Set a custom name format by modifying the environment variable APP_NAMING_RULE. The following variable replacements are currently supported:
- REPLACE_WITH_WORKLOAD_NAME: Will be replaced with the Workload name
- REPLACE_WITH_NAMESPACE: Will be replaced with the Namespace name
If the environment variable APP_NAMING_RULE is not defined, the default value is REPLACE_WITH_WORKLOAD_NAME(REPLACE_WITH_NAMESPACE).
For example:
In a test environment, to modify the application name format to WorkloadName-dev(NamespaceName), configure as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tingyun-agent
  namespace: tingyun
  labels:
    app: tingyun-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tingyun-agent
  template:
    metadata:
      labels:
        app: tingyun-agent
    spec:
      containers:
        - name: tingyun-agent
          image: tingyunagent/injector:2.6.0.0
          imagePullPolicy: Always
          args:
            - -commonCfgFile=/etc/tingyun/conf/tingyun-common.yaml
            - -mutateCfgFile=/etc/tingyun/injector/mutate.yaml
            - -podInitFile=/etc/tingyun/injector/pod-init.sh
            - -policyFile=/etc/tingyun/injector/policy.yaml
            - -tlsCertFile=/etc/tingyun/certs/cert.pem
            - -tlsKeyFile=/etc/tingyun/certs/key.pem
            - -version=v1
          env:
            - name: SECURITY_CONTEXT_ENABLED
              value: 'false'
            - name: APP_NAMING_RULE # Add environment variable APP_NAMING_RULE
              value: 'REPLACE_WITH_WORKLOAD_NAME-dev(REPLACE_WITH_NAMESPACE)'
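The resulting name can be previewed locally, assuming the rule is applied by plain token substitution of the two placeholders (workload and namespace names here are illustrative):

```shell
#!/bin/sh
# Sketch of the APP_NAMING_RULE substitution for workload "mysite"
# in namespace "test".
rule='REPLACE_WITH_WORKLOAD_NAME-dev(REPLACE_WITH_NAMESPACE)'
workload='mysite'
namespace='test'
app_name=$(printf '%s' "$rule" | sed \
  -e "s/REPLACE_WITH_WORKLOAD_NAME/${workload}/" \
  -e "s/REPLACE_WITH_NAMESPACE/${namespace}/")
echo "$app_name"
```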
Mapping Application Namespace Names to Tingyun Business System Names
Only supported in Kubernetes Agent 4.2.0 (injector image version 4.0.1) and above.
Set a custom name format by modifying the environment variable TINGYUN_DEFAULT_BUSINESS_SYSTEM in mutate.yaml. The following variable replacement is currently supported:
- REPLACE_WITH_NAMESPACE: Will be replaced with the application Namespace name
For example:
mutate.yaml: |
  initContainers:
    - name: tingyun-oneagent
      env:
        - name: TINGYUN_DEFAULT_BUSINESS_SYSTEM
          value: REPLACE_WITH_NAMESPACE # Replace with application Namespace name
Modifying Collector Configuration Files
Modify collector.properties within tingyun-collector-config in the tingyunagent.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tingyun-collector-config
  namespace: tingyun
data:
  collector.properties: |
    #
    # Listening port for the Collector service.
    # Default value: 7665
    collector.listen=7665
    #
    # Whether to send data to DC via HTTPS.
    # When dc.ssl is set to true, Collector will send data to DC via HTTPS.
    # Default value: false
    dc.ssl=true
    #
    # Whether to enable data audit mode.
    # This setting is dynamic. You do not need to restart Collector to change it.
    # When audit_mode is set to true, all data entering and leaving Collector will be recorded in log files.
    # This feature consumes CPU and should be disabled in production environments.
    # Default value: false
    collector.audit_mode=false
Then, apply the updated YAML file.
kubectl apply -f tingyunagent.yaml
Rebuild the Collector pod.
kubectl delete pod tingyun-collector-0 -n tingyun
Modifying Collector IP
Only supported in Kubernetes Agent 2.4.3.0 (Collector image version 3.6.6.3) and above.
When deploying Collector in a Kubernetes cluster, Collector uses the Pod domain name (tingyun-collector-0.tingyun-collector-service.tingyun) as the address for communicating with Agents by default.
When the domain name mechanism in the Kubernetes cluster does not work properly, you can configure the Collector Pod environment variable TINGYUN_POD_IP to a specified IP. Collector will use the IP address specified in the environment variable TINGYUN_POD_IP as the address for communicating with Agents.
Modify the StatefulSet description named tingyun-collector in tingyunagent.yaml:
- Delete the tingyun-collector env variables TINGYUN_SERVICE_NAME, TINGYUN_POD_NAME, and TINGYUN_POD_NAMESPACE.
- Add a tingyun-collector env variable named TINGYUN_POD_IP whose value is taken from status.podIP.
The modified example is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tingyun-collector
  namespace: tingyun
spec:
  replicas: 1
  serviceName: tingyun-collector-service
  selector:
    matchLabels:
      vendor: tingyun
      component: collector
  template:
    metadata:
      labels:
        vendor: tingyun
        component: collector
      name: tingyun-collector
    spec:
      containers:
        - name: agent-collector
          image: tingyunagent/collector:3.6.6.3
          resources:
            limits:
              cpu: "4"
              memory: "16Gi"
            requests:
              cpu: "2"
              memory: "8Gi"
          env:
            - name: RUN_IN_CONTAINER
              valueFrom:
                configMapKeyRef:
                  name: tingyun-collector-config
                  key: run_in_container
            - name: TINGYUN_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: config-vol
              mountPath: /tmp/cfg.yml
              subPath: cfg.yml
            - name: config-vol
              mountPath: /opt/tingyun-collector/infra/log4j.yml
              subPath: log4j.yml
            - name: config-vol
              mountPath: /opt/tingyun-collector/infra/rules.yml
              subPath: rules.yml
            - name: config-vol
              mountPath: /tmp/collector.properties
              subPath: collector.properties
            - name: config-vol
              mountPath: /opt/tingyun-collector/apm/ops-env.sh
              subPath: ops-env.sh
            - name: config-vol
              mountPath: /opt/tingyun-collector/infra/component/tingyun-env.sh
              subPath: tingyun-env.sh
            - name: config-vol
              mountPath: /opt/tingyun-collector/infra/component/oracle_exporter/oracle-env.sh
              subPath: oracle-env.sh
            - name: common-config-vol
              mountPath: /etc/tingyun/conf
              readOnly: true
      volumes:
        - name: config-vol
          configMap:
            name: tingyun-collector-config
        - name: common-config-vol
          configMap:
            name: tingyun-common-config