Starting a Conversation

This feature is still under development, but we’d love for you to try it.
All content is generated by AI. Please verify the output carefully.

Guanyun AI combines state-of-the-art large language models (LLMs) with Guanyun Platform. With AI-powered analysis, it provides deeper incident insights and faster resolution guidance, helping you identify key issues behind system behavior and solve technical problems more efficiently.

Using the AI Interface

Starting a Conversation

  • Option 1: Log in to Guanyun Platform, open the landing page, enter your question in the AI input box, and press Enter to open the Guanyun AI chat panel.

Landing page interface

  • Option 2: On any feature page, click the Guanyun AI icon in the upper-right corner to open the chat panel.

Location of the Guanyun AI icon

Viewing Historical Conversations

Click Historical Conversation List. On the Conversation History page, you can delete or search past conversations, or resume any previous thread.

Conversation history

Starting a New Conversation

Click New Conversation to start a fresh conversation and reduce answer drift caused by overly long context.

New conversation

Introducing Q&A Features

Knowledge Base Q&A

With knowledge base training, Guanyun AI can communicate with you in natural language. Describe your issue in the chat box, and Guanyun AI will automatically match relevant content and provide recommended steps and reference documents.

Try asking:

  • Inspect all applications
  • How to deploy UniAgent on a Linux server
  • What is the purpose of a full-stack snapshot

Current supported scenarios:

  1. Operations knowledge Q&A (for example, Oracle and MySQL administration, troubleshooting, and tool usage).
  2. Natural-language metric queries across APM, RUM, and host metrics.
  3. AI-assisted analysis for response time, errors, and other signals across applications, instances, and requests.
  4. Script generation based on your requirements (Shell scripts).
  5. In-product documentation Q&A for platform features.

Example:

Knowledge base Q&A
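Scenario 4 above covers shell-script generation. As a purely hypothetical illustration (the actual output depends on the model and your prompt), asking something like "Write a script that warns when any filesystem exceeds 80% usage" might produce a script along these lines:

```shell
#!/bin/sh
# Hypothetical example of a script Guanyun AI might generate.
# Warns when any mounted filesystem exceeds the usage threshold.
THRESHOLD=80

# check_usage takes a percentage like "91%" and a mount point,
# and prints a warning if the percentage exceeds THRESHOLD.
check_usage() {
  pct=${1%\%}
  if [ "$pct" -gt "$THRESHOLD" ]; then
    echo "WARNING: $2 is at ${pct}% (threshold ${THRESHOLD}%)"
  fi
}

# Scan all filesystems reported by df (POSIX output format).
df -P | awk 'NR > 1 {print $5, $6}' | while read -r usage mount; do
  check_usage "$usage" "$mount"
done
```

As the notice at the top of this page advises, always review AI-generated scripts carefully before running them.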

One-Click Metric Analysis

Guanyun AI identifies anomalies in key metrics for entities (Systems / Applications / Application Instances / Requests) through intelligent baseline detection. It then triggers deep diagnostics based on the metric type, enabling a closed-loop workflow from detecting issues to identifying root cause. There are two entry points:

  • Entry 1: On an entity details page, in a metric trend chart, click ··· in the upper-right corner and select AI Analysis.

Metric analysis entry 1

  • Entry 2: Select a time range in a metric trend chart and click AI Analysis.

Metric analysis entry 2

AI-Assisted Analysis

Guanyun AI provides end-to-end performance analysis across multiple entity dimensions, including applications, instances, requests, hosts, and databases. It combines real-time monitoring with AI-assisted analysis to help you locate performance bottlenecks and anomalies faster. You can ask questions in the following scenarios:

  • 1. Inspecting Entities: Open the entity list (Systems / Applications / Instances / Requests / Databases / Hosts) and click the Guanyun AI icon in the upper-right corner.

Entity inspection 1

Entity inspection 2

  • 2. One-Click Analysis: In the entity list (Systems / Applications / Instances / Requests / Distributed Tracing), click the Guanyun AI icon next to an entity name (the icon is hidden by default and appears on hover).

One-click analysis

  • 3. Deep Analysis: On an entity details page, click the Guanyun AI icon in the upper-right corner. Guanyun AI can recognize the entity on the page and run metric analysis or event queries.

Page entity recognition

The following are three example scenarios for continuous Q&A:

Scenario 1: Application

| Intent | Question | Answer |
| --- | --- | --- |
| Understand the overall health of monitored applications | Inspect all applications | Returns the top applications with anomalies in key indicators (for example, response time, error rate, throughput, and Problems). |
| Investigate an abnormal indicator for Application A | Why is Application A slow | Analyzes response time from the perspectives of instances and requests. |
| Drill down into a specific instance (Instance B) | Why is Instance B slow | Analyzes abnormal requests under this instance and provides likely root cause. |
| Drill down into a specific request (Request C) | Why is Request C slow | Analyzes the trace for this request and provides likely root cause. |

Scenario 2: Host

| Intent | Question | Answer |
| --- | --- | --- |
| Understand the current status of monitored hosts | Which hosts are monitored by the platform | Returns the top 10 monitored hosts and key indicators (for example, CPU usage and memory usage). |
| Investigate a specific indicator on Host A | Why is Host A CPU usage high | Returns the top processes contributing to high CPU usage on Host A. |

Scenario 3: Database

| Intent | Question | Answer |
| --- | --- | --- |
| Identify databases with high error rates | Which databases have high error rates | Returns the top databases with high error rates. |
| Investigate a specific database (Database A) | Why is Database A error rate high, or Which SQL statements have high error rates in Database A | Analyzes error types, operations, and stack information, and provides likely root cause. |

Single database

  • 4. RUM Crash Analysis: In Real User Monitoring, open Crash Details and click View AI Solution.

RUM

FAQ

  • Nacos Configuration

In on-premises releases, the LLM-related configuration is not included by default. If you ask a question before these services are configured, you will see the message "LLM, Embedding, and Rerank services are not configured. This feature is unavailable." Contact technical support to configure the required model services.
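As a purely hypothetical sketch of what such a Nacos configuration entry might look like (the actual property names, endpoints, and model identifiers are environment-specific and provided by technical support), the three model services could be declared along these lines:

```yaml
# Hypothetical example only -- real keys and values come from technical support.
ai:
  llm:
    endpoint: https://example.internal/v1   # placeholder endpoint
    model: your-llm-model                   # placeholder model name
  embedding:
    model: your-embedding-model             # placeholder model name
  rerank:
    model: your-rerank-model                # placeholder model name
```

Do not apply a configuration like this without confirming the exact schema with technical support first.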