When an event is triggered, the system gathers context from multiple channels: real-time sensor data, batch metadata, past improvement cases, domain rules, and operation history. Rather than injecting all of this into the LLM, it ranks the inputs in a priority queue, keeps multi-level summaries of each, and enforces a token budget, so only the most relevant context reaches the prompt. This optimizes both response latency and token cost.
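The selection step can be sketched as follows. This is a minimal illustration, not the actual implementation: the `ContextItem` class, the example channel contents, and the whitespace-based token counter are all assumptions made for the sketch; a real system would use the model's own tokenizer and richer priority signals.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ContextItem:
    priority: int                           # lower value = more important (hypothetical ranking)
    name: str = field(compare=False)
    summaries: list = field(compare=False)  # summary levels, most detailed first

def token_count(text: str) -> int:
    # Crude whitespace proxy; swap in the target model's tokenizer in practice.
    return len(text.split())

def assemble_context(items, budget):
    """Pop items by priority; for each, keep the most detailed summary
    level that still fits the remaining token budget."""
    heap = list(items)
    heapq.heapify(heap)
    selected, remaining = [], budget
    while heap and remaining > 0:
        item = heapq.heappop(heap)
        for text in item.summaries:   # try detailed first, then terser levels
            cost = token_count(text)
            if cost <= remaining:
                selected.append((item.name, text))
                remaining -= cost
                break                 # item is dropped if even the tersest level overflows
    return selected, budget - remaining

# Illustrative channel contents (fabricated for the sketch).
items = [
    ContextItem(0, "sensor", ["temp 81C, vib 3.2mm/s, trend rising over 10min",
                              "temp/vib above limits"]),
    ContextItem(1, "rules", ["if temp>80C and vib>3.0 then stop line per SOP-12",
                             "SOP-12 stop rule"]),
    ContextItem(2, "history", ["similar event 2023-05: bearing wear, fixed by replacement",
                               "past: bearing wear"]),
]
ctx, used = assemble_context(items, budget=25)
```

Under a tighter budget the same loop degrades gracefully: high-priority channels keep their detailed summaries while lower-priority ones fall back to terse versions or are dropped, which is what keeps the injected context inside the token budget.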