Support batched logs from the worker #4174
Conversation
Hey @stuartc - this is all I've done on the Lightning side so far for batch logging. Just enough for local testing really. There's a lot of work we could do here to improve. I don't know how much appetite we have?
Hi @midigofrank - as Stu is going to be tied up until the end of the year, can you help me with this? CC @theroinaochieng |
Hey @josephjclark , I'll have a look |
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@            Coverage Diff            @@
##             main    #4174     +/-  ##
==========================================
+ Coverage   89.41%   89.45%   +0.03%
==========================================
  Files         425      425
  Lines       20221    20245      +24
==========================================
+ Hits        18081    18110      +29
+ Misses       2140     2135       -5

View full report in Codecov by Sentry.
@josephjclark I have named the event
Ok @midigofrank, I've just updated this on the Worker side - it works great with the new event. I think we can merge this guy down now.
Background
The production worker now optionally supports batch logging.
What this means, in short, is that instead of sending each log line straight to the run socket channel, the worker saves up a bunch of them and sends a batch of logs in one go. This should reduce latency on log processing in the worker, and may eliminate even more lost runs.
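The batching idea on the worker side can be sketched roughly like this. This is a hypothetical illustration, not the worker's actual code: the `LogBatcher` name, the `maxSize` threshold, and the `send` callback are all assumptions.

```typescript
// Hypothetical sketch of worker-side log batching: buffer log lines
// and send them over the channel in one message, instead of one
// message per line.
type LogLine = { level: string; message: string; timestamp: number };

class LogBatcher {
  private buffer: LogLine[] = [];

  constructor(
    // send() would push a single batched event to the run's channel
    private send: (logs: LogLine[]) => void,
    // flush once this many lines have accumulated (value is illustrative)
    private maxSize = 50
  ) {}

  add(line: LogLine) {
    this.buffer.push(line);
    if (this.buffer.length >= this.maxSize) this.flush();
  }

  flush() {
    if (this.buffer.length === 0) return;
    this.send(this.buffer); // one channel message carries the whole batch
    this.buffer = [];
  }
}
```

A caller would `add()` lines as the job emits them and call `flush()` when the run completes, so trailing lines aren't lost.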
Overview
This is a very simple, minimal PR which supports log events being sent from the worker in batches.
It is designed to be backward compatible, handling both batched and un-batched logs.
So basically the behaviour here is simply:
- If the payload has a `logs` key, treat it as a batch of logs
- If there is no `logs` key, massage the payload into an array of 1 log

This PR does not do any batch uploading into the database. It probably makes sense to do this? But it feels beyond the scope of what I ought to be doing (with or without Claude's help)
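The backward-compatible handling - accept a batch, or wrap a single legacy log into an array of 1 - can be sketched like this. Note this is an illustrative TypeScript sketch (the real handler lives in Lightning's Elixir code), and the `normaliseLogs` name and payload shapes are assumptions.

```typescript
// Hypothetical sketch: normalise an incoming log event payload so the
// rest of the pipeline always sees an array of logs.
type Log = { message: string };

function normaliseLogs(payload: Log | { logs: Log[] }): Log[] {
  if ("logs" in payload && Array.isArray(payload.logs)) {
    return payload.logs; // batched payload: use the array as-is
  }
  return [payload as Log]; // un-batched: massage into an array of 1 log
}
```

Downstream code then only has to deal with one shape, whichever worker version sent the event.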
I have not touched tests, but we should introduce a few tests against batched logs.
Closes #4123
AI Usage
Please disclose how you've used AI in this work (it's cool, we just want to know!):
You can read more details in our Responsible AI Policy