Had a customer recently ask how to disaggregate a {{Splunk}} search that had aggregated fields, because they export to CSV horribly.
Here’s the thing.
You can’t disaggregate aggregated fields.
And there’s a Good Reason™, too: aggregation, by definition, is a one-way street.
You can’t un-average something.
Average is an aggregation function.
So why would you think you could disaggregate any other {{Splunk}} aggregation operation (like values or list)?
You can’t.
And you shouldn’t be able to (as nice as the theoretical use case for it might be).
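To see why the export gets ugly in the first place, here's a minimal sketch (the index and field names are hypothetical, just for illustration):

```
index=web sourcetype=access_combined
| stats values(status) AS statuses BY host
```

Each row's statuses cell is now a multivalue field — several values crammed into a single cell — and when you export that to CSV, the whole pile lands in one "column" with no clean way to pull the original rows back out.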
So what is a body to do when you have a use case for a clean-to-export report that looks as if it had been aggregated, but every field in each row cleanly plunks out to a single comma-separated value?
Here’s what I did:
{parent search}
| join {some field that'll exist in the subsearch}
    [ search {parent search} | stats {some stats functions here} ]
| fields - {whatever you don't want}
| sort - {fieldname}
What does that end up doing?
The subsearch is identical to the outer search, plus whatever filtering/where/|stats you might want/need to do.
Using the resultant, filtered set, join on a field you know will be unique [enough].
Then sort however you’d like, and remove whatever fields you don’t want in the final display.
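Filling in the template with concrete (hypothetical) names — say web-access events where you want each event row to carry its host's total byte count as a plain, single value:

```
index=web sourcetype=access_combined
| join host
    [ search index=web sourcetype=access_combined
      | stats sum(bytes) AS total_bytes BY host ]
| fields - _raw _time
| sort - total_bytes
```

Because the join copies the subsearch's stat onto each matching outer row, every cell holds exactly one value, and the CSV export comes out clean — no multivalue cells.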
Of course, be sure your subsearch will complete in under 60 seconds and/or return fewer than 10,000 lines (unless you’ve modified your {{Splunk}} limits.conf).
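If you do need to raise those limits, I believe the relevant stanza in limits.conf looks something like the following (the values shown are what I understand the shipped defaults to be — check the docs for your {{Splunk}} version before relying on this):

```
# limits.conf — subsearch limits (defaults, as an assumption; verify for your version)
[subsearch]
maxout = 10000
maxtime = 60
```

Note that the join command may also have its own subsearch limits configured separately, so test with your actual data volume before trusting a scheduled export.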