LINQ is great for its compact expressions and for its usually efficient lazy evaluation. Both virtues seem to be spit upon by the Aggregate function.
Say you want to aggregate a list of ints.
var ints = new int[] { 1, 2, 3 };
int sum = ints.Aggregate((a, b) => a + b);
This works great as long as the sequence actually contains elements and they don't all get filtered out in a Where clause. Say you had a function that filters before aggregating, and someone passed that list of 1, 2, 3 to it:
int SumOverTen(IEnumerable<int> ints)
{
    int agg = ints.Where(i => i > 10).Aggregate((a, b) => a + b);
    return agg;
}
Everything now goes pear-shaped: you get an InvalidOperationException with the message "Sequence contains no elements".
So what do you do to fix this? You could rewrite your function to check for an empty result first:
int SumOverTen(IEnumerable<int> ints)
{
    int agg = 0;
    var bigints = ints.Where(i => i > 10);
    if (bigints.Count() > 0)
        agg = bigints.Aggregate((a, b) => a + b);
    return agg;
}
This turns a nice LINQ expression into a nasty branching mess. The sequence will now be enumerated twice: once for Count and once for Aggregate.
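If the double full pass is the main worry, a slightly cheaper emptiness check is Any(), which stops at the first matching element instead of counting them all. It is still a second enumeration of the source, though. A minimal sketch:

int SumOverTen(IEnumerable<int> ints)
{
    var bigints = ints.Where(i => i > 10);
    // Any() stops after finding one element instead of counting them all,
    // but the source is still enumerated twice overall.
    return bigints.Any() ? bigints.Aggregate((a, b) => a + b) : 0;
}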
Well, it turns out that there is a better way, using LINQ:
int SumOverTen(IEnumerable<int> ints)
{
    int agg = ints.Where(i => i > 10)
                  .DefaultIfEmpty()
                  .Aggregate((a, b) => a + b);
    return agg;
}
This returns the same value as the previous function, is much more legible, and throws no exceptions on an empty list. And DefaultIfEmpty is itself lazy: it enumerates its source once, yielding default(int) (that is, 0) only if no elements turn up, so the sequence is still walked just a single time.
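Worth mentioning for completeness: Aggregate also has a seeded overload that takes an explicit starting value, and it simply returns the seed when the sequence is empty, so no exception is thrown. A minimal sketch:

int SumOverTen(IEnumerable<int> ints)
{
    // The seed (0) is returned as-is if the filtered sequence is empty.
    return ints.Where(i => i > 10).Aggregate(0, (a, b) => a + b);
}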
What I want to know is, why does Aggregate throw instead of returning the same result as using DefaultIfEmpty?
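My best guess at an answer: the seedless overload uses the first element as the initial accumulator, so on an empty sequence there is no accumulator to return, and silently substituting default(T) isn't always safe. Sum happens to have 0 as a sensible identity, but consider a max-style fold:

var negatives = new[] { -5, -2, -9 };
int max = negatives.Aggregate((a, b) => Math.Max(a, b));
// If an empty sequence silently produced default(int) == 0 here,
// the "maximum" would be 0 -- a value that never appeared in the data.
// Throwing forces the caller to decide what empty should mean.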