I've just finished coding some classical divide-and-conquer algorithms, and I came up with the following question (more out of curiosity than anything else):
Admittedly, in many cases a divide-and-conquer algorithm is faster than the straightforward one; for example, the Fast Fourier Transform improves the complexity from N^2 to N log2 N. However, while coding, I noticed that because of the "dividing" we end up with more subproblems, which means we have to create more containers and allocate extra memory for those subproblems. Think about it: in merge sort we have to create a left and a right array in each recursive call, and in the Fast Fourier Transform we have to create an odd and an even array in each recursive call. So we end up allocating additional memory as the algorithm runs. A rough sketch of what I mean is below.
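To make the concern concrete, here is a minimal sketch of the kind of merge sort I have in mind (this is just my own illustrative version, not taken from any library): every recursive call allocates fresh left, right, and merged vectors.

```cpp
#include <vector>
#include <algorithm>
#include <iterator>

std::vector<int> merge_sort(const std::vector<int>& a) {
    if (a.size() <= 1) return a;

    // Extra allocations happen here at every level of the recursion:
    // each call copies its two halves into newly allocated vectors.
    std::vector<int> left(a.begin(), a.begin() + a.size() / 2);
    std::vector<int> right(a.begin() + a.size() / 2, a.end());

    left  = merge_sort(left);
    right = merge_sort(right);

    // Merge the two sorted halves into yet another new vector.
    std::vector<int> merged;
    merged.reserve(a.size());
    std::merge(left.begin(), left.end(),
               right.begin(), right.end(),
               std::back_inserter(merged));
    return merged;
}
```

An in-place variant that sorts within the original array and reuses a single scratch buffer would avoid most of these allocations, which is exactly what makes me wonder how much they cost in the naive version.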
So, my question is: in practice, for example in C++, do algorithms like divide-and-conquer really win when we also have to pay for all this extra memory allocation? (Or does memory allocation take no run time at all, i.e. is its cost essentially zero?)
Thanks for helping me out!