Awesome! I was really looking forward to feedback like this. Let me go through it point by point.
> First was in this changeset where you got sick of writing new (__FILE__, __LINE__) everywhere.
I actually did not get sick of writing that :). But up until that point, I needed to make sure that all allocations (inside the runtime and within example code) could be properly tracked to rule out memory leaks. For end users, being required to specify the file and line number would be too tedious, so it is optional (but they still can).
If you have a look at the function right above readSkeletonBinaryData(), called readSkeletonJsonData(), you'll see that RAII and stack allocation are not a problem. In fact, all classes in spine-cpp use RAII.
SkeletonData* readSkeletonJsonData (const String& filename, Atlas* atlas, float scale) {
	SkeletonJson json(atlas);
	json.setScale(scale);
	SkeletonData* skeletonData = json.readSkeletonDataFile(filename);
	if (!skeletonData) {
		printf("%s\n", json.getError().buffer());
		exit(0);
	}
	return skeletonData;
}
I've refactored the SFML example to use a style closer to C++11, including a poor man's std::make_unique replacement (which is C++14, not C++11). See spine-runtimes/main.cpp at 3.7-beta-cpp. This refactored example code does not have any new or delete invocations (except for the make_unique emulation). On the user side, it is thus possible to use the spine-cpp API together with your preferred memory management strategy, be it smart pointers or more traditional means of shooting yourself in the foot. Stack allocation also works as intended, since all spine-cpp classes use RAII.
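For reference, here is a minimal sketch of what such a replacement can look like (the name make_unique_ is illustrative; the actual emulation in main.cpp may differ):

#include <memory>
#include <utility>

// Minimal C++11 stand-in for std::make_unique (which only arrived in C++14).
template <typename T, typename... Args>
std::unique_ptr<T> make_unique_(Args&&... args) {
	return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}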
> At first I thought perhaps you were trying to replicate boxing/unboxing like in C# or Java
Eh, I'm not that deep into JVM stuff. I'm especially not trying to replicate one of its worst features in a language with proper value types :).
> SpineString really got me though... were you trying to avoid STL? Weird, because it #includes the <string> STL header...
That should be #include <cstring>, actually! Good catch. Read on for the reasons we have our own string and vector implementations.
> Then it occurs to me that you are taking Herculean efforts to avoid memory leaks and allow for 3rd party memory leak tracking. Cheers for that. We really appreciate the attention to detail.
> TL;DR: Forbid the use of new/delete in the cpp version. Use C++11 instead. Then, drop memory leak tracking in spine-cpp.
Taking a step back, I think these are the questions that we actually need to answer:
1. Does switching to smart pointers in the internal code and public APIs of spine-cpp increase maintainability and better abstract ownership semantics?
I do not believe that is the case. All of spine-cpp's code follows RAII, so ownership semantics are pretty clear, and memory management is quite simple as well. As illustrated by my SFML refactoring above, user code can decide to use smart pointers or avoid them. Were we to use smart pointers in the public API of spine-cpp, we'd force everyone to use them. A lot of our customers (and friends in the industry I've run some of these design decisions by) are averse to smart pointers, based on a history of projects that used them extensively, only to find that it was still too easy to write buggy memory management, and that there were performance trade-offs because the compiler was not sufficiently smart to inline/eliminate most of the magic.
So, given that the public API of spine-cpp lets end users choose their poison, and given that ownership semantics in the internal code are straightforward, I want to keep that aspect as is.
2. Is there a benefit to allowing end users to plug in their own memory allocators and trackers?
Smart pointers are great most of the time, but do not prevent you from shooting yourself in the foot. Once you have shot yourself in the foot, being able to easily track down what exactly killed you is important.
The "herculian" task was straight forward affair of creating a handful of sensible new
/delete
pairs, making sure (potentially) heap allocated types derive from it, and annotating any internal allocations to track the file and line number where the allocation happened. This has helped tremendously in ensuring 1. we are as allocation free in inner loops as possible and 2. there are no leaks.
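To illustrate the pattern with a minimal sketch (names are illustrative, not the actual spine-cpp API):

#include <cstddef>
#include <cstdlib>

// Illustrative base class: heap-allocated types derive from it, and every
// internal allocation is annotated with its allocation site.
class TrackedObject {
public:
	virtual ~TrackedObject() {}

	// Records the file/line of the call site; a real implementation would
	// forward to a user-pluggable allocator and a tracking table.
	void* operator new(size_t size, const char* file, int line) {
		return trackedAlloc(size, file, line);
	}
	void operator delete(void* ptr) { trackedFree(ptr); }
	// Matching overload, called automatically if a constructor throws.
	void operator delete(void* ptr, const char* file, int line) { trackedFree(ptr); }

private:
	static void* trackedAlloc(size_t size, const char* file, int line) {
		(void)file; (void)line; // a real tracker would record (file, line, size)
		return std::malloc(size);
	}
	static void trackedFree(void* ptr) { std::free(ptr); }
};

// Annotated allocation inside the runtime:
// TrackedObject* obj = new (__FILE__, __LINE__) TrackedObject();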
You can do leak detection with valgrind and the sanitizers that come with Clang/MSVC++, and Instruments (and I think VS now too) allows allocation hotspot detection at runtime. However, these tools are not available on each and every platform (screw you, Android).
The answer to this question is thus, in my opinion: yes, providing pluggable allocation and tracking is indeed valuable, both to us as maintainers and to end users who might target platforms we haven't even thought about (one such case being set-top boxes!). Having to annotate allocations inside spine-cpp is a very minor price to pay for that.
3. Does switching from our custom SpineString and Vector to std::string and std::vector make custom memory allocation and tracking harder?
I believe the answer to this is yes as well. Our custom allocation scheme is very simple to plug into and does not affect the existing code base, whereas allocators for collections and other STL classes have to be specified at the declaration site of a variable/member (see the sketch below). It would also be impossible to track allocation sites as we do now, since we are not in control of the internal code of STL classes.
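To make that concrete, here is a minimal sketch (illustrative names, not spine-cpp code) of a custom allocator for STL containers. Note how the allocator becomes part of the container's type at every declaration site, and how it never sees the caller's file/line:

#include <cstdlib>
#include <vector>

// Minimal C++11 allocator; illustrative only.
template <typename T>
struct TrackingAllocator {
	using value_type = T;
	TrackingAllocator() = default;
	template <typename U> TrackingAllocator(const TrackingAllocator<U>&) {}

	T* allocate(std::size_t n) {
		// No way to know which file/line triggered this allocation.
		return static_cast<T*>(std::malloc(n * sizeof(T)));
	}
	void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const TrackingAllocator<T>&, const TrackingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const TrackingAllocator<T>&, const TrackingAllocator<U>&) { return false; }

// The allocator is part of the type, so it spreads through every signature:
std::vector<int, TrackingAllocator<int>> values;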
4. Does switching to std::string and std::vector have a performance impact?
In general, switching to std::vector would probably have no perceivable performance impact compared to our current default Vector implementation. However, end users can switch out the allocation strategy more easily, as outlined in 3., and thus better combat fragmentation, do region management, or perform other cache-friendliness optimizations. This applies specifically to SkeletonJson and SkeletonBinary, where timelines and meshes are allocated, which can benefit from a simple bump allocator.
A naive example: during development, you can use the default allocator and track the change in required memory around a loading call to SkeletonBinary or SkeletonJson. That tells you exactly how big a memory block is needed to store all the loaded data, so you can switch to a bump allocator for production, increasing cache coherency and thus performance. The alternative route to better cache coherence would be to completely rewrite the object graph model of the Spine API, something we'll likely not do anytime soon, especially when such a simple solution exists that is generally applicable irrespective of the chosen object graph model.
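A minimal sketch of such a bump allocator (hypothetical, not part of spine-cpp): reserve one block sized from a measured development run, then serve allocations by advancing an offset. Individual frees are no-ops; the whole block is released at once.

#include <cstddef>
#include <cstdlib>

class BumpAllocator {
public:
	explicit BumpAllocator(size_t capacity)
		: _block(static_cast<char*>(std::malloc(capacity))), _capacity(capacity), _offset(0) {}
	~BumpAllocator() { std::free(_block); }

	void* allocate(size_t size) {
		size_t aligned = (_offset + 15) & ~size_t(15); // 16-byte alignment
		if (aligned + size > _capacity) return nullptr; // block was measured too small
		_offset = aligned + size;
		return _block + aligned;
	}

	// Individual frees are no-ops; all memory is reclaimed in the destructor.
	void deallocate(void*) {}

private:
	char* _block;
	size_t _capacity;
	size_t _offset;
};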
In the case of std::string, the answer (based on our current code base) is yes. std::string instances have a tendency to multiply like rabbits when being passed down or constructed from plain old char*, const& notwithstanding. This is especially observable in loading code like SkeletonBinary and SkeletonJson. The custom String implementation allows us to have move semantics on the backing char*, and thus lets us avoid some unnecessary allocations at load time.
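The core idea, as a minimal sketch (illustrative, not the actual spine-cpp String interface): the string can adopt an existing heap buffer instead of copying it, so a parser that already owns a char* hands it over for free.

#include <cstdlib>
#include <cstring>

class String {
public:
	// own = true adopts the buffer (no copy); own = false copies it.
	String(char* buffer, bool own) {
		if (own) {
			_buffer = buffer;
		} else {
			_buffer = static_cast<char*>(std::malloc(std::strlen(buffer) + 1));
			std::strcpy(_buffer, buffer);
		}
	}

	// Moving transfers the backing buffer instead of reallocating.
	String(String&& other) noexcept : _buffer(other._buffer) {
		other._buffer = nullptr;
	}

	~String() { std::free(_buffer); }

	const char* buffer() const { return _buffer; }

private:
	char* _buffer;
};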
I hope my justification of these design decisions doesn't come off badly. I REALLY appreciate that you took the time to write up your feedback, and I hope my reply shows that I sincerely considered your proposal. Based on my analysis above, I do however believe that we currently have a pretty good balance that should allow all our users to use spine-cpp in the way they are most comfortable with.