Most developers know of Core Animation through its few key classes, such as
CAAnimation, and their subclasses. Very few need to venture past this realm to take advantage of this powerful framework. What's not very clear to most is how many faces the framework takes on; there are three:
the Objective-C CoreAnimation, CoreAnimationCF, and the internal C++ CoreAnimation underpinnings.
All three are contained within the single
QuartzCore.framework - I've managed to recreate a majority (if not all) of the private headers for the first two (the C++ one is a lot harder, unfortunately), and I suggest the reader take a peek at them first over here. For the third facet, I've produced a list of some, but not all, of the packages (namespaces) being used, and I suggest the reader take a look at those as well.
Let's start with the first one: the "normal"
CoreAnimation - the one that runs atop Objective-C and is the only publicly sanctioned API for App Store apps. The easiest way to begin using it is to link AppKit or UIKit, and
UIView will take care of the rest for you. You just need to interface with your view's layer, animate properties as needed, and so on. There is one small indexing trick used internally called
CAAtom -- using
CAInternAtom, you can convert between a key path (string) and an indexed id used internally. In addition to atoms, the
CAObject_* family of functions (
encodeWithCoder/CAMLWriter, etc.) is used within
CALayer and friends to keep track of arbitrary values for arbitrary keys. This is why you're able to set any layer keyPath and it won't throw a
valueForUndefinedKey: exception. Past this, there's not much else to see here that isn't public.
The interesting stuff starts with
CoreAnimationCF: it's a barebones version of the above API... all in pure C using the
CoreFoundation library only. You've got contexts, layers, rendering, and animations. If you haven't already, take a look at the source code above. Why does this exist at all? Because Core Animation is actually cross-platform (along with a few other Apple libraries, including Core Graphics)! WebKit and iTunes, for example, have existing DLLs for all of these frameworks, but since they don't rely on Objective-C, they use the CF flavor of this API. Should a macOS/iOS developer be using this API? Probably not - there's nothing
CoreAnimationCF can do for you that the normal API can't. You'll also notice that some APIs are missing.
The final facet is the most interesting, and is pretty much a mystery to me (and remains undocumented by anyone else, AFAIK) -- the C++ API. Through one ObjC protocol (
CARenderValue), all of the ObjC API (that is, the first facet) can be translated into the C++ API by calling
CA_copyRenderValue. If you haven't already, take a look at the list of packages above, because you'll notice some striking similarities. My cursory understanding of this API is that the render server (be it a background thread or a separate process) copies the context and its layer tree's render values and can encode/decode them privately, shielding the ObjC/developer-facing API from any misuse or unexpected results. All of the
CA::Render:: packages/classes correspond to an ObjC layer or animation class, and once packaged up and sent over, the render server would directly manipulate these entities using the
CA::OGL:: packages/classes via a
SW (software) renderer,
OGL (OpenGL), or
Metal renderer (which is likely used on all Apple platforms). In the middle, however, is
CA::CG::, which looks like a lot of drawing routines that resemble Core Graphics. The way
CGContext works is that it has an internal
CGGState stack and a current
GState (the top of the stack) that all its clients set and manipulate via draw calls, but under the hood, a
CGContextDelegate translates these calls onto a specific surface.
A CGSWindow has a
CGContextWindowDelegate that, when
NSWindows (or their non-layer-backed views) need to draw, is passed as the delegate to
CGWindowContextCreate and handles this translation layer. Similarly, a
CALayer likely creates its context using the
CA::CG:: packages as a delegate, allowing the draw calls and GState modifications to map onto whatever renderer is currently in use.
Finally, there's one thing about both the internal C++ and public ObjC API that not many folks have documented or picked up on:
CATransaction's commit handlers and
CAContext. Every process that needs to work with a layer (or more) requires at least one
CAContext - this is where the root layer is hosted. You can create additional contexts, remote, or local, to allow hosting your layers in another process (a la Safari tabs). The context supports the notion of
slots and fences: I presume slots are a way to pass context-related objects between remote contexts, but I haven't tested the theory. Fences, however, can be used to delay the host app's transaction commit cycle until the client app (the one serving a remote layer) is done with its commit - essentially, they're used to synchronize rendered frames, and a fence has a natural timeout of about one second (so the remote layer server should finish its commit within this time). This ties into transaction phases: there are a few points in a
CATransaction that you can inject a handler into: pre-layout, pre-commit, and post-commit. By combining fences with commit handlers, you can correctly synchronize remotely rendered layers.
Some readers may arrive at a question here: if layers require a context, how does the context get rendered? I'm not too sure. I know there's a way to initialize a local render server and a way to hook into a remote render server (that is, on macOS,
windowserver), but I don't know how the contextId makes its way over or how the two link up. However, if you're creating a
CGSWindow, the fast way to get a
CALayer on-screen is to create a
CGSSurface and bind a
CAView to it. The header for
CAView is incomplete, but it looks trivial to work with, as it then manages the surface for you.
So, in conclusion, there are three different facets of the Core Animation API, intricate links to Core Graphics, via
CGContextDelegate, and some kind of
CAContext song-and-dance that allows a layer to be presented on-screen or in a buffer somewhere. I hope that demystifies a lot of the private API here for you. Drop me a line on Twitter if you think anything is incorrect or needs explaining!