Been prototyping an AR toy for a potential application, and I’ve dug deep into Apple’s ARKit and SceneKit.
In case you are still confusing the two: ARKit handles environmental recognition, telling the app whether there’s a surface in the scene (and if so, how far away it is and how big it is); SceneKit handles 3D rendering of your custom objects.
To create a realistic rendering, you’d want ARKit to recognize the environment and SceneKit to handle the rendering.
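To make the division of labor concrete, here’s a minimal sketch of how the two frameworks meet in code. This is an illustrative skeleton, not a full app: ARKit’s world tracking finds a horizontal surface, and the `ARSCNView` delegate callback lets SceneKit place a box on it (the box geometry and its size are my placeholders).

```swift
import ARKit
import SceneKit

// Sketch: ARKit detects a horizontal surface; SceneKit renders a box on it.
// Assumes an ARSCNView has already been set up in the storyboard.
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal  // ARKit's job: find surfaces
        sceneView.session.run(configuration)
    }

    // ARKit found a surface; SceneKit's job: draw something on it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // planeAnchor.extent tells you how big the detected surface is;
        // the anchor's transform tells you where it is.
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        let boxNode = SCNNode(geometry: box)
        boxNode.position = SCNVector3(planeAnchor.center.x, 0.05, planeAnchor.center.z)
        node.addChildNode(boxNode)
    }
}
```

Everything about the environment (where the surface is, how big it is) comes from the `ARPlaneAnchor`; everything visual (geometry, material, position within the anchor) is SceneKit.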
There’s also Apple’s ARKit section in the Human Interface Guidelines. This section provides basic do’s and don’ts on application and interaction design.
The only quibble I’d have is that Mark used Objective-C in his example. Most developers have moved on to Swift, and Apple’s official documentation is more or less tuned for the Swift community.
Understanding and applying PBR:
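For a taste of what applying PBR looks like in SceneKit specifically: switching a material to the physically based lighting model and feeding it the standard texture maps is only a few lines. The texture asset names below are placeholders.

```swift
import SceneKit

// Sketch: a physically based SceneKit material.
// The image names ("albedo", "roughness", etc.) are hypothetical assets.
let material = SCNMaterial()
material.lightingModel = .physicallyBased
material.diffuse.contents = UIImage(named: "albedo")       // base color map
material.roughness.contents = UIImage(named: "roughness")  // micro-surface roughness
material.metalness.contents = UIImage(named: "metalness")  // metallic vs. dielectric
material.normal.contents = UIImage(named: "normal")        // fine surface detail

// PBR materials need environment lighting to look right:
let scene = SCNScene()
scene.lightingEnvironment.contents = UIImage(named: "environment") // ideally an HDR map
```

This is also where a 3D designer earns their keep: the model is only as convincing as its roughness/metalness/normal maps.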
More in-depth articles on improving rendered results:
Some of my additional thoughts as a PM:
Hire a real 3D designer if your rendered 3D model goes beyond basic geometries. They are much better at modelling and at generating proper texture layers and maps.
If you are trying to provide a fusion of real and virtual objects and your engineering team is not adept with SceneKit, tell them to go easy on the research. Some basic lighting and physics are more than enough for a magical experience; you don’t need complex lighting, bloom effects, shape distortion, or smoke particle effects… After all, you are aiming for realistic, not surreal. SceneKit is a big, big, big rabbit hole to go down.
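To show how little “basic lighting and physics” actually is, here’s a sketch of the kind of minimal setup I mean: one ambient light, one omni light, and a physics body so an object falls and rests naturally. All values are illustrative.

```swift
import SceneKit

// Sketch: the "go easy" baseline — two lights and gravity. No bloom, no particles.
let ambient = SCNNode()
ambient.light = SCNLight()
ambient.light?.type = .ambient          // soft overall fill
ambient.light?.intensity = 500

let omni = SCNNode()
omni.light = SCNLight()
omni.light?.type = .omni                // a single point light for shading and depth
omni.position = SCNVector3(0, 2, 0)

// Simple physics: a dynamic body falls under the scene's default gravity.
let boxNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                       length: 0.1, chamferRadius: 0))
boxNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil) // shape inferred from geometry
```

That’s roughly the whole budget. Past this point, every extra effect costs engineering time faster than it adds realism.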
Apple has also previewed ARKit 1.5, bundled in the upcoming iOS 11.3. Prepare for recognition of vertical surfaces and images. Detected horizontal surfaces will no longer be a dumb rectangle. The community has been fervent about these updates, so you may want to get inspired, too.
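Based on the previewed iOS 11.3 APIs, opting into the new capabilities looks like a small configuration change. A sketch, assuming ARKit 1.5 (the "AR Resources" group name is whatever you create in your Xcode asset catalog):

```swift
import ARKit

// Sketch: ARKit 1.5 configuration (iOS 11.3+).
let configuration = ARWorldTrackingConfiguration()

// New: vertical surfaces (walls, doors) alongside horizontal ones.
configuration.planeDetection = [.horizontal, .vertical]

// New: image recognition from reference images in the asset catalog.
if #available(iOS 11.3, *) {
    configuration.detectionImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil)
}
```

Plane anchors also gain a proper geometry in 1.5, which is what finally retires the “dumb rectangle” approximation.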