WWDC22 Q&A - DevTools & Swift Lounge

During this year's WWDC I also hung out in various Apple Slack channels, including several Q&A sessions.

Developer Tools and Swift Lounge

This is a loose selection of questions and answers from the "Developer Tools and Swift Lounge". If you're interested in particular topics, feel free to get in touch with me; I can search the Slack for them. 😃


Question: I’ve recently started a new project using Core Data. Should I endeavor to avoid DispatchQueues and NSManagedObjectContext.perform and instead model all concurrency via async/await and Tasks? Is there a downside to mixing the two approaches?

Answer: We understand that this interaction isn’t great right now. You can bridge between these worlds yourself by writing async functions that call perform and use continuations to wait until the operation is complete.
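A minimal sketch of that bridging (the helper name is mine, not API; note that on iOS 15/macOS 12 and later there is also a built-in async `perform` you can await directly):

```swift
import CoreData

extension NSManagedObjectContext {
    /// Hypothetical helper: runs `work` on the context's queue and
    /// resumes the awaiting task once the block has finished.
    func performAsync<T>(_ work: @escaping () throws -> T) async throws -> T {
        try await withCheckedThrowingContinuation { continuation in
            self.perform {
                do {
                    continuation.resume(returning: try work())
                } catch {
                    continuation.resume(throwing: error)
                }
            }
        }
    }
}
```
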


Question: About Minimal Strict Concurrency Checking, do we need to add Sendable conformance to struct and enum even though they are implicitly Sendable if we want to check Sendability?

Answer: The compiler will infer the Sendable conformance for these types unless there is an explicitly non-Sendable type in their instance data. If you want to be sure that the types are Sendable, add the explicit conformance, and the compiler will produce warnings for types used in instance data that aren't known to be Sendable.

Question: I have one more question. If structs and enums are implicitly Sendable, does Xcode emit a warning about the redundant explicit conformance when we change the checking mode?

Answer: No, it will not. If you place an explicit Sendable conformance on your struct or enum, that will suppress the implicit Sendable conformance.
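A small illustration of both answers (type names are made up for the example):

```swift
final class Formatter {}          // a class: not Sendable by default

struct Implicit {                 // implicitly Sendable: all stored
    var name: String              // properties are Sendable
}

struct Checked: Sendable {        // explicit conformance opts into checking
    var name: String
    var formatter: Formatter      // warning: stored property 'formatter' of
                                  // 'Sendable'-conforming struct 'Checked'
                                  // has non-Sendable type 'Formatter'
}
```
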


Question: How can the reader/writer problem be solved efficiently with Swift concurrency? Write operations should have exclusive, serialized access, while readers can run in parallel. Actors serialize everything, which is not optimal from a performance perspective.

Answer: In general, we have found that reader/writer locks sound like a good idea in theory, but in practice they degrade to bad behaviour, including a lack of priority-inversion avoidance and starvation issues. Most use cases we have found perform just as well with an efficient lock implementation, like OSAllocatedUnfairLock, or with actors. If you have a motivating use case that needs RW locks, we'd like to hear about it, so please do file a Feedback request.

For more information on OSAllocatedUnfairLock, which is new in macOS Ventura and iOS 16, see https://developer.apple.com/documentation/os/osallocatedunfairlock.

I would add that if something is both heavily contended and heavily biased towards readers, the best solution is usually to jump all the way to a lockless algorithm like read-copy-update.

Even in the short term, if you can't make the leap to being truly lockless, holding a lock briefly while you copy or overwrite the value should have similar overhead to a reader/writer lock, and you'll be forced to architect clients in a way that's consistent with a lockless approach if you eventually adopt one.

Or you can simply use a lock normally but design your data so that reads spend very little time in the critical section, e.g. just copying a small COW data structure or object reference.

The key insight here is that acquiring a reader/writer lock is usually not cheaper than acquiring a mutex, so the only efficiency win is parallelism within reads, which you can mostly duplicate by simply reducing the amount of time you spend in the critical section during those reads.
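A sketch of that pattern using OSAllocatedUnfairLock (iOS 16/macOS Ventura); the lock type is real API, while the SettingsStore class is illustrative:

```swift
import os

// Readers copy a small COW dictionary inside a brief critical section,
// so they never hold the lock while actually using the data.
final class SettingsStore: Sendable {
    private let state = OSAllocatedUnfairLock(initialState: [String: String]())

    func snapshot() -> [String: String] {
        state.withLock { $0 }               // cheap COW copy out of the lock
    }

    func set(_ value: String, for key: String) {
        state.withLock { $0[key] = value }  // exclusive, serialized write
    }
}
```
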

If you want to integrate data structures with manual synchronization into an actor, such as lockless data structures, data structures with manually tuned locking like what @John M (Apple) described, or even traditional pthread_rwlocks if you've established that your existing code benefits from them, you could define them in a Sendable class and expose them on actors as nonisolated properties. This allows code to access them from the actor without going through the actor's own queue.
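A sketch of exposing a manually synchronized type from an actor as a nonisolated property (all names here are illustrative):

```swift
import Foundation

// A Sendable class that does its own locking internally.
final class HitCounter: @unchecked Sendable {
    private let lock = NSLock()
    private var count = 0

    func record() {
        lock.lock(); count += 1; lock.unlock()
    }

    var value: Int {
        lock.lock(); defer { lock.unlock() }; return count
    }
}

actor Statistics {
    // nonisolated: callers can use the counter without awaiting the actor,
    // because the counter synchronizes itself.
    nonisolated let hits = HitCounter()
}
```
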


Question: If Task { ... } is called from a MainActor, does that mean the Task will always run on the main thread?

Answer: It’ll start running on the main thread, but tasks flow between actors as needed.  In particular, if the task calls an async function which isn’t actor-isolated, it’ll switch off of the main actor while it’s in that function, and it’ll only switch back when it calls or returns to code that’s actor-isolated again.

There’s more about this in today’s session called “Eliminate data races with Swift Concurrency”: https://developer.apple.com/videos/play/wwdc2022/110351/.

Question: So if an async function isolated to the MainActor makes an await call, that await will switch off of the main actor?

Answer: If it’s calling an async function that isn’t MainActor-isolated, yes.
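The hand-off described above, as a sketch (all names are illustrative):

```swift
struct Model {}

// Not actor-isolated, so it runs off the main actor when awaited.
func parse(_ bytes: [UInt8]) async -> Model { Model() }

@MainActor
final class ViewModel {
    var model = Model()

    func refresh(bytes: [UInt8]) {
        Task {
            // The closure inherits MainActor isolation, so it starts
            // on the main thread.
            let parsed = await parse(bytes) // hops off the main actor inside parse
            self.model = parsed             // resumes back on the main actor
        }
    }
}
```
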


Question: Do you have any recommendations for debugging race conditions when the repo uses a mix of completion handlers, combine pipelines, and async?

Answer: For debugging data races and other memory safety issues in an App, sanitizers are a good general purpose tool and a great starting place (TSAN, ASAN, UBSan). If you're interested in specifically analyzing Swift Tasks/Actors, the new Swift Concurrency Template in Instruments 14 (Visualize and optimize Swift Concurrency) can provide a lot of additional insight and may help debugging these types of race conditions.


Question: Is it possible to build an iOS app with SPM only, without an xcworkspace/xcodeproj and without falling back to generating one?

Answer: Yes, it's possible! In Xcode, choose File > New > Project… and pick the Swift Playgrounds App template. These types of apps use the SwiftPM format rather than Xcode-style settings, so you'll get a Package.swift file instead of an xcodeproj. You could even use the Swift Playgrounds app on iPadOS for this! Here's a guide to get you started in that direction if you want.


Question: Deleting ~/Library/Developer/Xcode/DerivedData is a common workaround for various Xcode problems, at least according to community wisdom. Does this make Xcode engineers grit their teeth, and if so, what should we be doing instead?

Answer: When deleting your derived data fixes a problem, it means that there's a dependency in your sources that either Xcode doesn't see but should, or that you haven't declared. In either case, if you could isolate the problem and report it in feedback, we'd use that to improve Xcode.


Question: What's the recommended way to package up internal dependencies (e.g. model-layer code) in Xcode? Is it SPM or frameworks?

Answer: The answer to this will depend on your team and your needs. Packages will make it easier to break your dependencies up into smaller chunks, store them in separate repos, and version them independently — but frameworks will give you things like advanced project customisation and Objective-C interoperability.


Question: We've got an app that's split up into many frameworks. There's a single project that contains all the targets. I've seen examples of apps that make multiple single-target projects in a workspace. Other than organization, is there any difference between those two approaches, or are they functionally the same?

Answer: The two arrangements support the same features. If you'd like to open subsets of the projects without seeing all of the sources, you might prefer the organization with multiple project files. For example, if you later have two top-level apps that use different subsets of the targets, they can each have a more focused workspace with the many-projects approach.


Question: When managing a workflow for Xcode Cloud, the scheme I selected has a yellow exclamation mark saying "The scheme may only exist locally". I am pretty sure this scheme has "Shared" checked and the xcshareddata folder is in the repo. I am wondering whether this exclamation mark is the reason my Xcode Cloud build always fails with the message "An internal error has occurred. This operation will be retried on another build worker".

Answer: The warning that “the scheme may only exist locally” shows up if you configure a workflow with a scheme that Xcode Cloud has never seen in a cloud build. Every time we run a build we record what public schemes were in your project. If you add a new scheme in a commit, Xcode on your machine knows about it but our servers don't. You can go ahead and change the workflow anyway; it's just warning you that builds might fail on branches that don't have the scheme. As for the “an internal error occurred” message, please file a feedback with a link to the build. It might be the cause of the warning, if there's a failure before we record the schemes.

The most common cause of package dependency issues is if Xcode Cloud can't access the Package.resolved file in your repo, see this documentation section. If that command is working locally, it means the file is available locally, but make sure it's checked in as well.


Question: Will/does Xcode Cloud have the ability to auto-update signing certificates when they will expire?

Answer: Xcode Cloud uses cloud signing certificates that it automatically manages on your behalf. It will renew certificates a few months before they expire so that your builds have valid signatures while you test and distribute them.


Question: Can you integrate Xcode Cloud with Fastlane?

Answer: You can connect Xcode Cloud to other services using webhooks. If there's more behavior you'd like to see supported in Xcode Cloud for your development workflow, please file a Feedback report at http://feedbackassistant.apple.com.

Xcode Cloud is also part of the App Store Connect API.

Alternatively, if you're trying to call Fastlane from Xcode Cloud, check out "Writing custom build scripts".


Question: Can Xcode Cloud support SSH package urls to GitHub package repos?

Answer: Yes, both HTTPS and SSH URLs are supported for cloning both the primary repo and package dependencies.


Question: I have a lot of extensions in our app, so updating them for a new build requires going into each one to update the version number and build number. Is there a way to automate this, e.g. using a custom script?

Answer: When distributing your app from either Xcode or Xcode Cloud, the build numbers in your app and all of your app extensions will automatically be updated. For more information, watch Distribute apps in Xcode with cloud signing or check out this documentation for Xcode Cloud. You will still need to manage version numbers yourself, since those are more correlated to your own product release plans.


Question: Is there a good way to simulate or trigger an app termination due to memory pressure? We're trying to clean up issues with static C++ variables being used after destruction in our app, called by a third-party SDK which appears to make use of atexit. Architectural problems aside, it'd be nice to be able to reproduce the issue reliably.

Answer: You may induce memory pressure while debugging in the iOS Simulator, or while profiling in Instruments. This emulated condition is a good way to test your memory pressure notification handler, and a great way to ensure that state you manage using datatypes like NSCache is being released properly. However, this emulated condition doesn't alter your app's memory usage. It is also worth noting that when the OS chooses to terminate your process for excessive memory usage, the process will not be able to execute any more code. Therefore, the atexit handler will not run during this termination condition. The best way to debug any issues you may suspect in your atexit handler in non-fatal termination situations is to call exit at an opportune time.
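To exercise the handlers deliberately, as the answer suggests, you can call exit yourself during testing; a minimal sketch:

```swift
import Foundation

atexit {
    // Runs on a normal exit(), but never when the OS kills the
    // process for excessive memory usage.
    print("atexit handler ran")
}

// At an opportune time in a debug build, force a clean exit so the
// atexit handlers (including any registered by third-party SDKs) run:
exit(0)
```
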


Question: In a modular project setup with subprojects that generate frameworks, are there any recommendations or suggestions on how many of these dylib-generating subprojects there should be? I've heard suggestions of staying near 6 (pre-dyld3), but am curious whether there is still a suggested ceiling given all of the recent improvements over the past few years.

Answer: That's a good question. We don't have a specific recommendation on the number of dynamic libraries. As a rule of thumb, it's always a good idea to measure performance and bottlenecks first and make a decision based on the results. The App Launch template in Instruments can be used to profile app launch time on multiple physical device configurations. Don't forget to avoid using any DYLD_* environment variables during these tests. To learn more about this year's dyld enhancements: https://developer.apple.com/videos/play/wwdc2022/110362/.


Question: Is there any modern guidance on when to use Swift Packages vs. dynamically linked frameworks when sharing code internally across multiple targets? Say I have an iOS app, a Siri intent, and a widget - I've always relied on making a MyCoolKit framework with shared code and importing that framework into each user-facing target. Should I migrate to a Swift package instead? Are there pros and cons or tradeoffs to consider?

Answer: Packages can be a good way to organize your internal modules, especially if you may plan on splitting them out into separate SCM repositories in the future, e.g. to share them between multiple apps. For the concrete case of sharing code between an app and app extensions, frameworks would still be the tool of choice to share code at runtime. You are able to take advantage of both by having a shared framework between your app and extensions which links any local packages statically.


Question: We distribute a large number of binary Swift frameworks, and would love to be able to distribute them using SPM. How would you recommend constructing our package definitions so that our packages can correctly declare dependencies on each other?

Answer: I am assuming you are talking about the fact that a binary target cannot declare dependencies in the package manifest. This is a known issue we are already tracking, in the meantime, there is some discussion around a workaround on the Swift forums here: https://forums.swift.org/t/swiftpm-binary-target-with-sub-dependencies/40197/5


Question: We are developing a brand new SwiftUI app. Among other things, this includes 3 Swift packages, which were integrated via "Add local...". When developing, this works great for all developers, because these packages are all in the same place in the filesystem (outside the actual project). The repositories of these 3 packages are private GitHub repos. How do we need to set up both Xcode and Xcode Cloud with these 3 private repo packages to make it work? How can Xcode Cloud access these 3 packages if they were only added locally to the project?

Answer: In order for Xcode Cloud to have access to the three packages, the Xcode project that uses them needs to have URL references to the repositories in which those packages reside. That will cause Xcode Cloud to check them out after checking out the main repository and before building.

You can still work with those packages locally by putting locally checked-out references to them in the same workspace as the main project in the local file system. If the workspace contains a local checkout of a package, it will shadow a remote dependency of the same name. In this way you can work with the three package dependencies locally, but have Xcode Cloud check them out from repositories.

When committing changes to the main project you will want to make sure to also push any required changes to the three packages, and if needed, to create new tags for the main project to pull. You may find it easiest to use branch dependencies for the packages if they are always developed together with the main project and not used from other projects.

The article at https://developer.apple.com/documentation/xcode/editing-a-package-dependency-as-a-local-package has some more information about local editing workflows.

NOTE: That was my own question to the Apple engineers, by the way. I've also written a blog post on this topic.


© Woodbytes