Async function runs in the background on the simulator, but not on a physical phone


I am trying to run an async function that uses the Vision framework to scan an image for text. In the parent view I call this function inside Task { }, and on the simulator it works as expected: the UI stays responsive, and the output text updates when the function completes. However, running the same code on my physical device (an iPhone 13 Pro), the UI freezes while this function runs and only recovers when it completes. I know I should always trust the behavior on the phone rather than the simulator, so what is wrong with my code? Thanks in advance!

The code for my function (iOS 17.5, Xcode 15.4):

func recognizeText(from image: UIImage) async {
    DispatchQueue.main.async {
        self.isLoading = true
    }
    guard let cgImage = image.cgImage else {
        self.isLoading = false
        return
    }

    let request = VNRecognizeTextRequest { [weak self] request, error in
        guard let self = self else { return }
        guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
            self.alertItem = AlertContext.invalidOCR
            self.isLoading = false
            return
        }

        let text = observations.compactMap { $0.topCandidates(1).first?.string }.joined(separator: "\n")
        DispatchQueue.main.async {
            self.recognizedText = text.isEmpty ? "No recognized texts. Please try again." : text
            self.isLoading = false
        }
    }
    request.recognitionLevel = .accurate

    let requestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? requestHandler.perform([request])
    }
}
swift swiftui async-await

1 Answer

A few observations:

  1. You are using a QoS of .userInitiated. You may find that using that while performing computationally intensive operations can introduce micro-hangs in the UI. I might suggest .utility instead, which has a negligible impact on performance but minimizes these issues (see the sketch after this list).

  2. You are correct that you should always test on a device rather than the simulator. I would also suggest testing performance on an optimized "release" build rather than a "debug" build. In my testing I saw micro-hangs in the UI with the debug build, but when I switched to the release build, those disappeared.

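For instance, the change for the first point is a single line in the question's code (a minimal sketch, with requestHandler and request as defined there):

DispatchQueue.global(qos: .utility).async {   // was .userInitiated
    try? requestHandler.perform([request])
}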

As a general rule, it is advised to avoid mixing GCD with Swift concurrency. But when calling something slow and synchronous (such as perform), it must be kept out of the Swift concurrency cooperative thread pool. We have a contract with Swift concurrency to never impede "forward progress" on that thread pool (because it is very limited). Going back to the WWDC 2022 video Visualize and optimize Swift concurrency, Apple explicitly advised keeping this sort of work in GCD and bridging it back to Swift concurrency with withCheckedContinuation (or its unsafe and throwing siblings).
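As a minimal sketch of that GCD-plus-continuation pattern (the free-standing function is illustrative, not from the original post, and AlertContext is the error type defined in the MRE below):

import Vision

func recognizeText(from cgImage: CGImage) async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        // Keep the slow, synchronous Vision work on a GCD queue,
        // off the Swift concurrency cooperative thread pool.
        DispatchQueue.global(qos: .utility).async {
            let request = VNRecognizeTextRequest { request, error in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    continuation.resume(throwing: error ?? AlertContext.invalidOCR)
                    return
                }
                let text = observations
                    .compactMap { $0.topCandidates(1).first?.string }
                    .joined(separator: "\n")
                continuation.resume(returning: text)
            }
            request.recognitionLevel = .accurate

            do {
                try VNImageRequestHandler(cgImage: cgImage).perform([request])
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}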

Nowadays, rather than introducing a lot of GCD code into our codebase, we might instead use an actor with a custom executor to keep this work off the Swift concurrency cooperative thread pool. That way we retire most of the GCD API, but do provide a custom GCD queue for the actor's executor:

actor TextRecognizer {
    private let queue = DispatchSerialQueue(label: Bundle.main.bundleIdentifier! + ".TextRecognizer", qos: .utility)

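    // Routing the actor's isolation onto the serial queue above keeps the
    // slow, synchronous perform(_:) off the cooperative thread pool.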
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        queue.asUnownedSerialExecutor()
    }

    func text(from cgImage: CGImage) async throws -> String {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<String, Error>) in
            do {
                let request = VNRecognizeTextRequest { request, error in
                    guard
                        let observations = request.results as? [VNRecognizedTextObservation],
                        error == nil
                    else {
                        continuation.resume(throwing: error ?? AlertContext.invalidOCR)
                        return
                    }

                    let text = observations
                        .compactMap { $0.topCandidates(1).first?.string }
                        .joined(separator: "\n")

                    guard !text.isEmpty else {
                        continuation.resume(throwing: AlertContext.noRecognizedText)
                        return
                    }

                    continuation.resume(returning: text)
                }
                request.recognitionLevel = .accurate

                let requestHandler = VNImageRequestHandler(cgImage: cgImage)

                // to confirm that this is running on the correct queue, you could add a precondition:
                //
                // dispatchPrecondition(condition: .onQueue(queue))

                try requestHandler.perform([request])
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}
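Calling into the actor is then a one-liner from Swift concurrency; because the actor's unownedExecutor is backed by that serial dispatch queue, the synchronous perform(_:) runs on the queue rather than on the cooperative thread pool:

let text = try await TextRecognizer().text(from: cgImage)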

FWIW, here is my complete MRE (minimal reproducible example):

import SwiftUI
import Observation
import Vision
import os.log

struct ContentView: View {
    @State var viewModel = ViewModel()

    var body: some View {
        VStack(spacing: 16) {
            Image(systemName: "text.rectangle.page")
                .imageScale(.large)
                .foregroundStyle(.tint)

            Text("Image Processor")

            // show recognized text, if any

            if let recognizedText = viewModel.recognizedText, !recognizedText.isEmpty {
                Text(recognizedText)
                    .lineLimit(5)
            }

            // show elapsed time if not zero

            if viewModel.elapsed != .zero {
                Text("\(viewModel.elapsed.seconds, specifier: "%0.2f") seconds")
                    .monospacedDigit()
            }

            // show spinner if loading

            if viewModel.isLoading {
                ProgressView()
                    .progressViewStyle(CircularProgressViewStyle())
            }

            // show error, if any

            if let error = viewModel.alertItem {
                Text(error.localizedDescription)
                    .foregroundStyle(.red)
            }

            // button to start recognition

            Button("Start") {
                Task {
                    let image = UIImage(named: "snapshot")
                    await viewModel.recognizeText(from: image)
                }
            }
        }
        .padding()
    }
}

@Observable
@MainActor
class ViewModel {
    var isLoading = false
    var alertItem: Error?
    var recognizedText: String?
    var elapsed: ContinuousClock.Duration = .zero

    private let logger = Logger(subsystem: Bundle.main.bundleIdentifier!, category: "ViewModel")

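    // Update `elapsed` roughly every 10 ms until the surrounding task is
    // cancelled; the class is @MainActor-isolated, so these updates can
    // safely drive the UI.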
    func startTimer() async {
        let start = ContinuousClock().now
        elapsed = .zero

        while !Task.isCancelled {
            try? await Task.sleep(for: .milliseconds(10))
            elapsed = .now - start
        }
    }

    func recognizeText(from image: UIImage?) async {
        guard let image else {
            alertItem = AlertContext.imageNotFound
            return
        }

        alertItem = nil

        guard let cgImage = image.cgImage else {
            alertItem = AlertContext.imageNotFound
            isLoading = false
            return
        }

        let timerTask = Task { await startTimer() }

        do {
            isLoading = true
            defer {
                isLoading = false
                timerTask.cancel()
            }

            let recognizer = TextRecognizer()
            recognizedText = try await recognizer.text(from: cgImage)
        } catch {
            logger.error("\(error)")
            alertItem = error
        }
    }
}

actor TextRecognizer {
    private let queue = DispatchSerialQueue(label: Bundle.main.bundleIdentifier! + ".TextRecognizer", qos: .utility)
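    // Signpost intervals make each recognition pass visible in the
    // "Points of Interest" track in Instruments.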
    let poi = OSSignposter(subsystem: "TextRecognizer", category: .pointsOfInterest)

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        queue.asUnownedSerialExecutor()
    }

    func text(from cgImage: CGImage) async throws -> String {
        let state = poi.beginInterval(#function, id: poi.makeSignpostID())
        defer { poi.endInterval(#function, state) }

        return try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<String, Error>) in
            do {
                let request = VNRecognizeTextRequest { request, error in
                    guard
                        let observations = request.results as? [VNRecognizedTextObservation],
                        error == nil
                    else {
                        continuation.resume(throwing: error ?? AlertContext.invalidOCR)
                        return
                    }

                    let text = observations
                        .compactMap { $0.topCandidates(1).first?.string }
                        .joined(separator: "\n")

                    guard !text.isEmpty else {
                        continuation.resume(throwing: AlertContext.noRecognizedText)
                        return
                    }

                    continuation.resume(returning: text)
                }
                request.recognitionLevel = .accurate

                let requestHandler = VNImageRequestHandler(cgImage: cgImage)

                // to confirm that this is running on the correct queue, you could add a precondition:
                //
                // dispatchPrecondition(condition: .onQueue(queue))

                try requestHandler.perform([request])
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}

enum AlertContext: LocalizedError {
    case invalidOCR
    case noRecognizedText
    case imageNotFound

    var errorDescription: String? {
        return switch self {
            case .invalidOCR:       String(localized: "Problem recognizing")
            case .noRecognizedText: String(localized: "No recognized text")
            case .imageNotFound:    String(localized: "Image not found")
        }
    }
}

extension Duration {
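    // Duration stores (seconds, attoseconds) components; combine them into
    // a fractional number of seconds for display.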
    var seconds: Double {
        let (seconds, attoseconds) = components
        return Double(seconds) + Double(attoseconds) / 1e18
    }
}

Note that I retired all of the DispatchQueue.main.async {…} calls by using MainActor isolation, above.
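A minimal sketch of that substitution (the type name is hypothetical, with Task.sleep standing in for the real recognition work):

import SwiftUI
import Observation

@MainActor
@Observable
class LoadingModel {
    var isLoading = false

    func load() async {
        isLoading = true                          // already on the main actor,
        defer { isLoading = false }               // so no DispatchQueue.main.async needed
        try? await Task.sleep(for: .seconds(1))   // placeholder for the real work
    }
}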

But setting that aside, I profiled this in Instruments using the "Time Profiler" template, with the "Hangs" instrument configured to report "Include All Potential Interaction Delays":

[Instruments screenshot: microhangs]

When I profiled the app, we got a clean bill of health in the "Hangs" timeline:

[Instruments screenshot: no hangs]

Bottom line, testing a "release" build on a physical device (admittedly, a top-of-the-line iPhone) with .utility QoS, I saw no disruptions in the UI at all. Now, a less capable device might exhibit momentary glitches, but this was with the best of devices.
