Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector

Soyed Tuhin Ahmed, Mehdi Tahoori · May 29, 2024

Summary

The paper presents a Bayesian test vector generation framework for memristive deep neural networks (MNNs) on memory-centric hardware, targeting non-idealities such as device defects and variations. The method estimates model uncertainty without hardware changes, model modifications, or extensive retraining, which makes it attractive for safety-critical applications. It achieves 100% coverage across model dimensions, tasks, and fault scenarios, including bit- and level-flip faults and manufacturing variations. The framework combines Bayesian inference, Monte Carlo sampling, and gradient-based optimization to estimate uncertainty while preserving accuracy under non-ideal conditions. Across a range of models and tasks, the study reports high coverage with low latency, energy consumption, and storage overhead, outperforming existing methods in resource efficiency and adaptability and supporting reliable operation on memristive hardware.


Paper digest

What problem does the paper attempt to solve? Is this a new problem?

The paper addresses the problem of estimating the uncertainty of memristive deep neural networks using a single Bayesian test vector. Uncertainty estimation for neural networks is not a new research topic in itself, but the specific focus on memristive hardware and the idea of performing the estimation with one Bayesian test vector constitute a novel approach to the problem.


What scientific hypothesis does this paper seek to validate?

The paper seeks to validate the hypothesis that the uncertainty of memristive deep neural networks can be estimated with a single Bayesian test vector. The focus is on memristive model uncertainty, a particular kind of model uncertainty introduced by the non-idealities of memristive devices. The work explores how to estimate and account for this uncertainty accurately in order to improve the reliability and robustness of neural network accelerators in real-world applications.


What new ideas, methods, or models does the paper propose? What are the characteristics and advantages compared to previous methods?

The paper "Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector" proposes several innovative ideas, methods, and models related to uncertainty estimation in deep neural networks . Here are the key points:

  1. Uncertainty Estimation Method: The paper introduces a method to estimate uncertainty in deep neural networks with high coverage percentages across various learning paradigms, including classification, semantic segmentation, and generative methods . This method emphasizes the applicability of uncertainty estimation in post-manufacturing and online operations of Computational In-Memory (CIM) systems .

  2. Coverage and Accuracy Degradation: The proposed method can achieve 100% coverage for uncertainty estimates, even with 1-2% accuracy degradation . It also addresses the impact of accuracy degradation on false-negative uncertainty estimates, highlighting the trade-offs in accuracy and coverage .

  3. Layer-wise Uncertainty Estimation: The paper evaluates the uncertainty estimation approach concerning the layers affected by memristive non-idealities in neural networks . It demonstrates that uncertainty estimation coverage can reach 100% when a certain percentage of layers are affected by variations or faults .

  4. Threshold Value Analysis: The study analyzes the impact of the threshold value on coverage, emphasizing the importance of choosing the right threshold to achieve high uncertainty estimation coverage and reduce the risk of false-positive or negative uncertainty estimates .

  5. Resolution of Uncertainty Estimation: The paper explores the resolution of uncertainty estimation by conducting experiments with lower noise scales and fault rates, demonstrating the method's ability to estimate uncertainty effectively even under challenging conditions .

In summary, the paper presents a comprehensive approach to estimating uncertainty in deep neural networks, addressing various factors such as coverage, accuracy degradation, layer-wise estimation, threshold value selection, and resolution under different noise conditions and fault rates. The proposed method for estimating uncertainty in memristive deep neural networks offers several key characteristics and advantages compared to previous methods, as detailed in the paper "Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector" .

  1. Single Bayesian Test Vector Framework: The method introduces a single Bayesian test vector generation framework, optimized so that a fault- and variation-free memristive NN produces a low-uncertainty output. The test requires only one sample from the test vector's distribution and one forward pass, which substantially reduces latency and energy consumption compared to methods that rely on many test vectors (a minimal sketch of this test flow appears after this list).

  2. High Coverage and Robustness: The approach consistently achieves 100% coverage for uncertainty estimates across a range of fault rates and variations, exceeding the coverage reported for related methods and ensuring reliable uncertainty estimation in diverse conditions.

  3. Layer-wise Uncertainty Estimation: The method is evaluated when only a subset of NN layers is affected by memristive non-idealities. Coverage reaches 100% once a certain fraction of layers is impacted, even when the number of affected layers is small, so uncertainty caused by faults or variations that degrade accuracy can still be detected.

  4. Threshold Value Analysis: The analysis of the threshold value shows that choosing it appropriately is essential to achieve high coverage while keeping the risk of false-positive or false-negative uncertainty estimates low.

  5. Comparison with Related Works: Compared against methods that use point-estimate test vectors or Bayesian-optimized test vectors, the single Bayesian test vector achieves better coverage and efficiency. It reaches superior uncertainty estimation metrics with fewer resources, which translates into lower latency and energy consumption.

In summary, the proposed method stands out for its efficiency in uncertainty estimation, high coverage rates, robustness across fault and variation scenarios, layer-wise evaluation, optimal threshold value selection, and superior performance compared to existing methods, making it a promising approach for estimating uncertainty in memristive deep neural networks.
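
The single-sample test flow described in item 1 above can be pictured with the following minimal sketch. It assumes the uncertainty score is a relative deviation of the observed output from a reference recorded on a fault-free device and that the test vector is Gaussian; the names (mu, sigma, reference_output, tau) are illustrative, not taken from the paper.

```python
import torch

@torch.no_grad()
def one_shot_uncertainty_check(model, mu, sigma, reference_output, tau):
    """Illustrative single-pass check, not the paper's exact procedure.

    mu, sigma        -- parameters of the learned Bayesian test vector
                        (hypothetical names; a Gaussian form is assumed)
    reference_output -- output recorded on the known-good (fault-free) model
    tau              -- decision threshold on the deviation score
    """
    # One sample from the test-vector distribution, one forward pass.
    x = mu + sigma * torch.randn_like(mu)
    y = model(x.unsqueeze(0))

    # Relative deviation from the fault-free reference serves as the score.
    score = torch.norm(y - reference_output) / torch.norm(reference_output)
    return score.item() > tau          # True -> flag the chip as uncertain
```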


Does any related research exist? Who are the noteworthy researchers in this field? What is the key to the solution presented in the paper?

Several related research studies exist in the field of estimating the uncertainty of memristive deep neural networks. Noteworthy researchers in this area include L.-H. Tsai et al., G. Huang et al., O. Ronneberger et al., M. Buda et al., J. Long et al., T.-Y. Lin et al., A. Radford et al., N. Rostamzadeh et al., D. P. Kingma and M. Welling, and S. T. Ahmed and M. B. Tahoori.

The key to the solution is a Bayesian test vector generation framework that estimates the model uncertainty of conventional neural networks implemented on a memristor-based CIM hardware accelerator. The method requires no changes to common CIM architectures, generalizes across model dimensions, needs no access to training data, makes minimal changes to pre-trained model parameters, stores only one test vector in hardware, and achieves high uncertainty estimation coverage across a wide range of scenarios.
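
As an illustration of how such a test vector could be obtained, the sketch below optimizes the mean and standard deviation of a Gaussian test vector with the reparameterization trick so that a fault-free model produces a low-variance output. The surrogate objective, hyperparameters, and function name are assumptions for illustration; the paper's actual optimization target may differ. For a CIFAR-10-scale model, input_shape would be (3, 32, 32).

```python
import torch

def optimize_bayesian_test_vector(model, input_shape, steps=500, lr=1e-2, n_mc=8):
    """Hypothetical sketch: learn the mean and (log) std of a Gaussian test
    vector so that the fault-free model yields a low-uncertainty output.
    The surrogate loss below is an assumption, not the paper's objective."""
    mu = torch.zeros(input_shape, requires_grad=True)
    log_sigma = torch.full(input_shape, -2.0, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sigma], lr=lr)

    model.eval()
    for p in model.parameters():          # only the test vector is trained
        p.requires_grad_(False)

    for _ in range(steps):
        opt.zero_grad()
        # Reparameterization trick: differentiable samples of the test vector.
        eps = torch.randn((n_mc,) + tuple(input_shape))
        x = mu + log_sigma.exp() * eps
        out = model(x)
        # Surrogate objective: low output variance across the MC samples.
        loss = out.var(dim=0).mean()
        loss.backward()
        opt.step()
    return mu.detach(), log_sigma.exp().detach()
```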


How were the experiments in the paper designed?

The experiments evaluate the uncertainty estimation coverage of memristive deep neural networks under various fault rates, noise scales, and variations. Faults and variations are injected into different models and datasets to assess how accurately the proposed method estimates uncertainty. The evaluation spans classification, semantic segmentation, and generative tasks, demonstrating the method's applicability to post-manufacturing testing and online operation of computational in-memory systems. Bit- and level-flip faults are studied specifically, with 100% coverage achieved consistently across fault rates and variations. The experiments also analyze the impact of the threshold value on coverage to keep coverage high while minimizing false-positive and false-negative uncertainty estimates.
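
To make the fault-injection setup concrete, the sketch below perturbs a PyTorch model's weights with multiplicative Gaussian variation and a small fraction of stuck/flipped values. This is only a crude software stand-in; the paper models memristive non-idealities (conductance variation, bit- and level-flip faults) in more detail, and the function and parameter names here are illustrative.

```python
import torch

@torch.no_grad()
def inject_nonidealities(model, noise_scale=0.1, fault_rate=0.01):
    """Crude software stand-in for memristive non-idealities (illustrative only):
    multiplicative Gaussian conductance variation plus random faults that force
    a fraction of weights to an extreme level."""
    for p in model.parameters():
        # Device-to-device variation modeled as multiplicative Gaussian noise.
        p.mul_(1.0 + noise_scale * torch.randn_like(p))
        # Stuck/flipped cells: a random subset of weights jumps to +/- max level.
        mask = torch.rand_like(p) < fault_rate
        p[mask] = p.abs().max() * torch.sign(torch.randn_like(p))[mask]
    return model
```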


What is the dataset used for quantitative evaluation? Is the code open source?

The dataset used for quantitative evaluation is CIFAR-10. The PyTorch CIFAR models used in the study are open source and available on GitHub at https://github.com/chenyaofo/pytorch-cifar-models.
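
For reference, the pretrained CIFAR-10 models in that repository can typically be loaded through torch.hub, as sketched below. The entry-point name and the normalization statistics are assumptions based on common usage of that repository and standard CIFAR-10 preprocessing; check the repository's README for the exact values.

```python
import torch
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10

# Entry-point name assumed from the repository's naming convention.
model = torch.hub.load("chenyaofo/pytorch-cifar-models",
                       "cifar10_resnet20", pretrained=True)
model.eval()

# Standard CIFAR-10 normalization (verify against the repo's training setup).
transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
test_loader = DataLoader(
    CIFAR10(root="./data", train=False, download=True, transform=transform),
    batch_size=256,
)
```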


Do the experiments and results in the paper provide good support for the scientific hypotheses that need to be verified? Please analyze.

The experiments and results provide strong support for the scientific hypotheses. The study evaluates the approach extensively, covering the impact of memristive non-idealities on individual NN layers as well as a range of fault rates and noise scales. The results show that the approach achieves 100% coverage for uncertainty estimates even with only 1-2% accuracy degradation, highlighting its robustness. The paper also compares the proposed approach with existing methods on several performance metrics, reporting high coverage with minimal overhead. Together, these analyses indicate that the experiments validate the hypotheses and advance uncertainty estimation for memristive deep neural networks.
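
As a rough reading of the coverage figures quoted throughout the digest, coverage can be interpreted as the fraction of fault-injected model instances (those with a noticeable accuracy drop) whose uncertainty score exceeds the decision threshold, with false positives measured on fault-free instances. The helper below is an illustrative sketch of that interpretation, not the paper's exact definition.

```python
def coverage_and_fpr(scores_faulty, scores_clean, tau):
    """Illustrative metric: coverage on faulty instances and
    false-positive rate on fault-free instances, at threshold tau."""
    detected = sum(s > tau for s in scores_faulty)
    false_pos = sum(s > tau for s in scores_clean)
    return detected / len(scores_faulty), false_pos / len(scores_clean)

# Example usage: cov, fpr = coverage_and_fpr(faulty_scores, clean_scores, tau=0.05)
```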


What are the contributions of this paper?

The paper makes several key contributions:

  • It proposes a single Bayesian test vector generation framework for estimating the uncertainty of memristive NNs, optimized so that fault- and variation-free memristive NNs produce a low-uncertainty output.
  • The method estimates uncertainty with 100% coverage even when accuracy degrades by only 1-2%, showing its robustness.
  • The approach is evaluated across diverse learning paradigms, including classification, semantic segmentation, and generative methods, and achieves high coverage under various noise conditions.
  • The paper reports a detailed evaluation of uncertainty estimation coverage for different neural network models and datasets under varying noise strengths, consistently reaching 100% coverage.
  • It also examines layer-wise uncertainty estimation, the impact of the threshold value on coverage, and the resolution of the method at small noise scales and fault rates.
  • The Bayesian test vector requires minimal storage overhead, with a trade-off between the number of MC samples, the number of neurons in the penultimate layer, and storage, making the method memory-efficient (an illustrative size calculation follows this list).
  • Finally, the paper evaluates coverage when faults and variations are injected into a random subset of layers, showing that uncertainty due to bit- and level-flip faults is detected with 100% coverage.
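
To give a feel for the storage claim above, the back-of-the-envelope calculation below compares a single CIFAR-10-shaped Gaussian test vector (mean plus standard deviation) with ten point-estimate test vectors. The sizes are illustrative assumptions, not figures from the paper, and ignore any stored reference outputs.

```python
# Back-of-the-envelope storage comparison (illustrative numbers, not the paper's).
input_elems = 3 * 32 * 32                 # one CIFAR-10-shaped test vector
bytes_per_float = 4                       # float32

one_bayesian_vector = 2 * input_elems * bytes_per_float   # mean + std
ten_point_vectors = 10 * input_elems * bytes_per_float

print(f"one Bayesian test vector  : {one_bayesian_vector / 1024:.1f} KiB")  # 24.0 KiB
print(f"ten point-estimate vectors: {ten_point_vectors / 1024:.1f} KiB")    # 120.0 KiB
```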

What work can be continued in depth?

To delve deeper into the work presented, further exploration can be conducted in the following areas:

  • Verifiable Uncertainty Estimates: Enhancing confidence in the uncertainty estimates, for example by modifying the coverage equations to use inference accuracy after fault or variation injection relative to baseline accuracy.
  • Estimating Uncertainty of Bit- and Level-Flip Faults: Extending the evaluation of uncertainty caused by bit- and level-flip faults, especially for models that are particularly susceptible to them, to ensure robustness and reliability.
  • Uncertainty under Subtle Accuracy Degradation: Investigating uncertainty estimation when accuracy degrades only slightly, where uncertainty is hardest to detect, to improve detection methods and reliability.
  • Comprehensive Evaluation of Uncertainty Estimation: Assessing uncertainty estimation coverage across more neural network models, noise strengths, and fault rates to confirm consistent and reliable coverage in diverse conditions.


Outline

  • Introduction
    • Background
      • Emergence of memristive hardware for DNNs
      • Challenges with device defects and variations
    • Objective
      • Develop a framework for reliable operation of MNNs
      • Address non-idealities without hardware changes or extensive training
      • Aim for 100% coverage and safety-critical applications
  • Method
    • Data Collection and Estimation
      • Bayesian Inference: integration of prior knowledge and observations
      • Monte Carlo Sampling: simulating fault scenarios for uncertainty estimation
    • Optimization Techniques
      • Gradient-Based Optimization: minimizing impact on model accuracy under non-ideal conditions
    • Test Vector Generation
      • Bit- and Level-Flip Faults: handling various fault types in memristive devices
      • Manufacturing Variations: accounting for inconsistencies in hardware
    • Performance Metrics
      • Coverage across dimensions, tasks, and fault scenarios
      • Focus on latency, energy consumption, and storage efficiency
  • Evaluation
    • Model and Task Selection: wide range of models and tasks for demonstration
    • Comparison with Existing Methods: outperformance in resource efficiency and adaptability
  • Applications and Benefits
    • Ensuring reliable operation in memristive hardware
    • Suitability for safety-critical applications
  • Conclusion
    • Summary of key contributions and implications for future research