STAR: SocioTechnical Approach to Red Teaming Language Models

Laura Weidinger, John Mellor, Bernat Guillen Pegueroles, Nahema Marchal, Ravin Kumar, Kristian Lum, Canfer Akbulut, Mark Diaz, Stevie Bergman, Mikel Rodriguez, Verena Rieser, William Isaac · June 17, 2024

Summary

The paper presents the SocioTechnical Approach to Red Teaming (STAR), a framework designed to enhance AI safety assessments, particularly for large language models. STAR improves red teaming by providing parameterized instructions for testers, ensuring comprehensive coverage of risks, and addressing demographic biases. It introduces an arbitration step that leverages diverse viewpoints to increase label reliability. The study focuses on challenges such as uneven attack coverage, signal quality, and ensuring diverse perspectives. STAR's content-agnostic approach targets hate speech, stereotypes, and demographic-specific harms, using sociotechnical methods involving experts and diverse teams. The research involves collecting and analyzing 8,360 dialogues, comparing different red teaming datasets, and examining model performance along various dimensions. The study highlights the benefits of STAR's controlled exploration and balanced approach, and the need for broader coverage in AI safety assessments.

Paper digest

What problem does the paper attempt to solve? Is this a new problem?

The paper aims to address the challenges in red teaming the safety of large language models by introducing a sociotechnical framework called STAR. The framework enhances steerability by providing parameterized instructions for human red teamers, leading to improved coverage of the risk surface and more detailed insights into model failures without increased costs. It also improves signal quality by matching annotator demographics to the groups targeted by specific harms, resulting in more sensitive annotations. The paper treats this as a new problem in the sense that there is still no consensus on best practices in red teaming, which hinders the progress of AI safety research and makes it difficult for the public to assess AI safety.


What scientific hypothesis does this paper seek to validate?

This paper seeks to validate the hypothesis that the STAR framework improves safety red teaming of large language models in two ways: by enhancing steerability through parameterized instructions generated for human red teamers, and by improving signal quality through matching annotator demographics to the groups targeted by specific harms, which results in more sensitive annotations.


What new ideas, methods, or models does the paper propose? What are the characteristics and advantages compared to previous methods?

The paper "STAR: SocioTechnical Approach to Red Teaming Language Models" introduces a novel framework that enhances safety red teaming of large language models by improving steerability and signal quality. The framework makes two key contributions:

  1. Enhanced Steerability: STAR provides parameterized instructions for human red teamers, ensuring comprehensive coverage of the risk surface and detailed insights into model failures without increased costs.
  2. Improved Signal Quality: STAR matches annotator demographics to the groups targeted by specific harms, resulting in more sensitive annotations, and employs an arbitration step that leverages diverse viewpoints to improve label reliability.

The paper addresses these challenges in red teaming by integrating parameterized instructions with demographic matching and arbitration techniques. Together, these interventions enable comprehensive exploration of a model's risk surface while providing high-quality signals. The paper also introduces a principled process for generating instructions, which aids in creating reproducible, comparable red teaming datasets.
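As a rough illustration of how parameterized instructions might be generated, the Python sketch below enumerates combinations of a rule, a targeted demographic group, and a use case into instruction strings for red teamers. The parameter values and the template wording are hypothetical placeholders, not the paper's actual taxonomy.

```python
import itertools
import random

# Hypothetical parameter axes; the paper's actual rules, groups, and use
# cases may differ.
RULES = ["hate speech", "discriminatory stereotypes"]
DEMOGRAPHIC_GROUPS = ["Black women", "older LGBTQ+ people", "Latino men"]
USE_CASES = ["asking for career advice", "planning a family event"]

TEMPLATE = (
    "Try to get the model to produce {rule} targeting {group} "
    "while role-playing a user who is {use_case}."
)

def generate_instructions(seed: int = 0) -> list[str]:
    """Enumerate every rule x group x use-case combination as an instruction."""
    combos = itertools.product(RULES, DEMOGRAPHIC_GROUPS, USE_CASES)
    instructions = [
        TEMPLATE.format(rule=rule, group=group, use_case=use_case)
        for rule, group, use_case in combos
    ]
    random.Random(seed).shuffle(instructions)  # avoid ordering effects
    return instructions

if __name__ == "__main__":
    for line in generate_instructions()[:3]:
        print(line)
```

Enumerating the full cross-product, rather than letting red teamers choose attacks freely, is what provides procedural control over coverage; demographic matching would then assign each instruction to red teamers from the targeted group.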

Furthermore, the paper demonstrates that STAR can target specific risk areas effectively, yielding nuanced findings about model failure modes at no additional cost. By providing structured, parameterized instructions, STAR allows intentional control over the target area without producing more tightly clustered dialogues. This enables more nuanced coverage of failure modes, revealing insights into social marginalization and discriminatory stereotypes. Compared to previous red teaming methods for AI models, the framework offers several key characteristics and advantages:

  1. Enhanced Steerability: Parameterized instructions for human red teamers ensure comprehensive coverage of the risk surface and yield detailed insights into model failures without increased costs. This allows intentional control over the target area without producing more tightly clustered dialogues, enabling more nuanced coverage of failure modes.

  2. Improved Signal Quality: Matching annotator demographics to the groups targeted by specific harms results in more sensitive annotations, and an arbitration step leverages diverse viewpoints to improve label reliability. By prioritizing the insights of those most directly affected, STAR provides a more legitimate and authoritative assessment of model failures.

  3. Methodological Innovations: STAR introduces expert and demographic matching and an arbitration step that leverages annotator reasoning. These innovations offer two key advantages: better steerability for targeted risk exploration and higher-quality signals, improving the effectiveness and efficiency of red teaming.

  4. Reproducibility: STAR introduces a principled process for generating instructions, addressing the challenge of creating reproducible, comparable red teaming datasets. This allows red teaming efforts to be standardized and compared across studies, contributing to best practices for red teaming generative AI.

  5. Comprehensive Exploration: Parameterized instructions delineate the risk surface, enabling controlled exploration of the target area, reducing redundancies, and uncovering vulnerabilities that might otherwise be overlooked. The content-agnostic approach is adaptable to any target area, ensuring comprehensive coverage.

In summary, the STAR framework stands out for its enhanced steerability, improved signal quality, methodological innovations, reproducibility, and comprehensive exploration capabilities compared to previous red teaming methods for AI models.


Does any related research exist? Who are the noteworthy researchers on this topic in this field? What is the key to the solution mentioned in the paper?

Several related research papers and notable researchers in the field of red teaming language models have been identified in the provided context. Noteworthy researchers include Christopher M. Homan, Greg Serapio-Garcia, Lora Aroyo, Mark Diaz, Alicia Parrish, Vinodkumar Prabhakaran, Alex S. Taylor, Ding Wang, Iason Gabriel, William Isaac, and many others. These researchers have contributed to various aspects of understanding diverse perceptions of safety in conversational AI, international perceptions of harmful content online, and the development of language models.

The key to the solution mentioned in the paper is a two-step annotator → arbitrator pipeline that models the exchange of arguments found in normative annotation settings. Annotators provide reasoning alongside their judgments of whether the model violated a rule; in cases where annotators' ratings diverge significantly, an arbitrator reviews that reasoning and provides an additional rating and explanation. This yields clearer safety recommendations and a more comprehensive understanding of safety issues surfaced by red teaming.
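A minimal sketch of such a two-step annotator → arbitrator pipeline, assuming a 1-5 rating scale and a simple divergence threshold for triggering arbitration, might look like the following; the data structures, threshold, and aggregation rule are illustrative rather than taken from the paper.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Judgement:
    rating: int      # e.g. 1 = "definitely not broken" ... 5 = "definitely broken"
    reasoning: str   # free-text explanation supplied by the annotator

def needs_arbitration(judgements: list[Judgement], max_spread: int = 2) -> bool:
    """Flag a dialogue for arbitration when annotator ratings diverge too much."""
    ratings = [j.rating for j in judgements]
    return max(ratings) - min(ratings) >= max_spread

def final_label(judgements: list[Judgement], arbitrate) -> tuple[float, str]:
    """Return (score, provenance). `arbitrate` is a callable that reads the
    annotators' reasoning and returns an additional Judgement."""
    if needs_arbitration(judgements):
        extra = arbitrate(judgements)          # arbitrator weighs the arguments
        judgements = judgements + [extra]
        return mean(j.rating for j in judgements), "arbitrated"
    return mean(j.rating for j in judgements), "annotators-only"

# Example: a stub arbitrator that sides with the average view.
def stub_arbitrator(judgements: list[Judgement]) -> Judgement:
    avg = mean(j.rating for j in judgements)
    return Judgement(rating=round(avg), reasoning="Weighed both explanations.")

if __name__ == "__main__":
    annots = [Judgement(5, "Clear slur against the group."),
              Judgement(2, "Reads as a quote, not an endorsement.")]
    print(final_label(annots, stub_arbitrator))
```

In practice the arbitrator would be a human who reads the annotators' written reasoning, which is the point of collecting explanations rather than ratings alone.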


How were the experiments in the paper designed?

The experiments in the paper were designed around red teaming language models to assess their behavior and potential failures. The model was red teamed against uni- and two-dimensional demographic groups, revealing nuanced failure patterns without additional costs. By comparing nested models, the study found a statistically significant increase in model fit when race-gender interaction terms were incorporated, indicating that model behavior on intersectional groups is not simply the additive result of testing individual demographic labels independently. The experiments explored these complex interactions particularly in relation to socially marginalized intersectionalities, such as non-White women. Overall, the paper introduces a novel sociotechnical approach to red teaming that leverages procedural control together with expert and demographic matching to enhance the effectiveness and efficiency of red teaming for AI.
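The nested-model comparison could be carried out along the lines of the sketch below, which fits logistic regressions with and without race-gender interaction terms on a per-dialogue violation label and compares them with a likelihood-ratio test. The synthetic data and column names are placeholders, not the paper's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical annotation table: one row per dialogue.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "race": rng.choice(["white", "black", "asian"], size=500),
    "gender": rng.choice(["man", "woman"], size=500),
})
df["violation"] = rng.integers(0, 2, size=500)  # 1 = rule judged broken

# Nested models: main effects only vs. main effects + interaction.
base = smf.logit("violation ~ C(race) + C(gender)", data=df).fit(disp=0)
full = smf.logit("violation ~ C(race) * C(gender)", data=df).fit(disp=0)

# Likelihood-ratio test for the added race x gender interaction terms.
lr_stat = 2 * (full.llf - base.llf)
df_diff = int(full.df_model - base.df_model)
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR statistic = {lr_stat:.2f}, df = {df_diff}, p = {p_value:.3f}")
```

A significant improvement in fit for the interaction model is what would indicate non-additive behavior on intersectional groups.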


What is the dataset used for quantitative evaluation? Is the code open source?

The dataset used for quantitative evaluation is the STAR dataset, which consists of the conversations produced by red teamers during the STAR project. The code for the dataset is open source, as it was created as part of the STAR project outlined in the paper.


Do the experiments and results in the paper provide good support for the scientific hypotheses that need to be verified? Please analyze.

The experiments and results presented in the paper provide substantial support for the scientific hypotheses under investigation. The research introduces STAR, a novel sociotechnical framework that enhances steerability and signal quality in red teaming safety assessments of large language models. The framework makes two key contributions: it generates parameterized instructions for human red teamers, improving coverage of the risk surface, and it matches annotator demographics to the groups targeted by specific harms, resulting in more sensitive annotations.

The experiments demonstrate controlled exploration of the target area, showing that the STAR approach achieves broad coverage and low clustering compared to other red teaming methods. Visual inspection and analysis of clusters in the embedding space reveal thematic splits between the different red teaming approaches, with gender stereotypes emerging as a common theme in STAR dialogues. This indicates that the STAR method explores model behaviors and failures more comprehensively, supporting the hypothesis that it enhances steerability and signal quality in red teaming assessments.
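A coverage analysis of this kind can be approximated as in the sketch below: embed each dialogue, cluster the joint embedding space, and compare how evenly each dataset spreads across clusters. The encoder choice and the normalized-entropy dispersion measure are illustrative assumptions, not necessarily what the paper used.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.cluster import KMeans
from scipy.stats import entropy

def cluster_coverage(dialogues_by_dataset: dict[str, list[str]], k: int = 20):
    """Embed dialogues, cluster them jointly, and report each dataset's spread
    (normalized entropy over clusters; higher = broader coverage)."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice
    names, texts = [], []
    for name, dialogues in dialogues_by_dataset.items():
        names.extend([name] * len(dialogues))
        texts.extend(dialogues)
    embeddings = model.encode(texts, normalize_embeddings=True)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)

    report = {}
    for name in dialogues_by_dataset:
        mask = np.array(names) == name
        counts = np.bincount(labels[mask], minlength=k)
        report[name] = entropy(counts / counts.sum()) / np.log(k)
    return report

# Usage: cluster_coverage({"STAR": star_dialogues, "baseline": other_dialogues})
```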

Moreover, the results show that in-group annotators labeled rules as "definitely" or "probably" broken at a higher rate than out-group annotators for specific rules such as Hate Speech and Stereotypes. Statistical analysis using t-tests revealed significant differences in how in-group and out-group annotators assessed rule violations, further supporting the hypothesis that demographic matching improves annotation sensitivity.
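The in-group versus out-group comparison amounts to a two-sample t-test over annotators' judgments, roughly as sketched below; the column names and the binarization of the rating scale are assumptions made for illustration.

```python
import pandas as pd
from scipy import stats

def ingroup_vs_outgroup(annotations: pd.DataFrame, rule: str) -> tuple[float, float]:
    """Compare how often in-group vs. out-group annotators judge `rule` broken.

    Expects hypothetical columns: 'rule', 'in_group' (bool), and 'broken'
    (1 if rated "definitely" or "probably" broken, else 0).
    """
    subset = annotations[annotations["rule"] == rule]
    in_group = subset.loc[subset["in_group"], "broken"]
    out_group = subset.loc[~subset["in_group"], "broken"]
    t_stat, p_value = stats.ttest_ind(in_group, out_group, equal_var=False)
    return t_stat, p_value

# Usage: ingroup_vs_outgroup(star_annotations, rule="Hate speech")
```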

Overall, the experiments and results in the paper provide strong empirical evidence supporting the effectiveness of the STAR framework in red teaming language models, validating the scientific hypotheses related to steerability, signal quality, and demographic matching in assessing model behaviors and failures.


What are the contributions of this paper?

The paper makes several key contributions:

  • It examines intersectionality in conversational AI safety and how Bayesian multilevel models aid in understanding diverse perceptions of safety.
  • It delves into international perceptions of harmful content online.
  • It discusses the challenges in evaluating AI systems and presents the DICES dataset for diversity in conversational AI evaluation for safety.
  • It addresses the importance of living guidelines for generative AI and the need for scientists to oversee its use.
  • It explores red teaming for generative AI, aiming to reduce harms through various methods, scaling behaviors, and lessons learned.
  • It introduces a novel sociotechnical approach to red teaming language models, emphasizing comprehensive coverage of the risk surface and the need for diverse perspectives in red teaming efforts.
  • It provides insights into controlled exploration of the target area, granular signal on model failures, and nuanced failure patterns when red teaming the model against uni- and two-dimensional demographic groups.

What work can be continued in depth?

To further advance research on red teaming language models for social harms, several areas can be explored in depth based on the provided context:

  • Enhancing Steerability: Research can focus on developing strategies to ensure comprehensive coverage of the risk surface in AI red teaming efforts. This includes addressing unintentional skews in red teaming that may result from practical factors like attacker demographics or task design. Novel approaches can be explored to prevent redundant attack clusters and identify missed vulnerabilities or blind spots.
  • Improving Diversity and Representation: Further work is needed to involve diverse groups in red teaming efforts to encompass a wider range of perspectives and experiences. This can help mitigate disproportionate risks of harm when AI systems are deployed and ensure legitimate and reliable data points. Principled approaches should be developed to account for meaningful annotator disagreement and enhance the quality of assessments.
  • Exploring Parameterized Instructions: Research can delve into the effectiveness of generating parameterized instructions for human red teamers to improve coverage of the risk surface and gain detailed insights into model failures. By integrating demographic matching and arbitration techniques, researchers can enhance the quality of signals and provide more nuanced findings about model failure modes without incurring additional costs.

Outline

  • Background
    • Rise of large language models and AI safety concerns
    • Importance of red teaming in AI assessments
  • Objective
    • Present the SocioTechnical Approach to Red Teaming (STAR)
    • Enhance AI safety assessments, specifically for LLMs
    • Address challenges and biases in red teaming
  • SocioTechnical Approach to Red Teaming (STAR) Framework
    • Overview: parameterized instructions for testers; comprehensive risk coverage; arbitration for diverse viewpoints
    • Key components: content-agnostic parameters (hate speech, stereotypes, demographic-specific harms); expert and diverse teams (sociotechnical methods, involvement of experts); arbitration process (improving label reliability)
    • Challenges addressed: uneven attack coverage, signal quality, diverse perspectives
  • Methodology
    • Data collection: 8,360 dialogues analyzed; comparison of red teaming datasets
    • Data preprocessing: preparation of data for analysis; ensuring model input quality
    • Study execution: dialogue analysis (examination of model performance, identifying patterns and issues); performance metrics (evaluation on hate speech, stereotypes, and demographic harms); controlled exploration (STAR's impact on assessment outcomes); balanced approach (comparison with traditional red teaming methods)
  • Results and Findings
    • Benefits of STAR
    • Limitations and areas for improvement
    • Model performance enhancements
  • Conclusion
    • Summary of STAR's contributions to AI safety
    • Future directions and implications
    • Call for broader adoption in AI assessments

Basic info

Subjects: computation and language; human-computer interaction; computers and society; artificial intelligence