Article
Peer-Review Record

Applying Control Abstraction to the Design of Human–Agent Teams

by Clifford D. Johnson, Michael E. Miller *, Christina F. Rusnock and David R. Jacques
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 25 February 2020 / Revised: 10 April 2020 / Accepted: 10 April 2020 / Published: 12 April 2020
(This article belongs to the Special Issue Human Factors in Systems Engineering)

Round 1

Reviewer 1 Report

General comments

In this manuscript the authors present a logical organization of automation levels. The concepts explained here are mostly applied to unmanned vehicles, but may also serve other types of automatable devices. This work is an extension of a previously presented paper. In addition to reviewing this document, I went over the first one and verified the connection between the two works. I can state that both documents are coherent and adequately complement each other.

The background review, along with the references presented, is properly addressed. The whole document is very well organized, and reading it was easy since the presentation holds the reader’s interest. I liked it.

Minor details

In the Abstract, I think there is an “s” missing:

“…reduced, the level of required human attention and required cognitive resources decreases.” (line 20)

I like the way Table 3 works. I would change only minor details: 1. use no more than three dots (“the operator ….”); 2. add three dots before the start of each Description of Interaction (“…provides desired actions, the system”), plus a little additional interline space after each paragraph. This will noticeably improve the table’s readability.

Make sure that “DKI” in Table 4 and line 343 is correct. It may be “DJI” Phantom (instead of DKI).

Final comments:

The manuscript is well written and excellently organized. The language is flawless, and the defence of their arguments is effectively presented through excellently selected examples. I recommend it for publication in Systems.

Author Response

In the Abstract, I think there is an “s” missing: “…reduced, the level of required human attention and required cognitive resources decreases.” (line 20)

We thank the reviewer for catching this error.

I like the way Table 3 works. I would change only minor details: 1. use no more than three dots (“the operator ….”); 2. add three dots before the start of each Description of Interaction (“…provides desired actions, the system”), plus a little additional interline space after each paragraph. This will noticeably improve the table’s readability.

We agree with the reviewer. Additionally, we added 0.03” of space at the bottom of each cell in the table to create a little more separation and further improve readability.

Make sure that in Table 4 and line 343 DKI is correct. It may be DJI phantom (instead of DKI)

The reviewer is correct and we have corrected this error.

Final comments:

The manuscript is well written and excellently organized. The language is flawless, and the defence of their arguments is effectively presented through excellently selected examples. I recommend it for publication in Systems.

We thank the reviewer very much for their supportive comments.

Reviewer 2 Report

Applying Control Abstraction to the Design of Human–Agent Teams is an interesting and very useful area of research. Many believe, incorrectly, that machine-intelligent robots will take over every aspect of our world without any human input. The article correctly identifies these misconceptions; however, there are two key areas that must be significantly improved. These are:

  1. The entire framework and analysis must have risk analysis as an overall system control on every decision created within, or with, this framework. This is missing. It must be added, and the framework must be fundamentally redesigned accordingly.
  2. Without quantification and calibration of data, such a framework cannot be validated in an individual case. If it cannot be validated, then it cannot be used for controlling decisions in a safe manner (similar to the risk-analysis gap noted above). This means that any decision can be accepted! This is surely incorrect!

I sincerely hope that the authors will carry on their research in the above direction. Unfortunately, I cannot support this article for publication at this stage due to the issues explained above.

Author Response

We acknowledge the reviewer’s request to include an analysis of risk within this framework. We note, however, that risk has not been addressed in the frameworks which we have provided as background, and therefore we do not agree with the reviewer that risk analysis is required to evaluate frameworks for human–machine teaming. Further, many problems in systems engineering are ill-defined and present measurement challenges. In many of these contexts, the literature clearly supports the use of qualitative measures in addition to, or in place of, quantitative measures. Therefore, we do not agree that quantitative measures are necessary to validate this framework. We have attempted to provide arguments in support of this framework which are consistent with those found in the background literature, and we will continue to seek additional qualitative and quantitative methods to evaluate this framework as we progress this research.

We do, however, thank the reviewer for their comment, as we agree that risk and risk analysis are very important concepts in human–machine teaming and deserve to be addressed within the current manuscript. Therefore, we have edited the manuscript to discuss risk where appropriate. Specifically, we have added a description of risk, or likelihood of failure, to the description of each LHCA (see Sections 2.2.1 through 2.2.5), as well as within the discussion. We note, however, that there is not a monotonic relationship between risk and LHCA as provided within this manuscript. For some systems, particularly those which lack stability, such as flying-wing designs or rotorcraft, the control inputs required for safe flight often exceed human capacity. Therefore, in these systems risk is commonly reduced by allocating control at the innermost control loop to the automation. In systems where the automation is not fully reliable and well-trained, alert, and capable operators are available, risk is likely reduced by allocating higher levels of control to the human operator. However, under conditions in which the operators lack training, have degraded alertness, or are task saturated, risk may be reduced by allocating higher levels of control to the automation, even when this automation is not fully reliable. Therefore, risk analysis requires a time-varying assessment of the relative capacity of the human and machine to perform control at each level within this framework.

Round 2

Reviewer 2 Report

Unfortunately, I still strongly believe that, since you are dealing with humans interacting with machines, quantifiable risk analysis must be a core principle and process in the fundamental architecture/framework, as well as in customized applications. Therefore, I suggest redesigning the fundamental architecture/framework to include risk analysis as an integral part.

Author Response

We understand that the reviewer believes that quantifiable risk analysis should be a core principle in the proposed framework. Once again, the existing frameworks within this space have not addressed risk, and therefore we are unsure of the basis for the reviewer’s recommendation. Further, quantitative risk analysis has not been applied extensively in human–machine interaction. It is true that reliability analysis has been augmented with human reliability analysis to provide estimates of reliability in systems which include human–machine interaction. We accept that it might be possible to extend this work to attempt to quantify the risk of human error within these systems. However, this technique is not commonly applied in human–machine interaction systems, and risk in any application is likely to be highly dependent upon the human’s level of expertise, alertness, mental workload, and level of distraction, all of which are difficult to analyze quantitatively. Similar statements can be made about the reliability, and therefore the implied risk, of the automation as a function of the situation. As noted in the earlier revision, we have revised our description of the framework to include a discussion of risk in response to the reviewer’s comments.
