1. Introduction
In the realm of agriculture, the proliferation of unwanted weeds constitutes a well-acknowledged problem [1,2], even if the potential use and valorization of some weeds is increasingly under investigation [3]. Weeds flourishing alongside crop plants engender a state of interspecific competition for vital resources such as water, nutrients, space, and sunlight [4]. As a result, crop yields and, consequently, the financial viability of the farm can be significantly reduced [2]. To combat this issue, various weed management strategies have been devised, including both mechanical and chemical methods. While mechanical weed control is often viewed as more eco-friendly, it is generally less efficacious than chemical control [5,6]. However, the latter is often criticized for its detrimental impact on the environment, including harm to soil and water organisms as well as to human health [2]. To mitigate the disadvantages associated with uniform, whole-field weed control, site-specific weed management (SSWM) was developed in the 1990s, based on the understanding that weeds usually do not occur uniformly, but rather in locally varying intensities [7]. Weed control strategies that align with this spatial variability can be implemented to attenuate ecological damage and cut costs [5,8,9].
For the successful implementation of SSWM, it is necessary to obtain knowledge on the spatial distribution of weeds prior to the initiation of control measures. In this regard, the utilization of unmanned aerial vehicles (UAVs) provides an expedient solution due to their ability to generate high-resolution images in a relatively time-efficient manner [10]. These vehicles can be equipped with various camera systems, ranging from RGB to hyperspectral capturing systems. Due to their low flight altitude, spatial resolutions of well below 1 cm can be achieved. The high-resolution images procured through UAV flights enable various approaches to weed detection. For instance, weeds can be detected on a pixel-by-pixel basis, with each pixel being classified as either a weed or a non-weed. Additionally, object-based image analysis (OBIA) has gained popularity for weed mapping [11,12]. In this approach, the image is first segmented into semantic units, and based on these, features are extracted that provide information on the spectral and textural properties of the segments. By classifying entire segments instead of single pixels, the classification result is typically much more homogeneous and often of higher accuracy [11,12]. In addition to these classic image segmentation methods, recently evolved deep learning approaches offer versatile opportunities for weed detection that extend beyond the scope of this study [13,14]: object detection techniques have proven efficacious when the primary objective is not the precise delineation of weed contours, but rather the recognition of their mere presence. Instance segmentation complements conventional image segmentation, as conducted in this study, by additionally recognizing individual plants. However, this added capability comes at the cost of higher hardware, time, and training demands [14,15], rendering it impractical for large-scale weed mapping in extensive agricultural fields, as aimed for in this study.
Effective weed detection requires not only a methodology, but also a data basis aligned to the specific needs of a project. While increasing the spectral resolution can lead to a more accurate differentiation of species, studies examining the effect of spatial resolution have shown varying results, with some finding better results at comparatively higher resolutions [16,17] and others at comparatively lower resolutions [18,19,20]. As the aforementioned studies demonstrate, the relationship between spatial resolution and accuracy is not straightforward and depends on the object under investigation. However, to the best of our knowledge, only three articles exist to date that have systematically investigated spatial resolution in weed-detection scenarios, and they obtained contradictory results. Peña et al. [21] compared different resolutions for weed detection in sunflower fields using an object-based approach and found that the detection accuracy consistently decreased with diminishing resolution. López-Granados et al. [22] found a similar trend when detecting Johnsongrass in maize, observing the best results at the highest resolution. Sanders et al. [23] investigated the effect of different growth stages, plant densities, and flight altitudes on the spectral separability of soybeans and Palmer amaranth. The authors found complex relationships between these variables, but, in contrast to Peña et al. [21] and López-Granados et al. [22], they stated that the resolution had no impact on the spectral separability or the overall classification accuracy. The small number of studies, accompanied by conflicting results, leaves a research gap of high relevance for UAV-based weed mapping. Specifically for soybeans, the question arises as to what impact the spatial resolution has on the accuracy of weed mapping. The significance of this question is not least reflected in practice, as spatial resolution is directly connected to flight altitude and flight time, affecting the requisite work time, battery charging plans, and ultimately, agronomic productivity. Against this backdrop, relating attainable accuracies to time requirements becomes vital to optimize future UAV missions for efficiency.
In light of these considerations, this case study aims to further investigate the effect of spatial resolution on the accuracy of weed detection in a soybean field. For this purpose, resolutions corresponding to flight altitudes of 10, 20, 40, and 80 m were examined. Multinomial logistic regression (MLR) was performed to classify the resulting RGB images using both pixel- and object-based approaches. This study explored the extent to which the accuracies of pixel- and object-based classification differ from one another and across resolutions. Additionally, this study evaluated the so-far-disregarded aspect of how flight and processing times vary with resolution, and how they relate to the achieved accuracies.
2. Materials and Methods
2.1. Study Area
The soybean field investigated for this study was located in Belm, Lower Saxony, Germany (LAT: 52.3217, LON: 8.1584, reference system: WGS84) and was characterized by a generally high occurrence of weeds. However, due to the partial use of herbicides in the previous year, the weed occurrence varied greatly within the field. While certain rows exhibited a complete absence of weeds, other rows were heavily infested. Two sub-areas were selected for the study, as shown in Figure 1: one for collecting training data (768.6 m²) and the other for validating the classifications (981.6 m²), with an effort to ensure a comparable soybean and weed distribution for both areas while maintaining spatial independence as much as possible. The soybean cultivar Abelina was present in both areas (BBCH stage 14), and the most prevalent weed by far in the study area was white goosefoot (Chenopodium album L.). Moreover, there was a sparse occurrence of black nightshade (Solanum nigrum L.), chickweed (Stellaria media (L.) Vill.), and thistles (Sonchus arvensis L. and Cirsium arvense (L.) Scop.), as well as individual cockspur (Echinochloa crus-galli (L.) P.Beauv.) and grass shoots.
2.2. Image Acquisition and Preprocessing
RGB imagery of the study areas was taken on June 8, 2022, using a DJI Phantom 4 RTK drone connected to a D-RTK 2 mobile station. Images were captured at altitudes of 10, 20, 40, and 80 m, resulting in ground-sampling distances (GSD) of 0.27, 0.55, 1.10, and 2.19 cm, respectively. The flights were conducted around noon under partly cloudy weather conditions, with 80 percent longitudinal and 80 percent lateral overlap. The images were processed in Agisoft Metashape (alignment of the single images, generation of tie points, derivation of a digital elevation model, and generation of an orthomosaic). For the assessment of time efficiency, the flight and preprocessing times were recorded (see Section 2.7).
In order to eliminate potential distortions caused by positional inaccuracies when comparing the classification results (see Section 2.6), classifications were performed using resampled variants of the 10 m image rather than the original 20, 40, and 80 m images. Even with a high georeferencing accuracy, slight changes in leaf, shadow, and UAV positions and in lighting are inevitable. To exclude these confounding factors from the examination of spatial resolution effects, a resampling approach was preferred. Accordingly, the 10 m image was resampled to 20, 40, and 80 m images by pixel aggregation. In this process, the pixel values of the highest available resolution were averaged to form larger units. Thus, the pixel values of the simulated 20 m flight altitude corresponded to the mean of the four underlying pixels of the 10 m image. The pixel values of the simulated 40 and 80 m images were accordingly based on the mean values of 16 and 64 pixels, respectively, of the original 10 m image. In summary, the resampled 10 m orthomosaic was used for classification, while the flight and preprocessing times of the original imagery were used for the time assessment. An overview of the subsequent procedure, including the training, classification, and accuracy assessment, is provided in Figure 2.
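A minimal sketch of this pixel aggregation, assuming the 10 m orthomosaic is available as an (H, W, 3) array whose dimensions are multiples of the largest aggregation factor, could use scikit-image's downscale_local_mean, which averages non-overlapping pixel blocks as described above; the array name and size are placeholder assumptions, not the study's actual data.

```python
import numpy as np
from skimage.transform import downscale_local_mean

rgb_10m = np.random.rand(6400, 6400, 3)  # placeholder for the 10 m orthomosaic

# Each output pixel is the mean of a block of input pixels:
# 2 x 2 -> simulated 20 m, 4 x 4 -> 40 m, 8 x 8 -> 80 m.
resampled = {
    20: downscale_local_mean(rgb_10m, (2, 2, 1)),
    40: downscale_local_mean(rgb_10m, (4, 4, 1)),
    80: downscale_local_mean(rgb_10m, (8, 8, 1)),
}
```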
2.3. Generating Training Data
For the supervised classification process delineated in the following sections, 200 sample points were created for each of the three classes: soil, soy, and weeds. These were used to extract training pixels for each resolution level. Care was taken to account for the spectral variations within the classes, to avoid points that became mixed pixels at lower resolutions, and to avoid points located so close together that two points would fall within one segment in the object-based classification.
2.4. Pixel-Based Classification
In pixel-based classification, each pixel is assigned to one of the three classes, soil, soy, or weeds, based on its red, green, and blue reflectance values. To accomplish this, a multinomial logistic regression (MLR) model was employed. MLR is a special form of logistic regression in which the dependent variable is nominally scaled and can have more than two expressions. It is also known as polytomous logistic regression, softmax regression, multinomial logit (mlogit), and the maximum entropy (MaxEnt) classifier. The probability of a pixel belonging to a class $c$ is determined by the explanatory variables $\mathbf{x}$ and their coefficients $\boldsymbol{\beta}_c$ [24]:

$$P(Y = c \mid \mathbf{x}) = \frac{\exp(\boldsymbol{\beta}_c^{\top}\mathbf{x})}{\sum_{k=1}^{K} \exp(\boldsymbol{\beta}_k^{\top}\mathbf{x})},$$

where $K$ is the number of classes (here, $K = 3$).
Newton’s method was used to iteratively optimize the coefficients [25]. While MLR may yield a lower accuracy compared to other classifiers, such as support vector machines or artificial neural networks [26,27], it has the advantage of not requiring any hyperparameter tuning. Given that the primary focus of this study was to investigate the effect of spatial resolution on accuracy, a hyperparameter-free method such as MLR was preferred, as it eliminates the potential impact of inevitably suboptimally tuned parameters on the results. MLR was also preferred to other hyperparameter-free classification methods, such as maximum likelihood classification or linear discriminant analysis, because it makes no assumptions about the data structure (such as normal distribution, homoscedasticity, or the absence of multicollinearity), making it in theory more suitable for this specific application, since non-normally distributed and highly correlated band values are to be expected for RGB imagery.
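For illustration, the following is a minimal sketch of how such a pixel-based MLR classification can be set up with scikit-learn, which provides the multinomial formulation together with a Newton-type solver; the array names and values are illustrative assumptions, not the study's actual code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative RGB training pixels and labels (0 = soil, 1 = soy, 2 = weeds).
X_train = np.array([[120, 95, 80], [115, 90, 75],   # soil
                    [60, 110, 55], [65, 120, 60],   # soy
                    [70, 135, 65], [75, 140, 70]])  # weeds
y_train = np.array([0, 0, 1, 1, 2, 2])

# Multinomial logistic regression; 'newton-cg' optimizes the coefficients
# with a Newton-type method, matching the approach described above.
clf = LogisticRegression(multi_class="multinomial", solver="newton-cg")
clf.fit(X_train, y_train)

# Classify every pixel of an (H, W, 3) orthomosaic (placeholder data here).
image = np.random.randint(0, 256, size=(100, 100, 3))
labels = clf.predict(image.reshape(-1, 3)).reshape(100, 100)
```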
2.5. Object-Based Classification
In contrast to the pixel-based approach, object-based classification does not classify individual pixels, but segments composed of several pixels that ideally correspond to objects in the image (here: mainly plant leaves). Several features (spectral, textural, shape, and neighborhood) can subsequently be extracted from the segments. For the segmentation of the underlying images, the SLICO algorithm was chosen, which has proven effective in remote sensing applications [28,29,30]. SLICO is a zero-parameter variant of the simple linear iterative clustering (SLIC) algorithm. SLIC itself is an adapted k-means clustering method in which pixels are clustered by spectral similarity. In the SLIC method, however, the clusters are not generated globally over the entire image, but only locally within a certain spatial perimeter. Since each pixel does not have to be compared with each cluster center as in k-means, it is much less computationally intensive [31,32]. SLICO differs from SLIC in that no prior assumption about the compactness of the clusters needs to be made and superpixels are formed with a comparatively simple geometry [32].
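As a sketch, SLICO segmentation of this kind is available in scikit-image through the slic function's slic_zero option; the image array and target segment size below are placeholder assumptions rather than the study's actual settings.

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(1000, 1000, 3)  # placeholder RGB orthomosaic in [0, 1]

# Derive the number of superpixels from a desired average segment size
# (here, 10 x 10 pixels); slic_zero=True activates the SLICO variant.
avg_size = 10
n_segments = (image.shape[0] * image.shape[1]) // avg_size**2
segments = slic(image, n_segments=n_segments, slic_zero=True, start_label=0)
```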
Upon the completion of segmentation, feature extraction based on statistical metrics was executed. The mean, standard deviation, skewness, and kurtosis of the three spectral bands were extracted for each segment, resulting in a 12-dimensional feature space. Utilizing MLR, the study area was classified based on these 12 features into the classes of soil, soy, and weeds.
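A minimal sketch of this feature extraction, assuming the image and segments arrays from the segmentation sketch above, could look as follows; the function name is illustrative.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def segment_features(image, segments):
    """Return one 12-dimensional feature vector per segment: mean, standard
    deviation, skewness, and kurtosis of each of the three bands."""
    features = []
    for seg_id in np.unique(segments):
        pixels = image[segments == seg_id]  # all pixels of this segment, shape (n, 3)
        features.append(np.concatenate([
            pixels.mean(axis=0),
            pixels.std(axis=0),
            skew(pixels, axis=0),
            kurtosis(pixels, axis=0),
        ]))
    return np.asarray(features)
```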
Even though SLICO is often referred to as parameter-free, it requires the number of clusters to generate to be specified as part of the segmentation process. Since the number of segments (and thus, the average segment size) can have a significant impact on the classification result [32,33], 10-fold cross-validation was conducted on the training data to find the approximately optimal average segment sizes, testing average segment sizes of 5 × 5, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 pixels. Based on the cross-validation results, final segment sizes were chosen for each flight level.
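This selection could be sketched with scikit-learn's cross-validation utilities as follows; build_training_features() is a hypothetical helper, assumed to segment the training area at the given average segment size and return the per-segment features and labels.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def build_training_features(avg_size):
    """Hypothetical helper: segment the training area at the given average
    segment size (slic sketch above), extract the 12 features per segment
    (segment_features above), and return them with their class labels."""
    ...

candidate_sizes = [5, 10, 15, 20, 25]  # average segment edge lengths in pixels
for size in candidate_sizes:
    X_size, y_size = build_training_features(size)
    clf = LogisticRegression(multi_class="multinomial", solver="newton-cg")
    scores = cross_val_score(clf, X_size, y_size, cv=10)  # 10-fold CV accuracy
    print(f"{size} x {size} px: mean CV accuracy = {scores.mean():.3f}")
```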
2.6. Accuracy Assessment
To determine the classification accuracies for each flight level/resolution, a dataset of 1000 validation points was created. Although weed detection was the focus of the study, weeds themselves accounted for the smallest proportion of the study area, while soil accounted for over 50 percent of the area. Simple random sampling would, therefore, have led to a situation in which only a small proportion of the validation points would be allocated to weeds and the certainty regarding the weed detection accuracy would be limited. To mitigate this concern, but still allow for randomly distributed sampling, a staged approach to the distribution of validation points was chosen. The study area was first divided into soil and vegetation using the green leaf index (GLI), with a threshold value of 0.1 [34,35]:

$$\mathrm{GLI} = \frac{2G - R - B}{2G + R + B},$$

where $R$, $G$, and $B$ denote the red, green, and blue band values.
A total of 200 points were randomly assigned to soil areas (GLI ≤ 0.1), while the remaining 800 points were assigned to vegetated areas (GLI > 0.1). The resulting points were then labeled, yielding a total of 245 soil, 430 soy, and 325 weed points. From these validation points, the overall (OA), producer's (PA), and user's accuracies (UA) with 95 percent confidence intervals were calculated for the classification results. Because of the stratified nature of the sampling, the class proportions of the validation points were inconsistent with the true class proportions of the respective image. Therefore, OA, PA, and UA were area-adjusted according to Olofsson et al. [36]. Additionally, to facilitate the comparison of class accuracies, the F1 score for each class was calculated as the harmonic mean of PA and UA.
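A minimal sketch of the GLI thresholding and the stratified drawing of validation points, assuming image holds the RGB orthomosaic with band values scaled to [0, 1]; the random seed and the small epsilon are illustrative choices, not taken from the study.

```python
import numpy as np

R, G, B = image[..., 0], image[..., 1], image[..., 2]
gli = (2 * G - R - B) / (2 * G + R + B + 1e-12)  # epsilon avoids division by zero

# Stratified random sampling: 200 points on soil, 800 on vegetation.
rng = np.random.default_rng(42)
soil_idx = np.argwhere(gli <= 0.1)
veg_idx = np.argwhere(gli > 0.1)
soil_points = soil_idx[rng.choice(len(soil_idx), size=200, replace=False)]
veg_points = veg_idx[rng.choice(len(veg_idx), size=800, replace=False)]
```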
To further evaluate the differences in the classification results based on different resolutions, McNemar's test was used. While comparing the confidence intervals of the classification accuracies gives an insight into the statistical significance of differences in accuracy, McNemar's test assesses whether and to which degree the classification results themselves differ (based on related samples) [37,38,39]. In contrast to the standalone accuracy confidence intervals, McNemar's test compares two results in terms of commonly correctly, commonly incorrectly, and differently classified pixels. Since, with 1000 validation points, a relatively large dataset is available, McNemar's test was calculated based on a χ² distribution, with Edwards' continuity correction applied [40]. With this test, the significance of the difference between pixel- and object-based classifications per altitude and the differences between flight altitudes within the pixel- and object-based approaches were assessed.
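As a sketch, McNemar's test with continuity correction is available in statsmodels; the two indicator vectors below, marking whether each validation point was classified correctly by the respective method, are illustrative placeholders.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

correct_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])  # method A correct? (illustrative)
correct_b = np.array([1, 0, 0, 1, 1, 1, 0, 0])  # method B correct? (illustrative)

# 2 x 2 contingency table of jointly correct/incorrect classifications.
table = [[np.sum((correct_a == 1) & (correct_b == 1)),
          np.sum((correct_a == 1) & (correct_b == 0))],
         [np.sum((correct_a == 0) & (correct_b == 1)),
          np.sum((correct_a == 0) & (correct_b == 0))]]

# exact=False uses the chi-squared approximation; correction=True applies
# the continuity correction, as described above.
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```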
2.7. Time Comparison
In order to compare the time required for each classification, the flight time, the processing time in Agisoft Metashape and, if applicable, the segmentation time were measured. The time for collecting training points was disregarded, as it was constant for all procedures and subject to the user's individual speed, making it not reproducible. The time needed for training the classifier and classifying the image was also omitted, as it was negligibly low (well below 1 s on a modern computer). Since the flight and processing times strongly depend on individual circumstances (area covered, hardware) and are thus hard to compare, the times were normalized (the pixel-based classification of the 10 m image corresponds to 100 percent) and are therefore given as relative values.
2.8. Software
The preprocessing of the imagery (aligning the single images taken by the UAV into an orthomosaic for further analysis) was carried out using Agisoft Metashape 1.5.1. The selection and labeling of training samples were performed in ArcGIS Pro 2.8. The classification procedures were implemented in Python, utilizing the scikit-learn (version 1.0.2) and scikit-image (version 0.19.3) libraries. The computation of segment features for the object-based classification, including the mean, standard deviation, skewness, and kurtosis, was executed using the numpy (version 1.12.6) and scipy (version 1.8.0) libraries. For time measurements, the timeit module built into Python 3.10 was used. Processing times in Agisoft Metashape were obtained from the generated reports.
4. Discussion
4.1. General Assessment of Classification Results
Firstly, the results demonstrate that weed mapping utilizing RGB imagery is viable in the present scenario (Abelina soybeans with primarily white goosefoot weeds). The achieved accuracies, an OA of 0.79–0.93 for pixel-based classification and 0.75–0.87 for object-based classification, were within the typical range found in the literature, despite the comparatively simple design of the classification approaches. Grey et al. [41] attained accuracies of 80 percent for a similarly designed three-class classification into soil, soybeans, and weeds based on multispectral data and a maximum likelihood classification. Sanders et al. [23] achieved accuracies of up to 90 percent using multispectral UAV data and a maximum likelihood classification to distinguish weeds from soy. Sivakumar et al. [42] performed weed detection in a soybean field based on RGB data taken at a 20 m flight altitude (0.5 cm GSD), utilizing convolutional neural networks (CNNs) for object detection and achieving a mean intersection over union of up to 0.85. Xu et al. [43], using a CNN-based segmentation approach, attained up to 97.2 percent accuracy for weed segmentation in soybean fields. This range of accuracy is largely consistent with weed mapping in other types of crops, such as maize, where Gao et al. [44] achieved accuracies of up to 94.5 percent using a semi-automatic OBIA approach with random forest, and Peña et al. [45], also utilizing an OBIA approach on a multispectral basis, attained an overall accuracy of 86 percent.
Despite the generally satisfactory accuracy level attained by our efficiency-oriented, classical machine learning approach, studies have demonstrated that the accuracy could likely be further enhanced by deploying deep learning methodologies. Dos Santos Ferreira et al. [46] achieved superior accuracies (99 percent) by utilizing a CNN compared to classical machine learning algorithms such as random forest, support vector machines, and AdaBoost (94 to 97 percent) for weed classification within soybean fields. Slightly lower accuracies, yet consistent trends, were observed by Zhang et al. [47], who attained an OA of 96.88 percent in weed classification in pastures using a CNN, in contrast to an SVM accuracy of 89.4 percent. While the potential for higher accuracies exists, it is important to consider the increased resource consumption. Practical users must weigh the economic justifiability of this increase in accuracy against the limited efficiency concerning training, time, and hardware requirements [15].
4.2. Relationship between Spatial Resolution and Accuracy
The classification results highlight a distinct relationship between flight altitude and accuracy. For both pixel-based and object-based classification, the accuracy diminished significantly with a decreasing resolution. In particular, the F1 scores of the weeds demonstrated that the accuracy of object-based classification experienced a significant decline between the altitudes of 10 and 20 m. In contrast, the weed accuracy of the pixel-based classification dropped later between 20 and 40 m. Therefore, if comprehensive weed mapping that can also detect small weed patches is desired, flying at altitudes of 10 or 20 m and performing pixel-wise classification would be suitable. When only a broad overview of the study area is required, the flight can be conducted between 40 and 80 m. These lower resolutions are particularly appropriate when the weeds occur in extensive patches, as this ensures the availability of adequate training pixels and minimizes the weakness of not detecting small weed patches, which is less crucial in this context.
This work ascertains a relationship between resolution and accuracy, corresponding to the findings of Peña et al. [21] and López-Granados et al. [22]. Peña et al. [21] discovered a similar decline in accuracy while detecting weeds in sunflower fields with an OBIA approach. In that study, RGB images were captured at 40, 60, 80, and 100 m and at different growth stages. Peña et al. [21] discerned a greater difference in accuracy between 40 and 80 m compared to the findings of the present study. However, it has to be considered that, even though the same altitudes of 40 and 80 m were investigated in Peña et al. [21] and in this work, the corresponding ground sampling distances differed due to different camera systems (40 m: 1.10 cm (own) vs. 1.52 cm (Peña et al. [21]); 80 m: 2.19 cm (own) vs. 3.04 cm (Peña et al. [21])). On top of this, the differing classification methodologies and study environments impede a one-to-one comparison of the outcomes. López-Granados et al. [22] observed a similar trend when detecting Johnsongrass in maize, also using an OBIA approach. In that study, flights were carried out with an RGB camera at altitudes of 30, 60, and 100 m, corresponding to resolutions of 1.14 to 3.8 cm. For all the fields investigated, the accuracy dropped with diminishing resolution. In contrast to these two studies, Sanders et al. [23] did not recognize this relationship of decreasing accuracies with increasing altitudes when detecting Palmer amaranth in soybean fields. During the growth phase, images were collected at 15, 30, and 45 m. Considering the authors' statement that the GSD at 120 m is 8.2 cm, these flight altitudes should correspond to resolutions of 1.03, 2.05, and 3.08 cm, respectively. Soybean and Palmer amaranth were distinguished by a two-class maximum likelihood classification. Despite investigating resolutions similar to the aforementioned studies and to the present one, no discernible effect of spatial resolution on the classification accuracy was observed.
4.3. Relationship between Accuracy and Time
Since spatial resolution is directly linked with time requirements, significant amounts of time can be saved by reducing the resolution/increasing the altitude, at the cost of decreased accuracy. While the 10 m classification offered the highest accuracy, its advantage was not statistically significant, and it took almost four times longer than the 20 m classification. Hence, opting for 20 m presents a viable time-saving option. When comparing the curves of accuracy and time across altitudes, it is evident that, while the time required decreased exponentially with increasing altitude, the accuracy declined in a more or less linear fashion. This makes the higher flight altitudes particularly attractive when time is strongly limited. In concrete terms, although the 40 m pixel-based classification lags 10 percentage points behind the best classification at 10 m, it consumes less than 10 percent of the time for flight and processing.
4.4. Classificatory Challenges with Respect to Spatial Resolution
In this case study, the pixel-based approach was found to yield higher accuracies than the object-based approach throughout all the investigated altitudes. An analysis of the weeds' F1 scores revealed that even the 80 m pixel-based classification was superior to the 20 m object-based classification in this regard. Looking at Figure 6, the segmentation led to a coarser classification, which provided a less fragmented picture, but became increasingly blind to small weed patches. When comparing the pixel-based and object-based approaches, the UAs for soybeans and weeds tended to be higher than the PAs, suggesting an underestimation of these classes. This type of misclassification frequently transpires in mixed-pixel transition areas. Mixed-pixel-induced misclassifications also occurred between soy and weeds, but classic classifier failures were observed here as well, owing to the spectral similarity of both species' foliage. A specific problem of the object-based approach in this study, which might be decisive for the lower accuracies, was a limitation of the segmentation approach against the background of these spectral overlaps. This was particularly evident at higher altitudes with larger segments, resulting in superpixels that no longer represented a single class, but were composed of both soy and weeds. Accordingly, not only individual pixels, but whole segments were potentially misclassified. Similarly, sparse weeds such as grasses fell into soil segments. The lower the resolution, the more likely two classes are aggregated into one segment. Looking at the high-altitude segmentations makes this point obvious (Figure 4). The spatial resolution is so low that the mixed-pixel fraction is relatively large. By further consolidating areas through segmentation, a precise semantic assignment is no longer possible. However, it is surprising that the accuracy of the object-based method was already significantly lower than that of the pixel-based classification at 10 m, where the issue of ambiguous segments should only have a minor impact.
Looking at other studies, this outperformance of pixel-based over object-based classification appears less common. Gao and Mas [48] showed in a study on the accuracy of pixel- and object-based classification that, at high resolutions, the object-based method achieves better results than the pixel-based classification, and that this relationship reverses with decreasing resolution. In other studies comparing pixel- and object-based approaches, similar results were observed, with the latter regularly achieving superior classification results [49,50,51]. Despite this, some studies can be found in which a pixel-based method similarly outperformed an object-based one. Mattivi et al. [52], for instance, achieved higher accuracies using a pixel-based artificial neural network than with an OBIA approach for weed mapping in maize.
It is conceivable that the quality of the segmentation could be enhanced by the use of alternative segmentation algorithms. However, comparative studies have found that the contrast among competing methods such as simple linear iterative clustering (SLIC), superpixels extracted via energy-driven sampling (SEEDS), and linear spectral clustering (LSC) is relatively small [33]. Segment size could also be a factor in need of improvement. Although the most advantageous segment sizes according to cross-validation were selected, a qualitative examination of the segments revealed that, in some cases, quite large areas were aggregated (Figure 7). Possibly, better results could be achieved with smaller segment sizes oriented towards the actual objects, e.g., individual leaves for finer and single plants for coarser resolutions. Speculatively, the object-based approach could also be more effective using multispectral rather than RGB data because of the augmented spectral discriminability, causing less aggregation of soy and weeds within a segment.
4.5. Limitations
Even though the results of this study give important insights into the relationship between spatial resolution and accuracy for soybean weed mapping, we want to stress that UAV-based weed mapping is a complex endeavor that faces numerous sources of instability.
Firstly, this study covers a typical weed-infested soybean field with a certain weed spectrum. In other scenarios, however, different plant species, plant densities, and soil properties could be present, for which the spectral discriminability is less distinct. The accuracies can be expected to suffer if the classes cannot be clearly distinguished in the feature space. Secondly, this case study was carried out in June and at an early growth stage, as this is usually an appropriate time for applying weed control measures. No reliable statement can be made for other growth stages or seasons. Furthermore, the results could have been affected by deviating weather conditions or by the use of different equipment. For the stated reasons, the findings of this article should be interpreted as those of a case study rather than being universally applicable.
5. Conclusions
The influence of spatial resolution on classification accuracy strongly depends on the research object. With regard to weed mapping, there have been no consistent results on the influence of spatial resolution so far. To further investigate this topic, this study examined how the classification accuracy of weeds in a soybean field changes as a function of spatial resolution. For this purpose, the resolutions corresponding to flight altitudes of 10, 20, 40, and 80 m were investigated, using both a pixel-based and an object-based approach. A clear relationship was found in that the accuracy decreased with decreasing resolution. The best OA of 93 percent was achieved with the pixel-based classification approach at the spatial resolution corresponding to an altitude of 10 m. The greatest loss of accuracy in weed detection with respect to flight altitude was found for the pixel-based classification between 20 and 40 m, and for the object-based classification between 10 and 20 m. For all four resolutions studied, the pixel-based method outperformed the object-based approach, which is rather atypical compared to the findings of other studies.
While the accuracies declined approximately linearly with decreasing resolution, the required flight and processing times decreased exponentially. This study showed that flight and processing times could be considerably reduced without a statistically significant loss of accuracy. While the difference in the overall accuracy between the 10 and 20 m pixel-based classifications was not found to be statistically significant, the time savings of operating at 20 m compared to 10 m were about 75 percent in terms of flight time and about 66 percent in terms of processing time.
To further validate the findings made in this case study, further investigations with different soybean and weed species, plant densities, growth stages, weather conditions, and camera systems should be conducted.