IET Software
Publishing Collaboration
 Journal metrics
Acceptance rate: -
Submission to final decision: -
Acceptance to publication: -
CiteScore: 3.800
Journal Citation Indicator: 0.350
Impact Factor: 1.6

Submit your research today

IET Software is now an open access journal, and articles will be immediately available to read and reuse upon publication.

Read our author guidelines

 Journal profile

IET Software publishes original research and review articles on all aspects of the software lifecycle, including design, development, implementation and maintenance.

 Editor spotlight

Chief Editor Hana Chockler is a Lecturer in the Department of Informatics at King's College London and an academic member of the Software Modelling and Applied Logic group. Her research interests include formal verification of hardware and software and sanity checks for model checking.

 Abstracting and Indexing

This journal's articles appear in a wide range of abstracting and indexing databases, and are covered by numerous other services that aid discovery and access. Find out more about where and how the content of this journal is available.

Latest Articles

Research Article

An Empirical Study on Downstream Dependency Package Groups in Software Packaging Ecosystems

The role of focal packages in packaging ecosystems is crucial to the development of the entire ecosystem, as they are the packages on which other packages depend. However, the evolution of dependency groups in packaging ecosystems has not been systematically investigated. In this study, we examine the downstream dependency package groups (DDGs) in three typical packaging ecosystems, namely Cargo for Rust, the Comprehensive Perl Archive Network (CPAN) for Perl, and RubyGems for Ruby, to identify their features and evolution. We also identify and analyze a special type of DDG, the collaborative downstream dependency package group (CDDG), which requires shared contributors. Our findings show that the overall development of DDGs, and particularly of CDDGs, tracks the status of the whole ecosystem, and that the sizes of DDGs and CDDGs follow a power-law distribution. Furthermore, the interaction mechanisms between focal packages and downstream packages differ across ecosystems, but focal packages always play a leading role in the development of DDGs and CDDGs. Finally, we investigate models that predict the development of CDDGs in the next stage from their current features; our results show that random forest and gradient-boosted regression tree models achieve acceptable prediction accuracy. We provide the raw data and scripts used for our analysis at https://github.com/onion616/DDG.
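The power-law claim about group sizes corresponds to fitting an exponent to the observed size distribution. A minimal sketch using the standard continuous maximum-likelihood estimator follows; the estimator choice and the xmin cutoff are illustrative assumptions, not the authors' analysis pipeline.

```python
import math

def powerlaw_alpha_mle(sizes, xmin=1.0):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(x / xmin)) over all samples x >= xmin."""
    xs = [x for x in sizes if x >= xmin]
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)
```

On synthetic group sizes drawn from a power law with exponent 2.5, the estimator recovers a value close to 2.5; on real DDG/CDDG size data, the fitted exponent summarizes how heavily the distribution is dominated by small groups.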

Research Article

Exploiting DBSCAN and Combination Strategy to Prioritize the Test Suite in Regression Testing

Test case prioritization techniques improve the fault detection rate by adjusting the execution order of test cases. Existing static black-box test case prioritization methods generally improve the fault detection rate by increasing the early diversity of execution sequences based on string-distance differences. However, such methods have a high time overhead and are less stable. This paper proposes a novel test case prioritization method (DC-TCP) based on density-based spatial clustering of applications with noise (DBSCAN) and a combination strategy. The combination strategy models the test inputs to generate a mapping model, so that inputs are mapped to consistent types to improve generality. DBSCAN is then used to refine the classification of test cases further, and finally the Firefly search strategy is introduced to improve the effectiveness of sequence merging. Extensive experimental results demonstrate that the proposed DC-TCP method outperforms several existing static black-box sorting methods in terms of the average percentage of faults detected and also exhibits advantages in time efficiency.
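The cluster-then-diversify idea can be sketched in plain Python. This is an illustrative sketch only: the distance function, the eps/min_pts values, and the simple round-robin merge are assumptions, and DC-TCP's actual sequence merging uses a Firefly search strategy rather than this interleaving.

```python
from collections import defaultdict, deque

def dbscan(points, dist, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 marks noise).
    `dist` can be any distance, e.g. a string distance between test inputs."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point later
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point: border
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(jn) >= min_pts:  # j is itself a core point: keep expanding
                queue.extend(k for k in jn if labels[k] is None or labels[k] == -1)
    return labels

def prioritize(labels):
    """Interleave clusters round-robin so the earliest tests are maximally diverse."""
    groups = defaultdict(deque)
    for idx, lab in enumerate(labels):
        groups[lab].append(idx)
    order = []
    while any(groups.values()):
        for lab in sorted(groups):
            if groups[lab]:
                order.append(groups[lab].popleft())
    return order
```

With numeric stand-ins for test inputs, three well-separated groups yield two clusters plus one noise point, and the prioritized order visits all three groups within the first three positions.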

Research Article

An Expository Examination of Temporally Evolving Graph-Based Approaches for the Visual Investigation of Autonomous Driving

With the continuous advancement of autonomous driving technology, visual analysis techniques have emerged as a prominent research topic. The data generated by autonomous driving are large-scale and time-varying, yet existing visual analytics methods are insufficient for dealing with such complex data effectively. Time-varying graphs can model and visualize the dynamic relationships in various complex systems and can visually describe the data trends in autonomous driving systems. To this end, this paper introduces a time-varying graph-based method for visual analysis in autonomous driving. The proposed method employs a graph structure to represent the relative positional relationships between the target and interfering obstacles. By incorporating the time dimension, a time-varying graph model is constructed. The method explores how the characteristics of nodes in the graph change across time instances, establishing feature expressions that differentiate target and obstacle motion patterns. The analysis demonstrates that eigenvector centrality in the time-varying graph effectively captures the distinctions in motion patterns between targets and obstacles. These features can be utilized for accurate target and obstacle recognition, achieving high recognition accuracy. To evaluate the proposed method, a comparative study is conducted against traditional visual analytic methods such as frame differencing and advanced methods such as visual lidar odometry and mapping. Robustness, accuracy, and resource-consumption experiments on the publicly available KITTI dataset are used to analyze and compare the three methods. The experimental results show that the proposed time-varying graph-based method exhibits superior accuracy and robustness.
This study offers valuable insights and solution ideas for the deep integration of intelligent networked vehicles with intelligent transportation, and it provides a reference for advancing intelligent transportation systems and their integration with autonomous driving technologies.
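The per-snapshot centrality computation at the core of the method can be sketched compactly. Assuming the centrality in question is eigenvector centrality, the sketch below uses power iteration on an adjacency matrix; the +I shift (which guarantees convergence on bipartite graphs) and the iteration count are implementation assumptions, and a time-varying graph is simply a list of such snapshots.

```python
import math

def eigenvector_centrality(adj, iters=100):
    """Eigenvector centrality of one graph snapshot, by power iteration
    on the adjacency matrix, L2-normalised at every step."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        # Shift by the identity (w = (A + I) v) so the dominant eigenvalue
        # is unique even on bipartite graphs; eigenvectors are unchanged.
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v
```

Tracking `[eigenvector_centrality(a) for a in snapshots]` over time gives each node a centrality trajectory, which is the kind of per-node temporal feature the abstract uses to separate target from obstacle motion patterns.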

Research Article

Cross-Project Defect Prediction Using Transfer Learning with Long Short-Term Memory Networks

With the increasing number of software projects, within-project defect prediction (WPDP) can no longer meet demand, and cross-project defect prediction (CPDP) is playing an increasingly significant role in software engineering. Classic CPDP methods mainly concentrate on applying metric features to predict defects. However, these approaches fail to consider rich semantic information, which usually captures the relationship between software defects and their context; because traditional methods cannot exploit this characteristic, their performance is often unsatisfactory. In this paper, a transfer long short-term memory (TLSTM) network model is first proposed. Transferable semantic features are extracted by adding a transfer learning algorithm to the long short-term memory (LSTM) network, and the traditional metric features and semantic features are then combined for CPDP. First, abstract syntax trees (ASTs) are generated from the source code. Second, the AST node contents are converted into integer vectors as inputs to the TLSTM model, which then extracts the semantic features of the program. In parallel, transferable metric features are extracted by transfer component analysis (TCA). Finally, the semantic and metric features are combined and fed into a logistic regression (LR) classifier for training. The proposed TLSTM model performs better on the F-measure indicator than other machine learning and deep learning models on several open-source projects from the PROMISE repository. The TLSTM model built with a single feature type achieves 0.7% and 2.1% improvements on Log4j-1.2 and Xalan-2.7, respectively. When combined features are used to train the prediction model, we call the resulting model a transfer long short-term memory for defect prediction (DPTLSTM); it achieves 2.9% and 5% improvements on Synapse-1.2 and Xerces-1.4.4, respectively.
Both results demonstrate the superiority of the proposed model on the CPDP task, because the LSTM captures long-term dependencies in sequence data and extracts features that encode source code structure and context. It can be concluded that (1) the TLSTM model is good at preserving information and can better retain the semantic features related to software defects, and (2) compared with a CPDP model trained only on traditional metric features, performance can be effectively enhanced by combining semantic and metric features.
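The "AST node contents to integer vectors" encoding step can be illustrated with Python's standard ast module. This is an analogy rather than the authors' pipeline (the PROMISE subjects are Java projects), and the vocabulary scheme and reserved padding id are assumptions.

```python
import ast

def ast_token_sequence(source, vocab):
    """Encode a program as the integer ids of its AST node types.
    Unseen node types are assigned the next free id in `vocab`;
    id 0 is reserved for sequence padding."""
    seq = []
    for node in ast.walk(ast.parse(source)):
        name = type(node).__name__
        if name not in vocab:
            vocab[name] = len(vocab) + 1
        seq.append(vocab[name])
    return seq
```

Because `vocab` is shared and mutated across calls, every program in a corpus is encoded against one consistent vocabulary, which is what lets a sequence model consume programs from different projects.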

Research Article

Design and Efficacy of a Data Lake Architecture for Multimodal Emotion Feature Extraction in Social Media

In the rapidly evolving landscape of social media, the demand for precise sentiment analysis (SA) on multimodal data has become increasingly pivotal. This paper introduces a sophisticated data lake architecture tailored for efficient multimodal emotion feature extraction, addressing the challenges posed by diverse data types. The proposed framework encompasses a robust storage solution and an innovative SA model, multilevel spatial attention fusion (MLSAF), adept at handling text and visual data concurrently. The data lake architecture comprises five layers, facilitating real-time and offline data collection, storage, processing, standardized interface services, and data mining analysis. The MLSAF model, integrated into the data lake architecture, utilizes a novel approach to SA. It employs a text-guided spatial attention mechanism, fusing textual and visual features to discern subtle emotional interplays. The model’s end-to-end learning approach and attention modules contribute to its efficacy in capturing nuanced sentiment expressions. Empirical evaluations on established multimodal sentiment datasets, MVSA-Single and MVSA-Multi, validate the proposed methodology’s effectiveness. Comparative analyses with state-of-the-art models showcase the superior performance of our approach, with an accuracy improvement of 6% on MVSA-Single and 1.6% on MVSA-Multi. This research significantly contributes to optimizing SA in social media data by offering a versatile and potent framework for data management and analysis. The integration of MLSAF with a scalable data lake architecture presents a strategic innovation poised to navigate the evolving complexities of social media data analytics.
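The text-guided spatial attention step can be illustrated with a minimal dot-product attention sketch. The feature vectors here are hypothetical stand-ins, and MLSAF learns its attention weights end to end rather than using fixed features like these; the sketch only shows the score/softmax/weighted-sum mechanics.

```python
import math

def text_guided_attention(text_vec, visual_regions):
    """The text feature scores each spatial region by dot product,
    softmax turns scores into weights, and the fused visual feature
    is the weight-averaged sum of the region features."""
    scores = [sum(t * r for t, r in zip(text_vec, region)) for region in visual_regions]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(visual_regions[0])
    fused = [sum(w * region[d] for w, region in zip(weights, visual_regions))
             for d in range(dim)]
    return weights, fused
```

A region that aligns with the text query receives a higher weight, so the fused feature is pulled toward the text-relevant parts of the image.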

Research Article

Unveiling the Dynamics of Extrinsic Motivations in Shaping Future Experts’ Contributions to Developer Q&A Communities

Developer question-and-answer (Q&A) communities rely on experts to provide helpful answers, yet these communities face a shortage of experts. To cultivate more experts, a community needs to quantify and analyze how extrinsic motivations influence the ongoing contributions of developers who may become experts in the future (potential experts). Currently, there is a lack of potential-expert-centred research on community incentives. To address this gap, we propose a motivational impact model with hypotheses grounded in self-determination theory to explore the impact of five extrinsic motivations (badges, status, learning, reputation, and reciprocity) on potential experts. We develop a status-based timeline partitioning method to extract information on the sustained contributions of potential experts from Stack Overflow data, and we propose a multifactor assessment model to test the motivational impact model and determine the relationship between potential experts' extrinsic motivations and their sustained contributions. Our results show that (i) badges and reciprocity promote the continuous contributions of potential experts, while reputation and status reduce their contributions; (ii) status significantly moderates the impact of reciprocity on potential experts' contributions; (iii) the influence of extrinsic motivations on potential experts differs from that on active developers in the effects of reputation, learning, and status and in their moderating roles. Based on these findings, we recommend that community managers identify potential experts early and optimize reputation and status incentives to incubate more experts.
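The status-based timeline partitioning step can be sketched as splitting a contributor's chronological activity into phases at status transitions. This is a hypothetical sketch: the event representation, the reputation thresholds, and the cut rule below are all assumptions, since the abstract does not specify them.

```python
def partition_by_status(events, thresholds):
    """Hypothetical status-based timeline partitioning: split a
    chronological list of (timestamp, reputation) events into phases,
    starting a new phase each time reputation first crosses the next
    status threshold (thresholds are assumed reputation cutoffs)."""
    phases, current = [], []
    t_iter = iter(sorted(thresholds))
    next_t = next(t_iter, None)
    for ts, rep in events:
        if next_t is not None and rep >= next_t:
            if current:
                phases.append(current)  # close the phase before the crossing
            current = []
            next_t = next(t_iter, None)
        current.append((ts, rep))
    if current:
        phases.append(current)
    return phases
```

Per-phase contribution counts computed over such partitions are the kind of "sustained contribution" measurements a multifactor model could then relate to motivation variables.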


We have begun to integrate the 200+ Hindawi journals into Wiley’s journal portfolio. You can find out more about how this benefits our journal communities on our FAQ.