Results of the case studies show that the selection of construction methods is largely based on the previous experience of professionals. It is a process characterized by complex analysis, a high dependence on individual experience and teamwork, and the need for expert knowledge. Companies' senior management recognizes the need for a structured system that allows better management of their knowledge by storing it correctly and making it easier to use. In addition, knowledge acquisition is not part of a formal process, so people have no obligations or incentives to participate in this activity. This situation was highlighted as one of the main barriers to organizational learning about construction methods.
An important part of this research focused on the identification of knowledge gaps in the process of selecting construction methods. The case studies reveal that the main gaps exist in the activities “search for construction methods” and “application of the decision criteria.” Regarding the first, interviewees indicated that people have extremely limited time for this activity. Furthermore, individuals' knowledge is a fundamental input to it, given that there is no database of stored lessons learned, nor are there procedures for their effective management. The application of the decision/selection criteria, a critical activity for adequate project performance, currently depends heavily on the decision maker's intuition, so decisions are not comparable across projects. Thus, it becomes necessary to reduce the subjectivity and variability of the decision-making process by making the most influential decision criteria for selecting construction methods explicit. Results from the interviews allowed identifying the key criteria to use in the selection of construction methods: project duration, cost, product characteristics, construction method characteristics, and environmental characteristics. The criterion “product characteristics” has two associated subcriteria: build volume and quality requirements. The criterion “characteristics of the construction method” has five associated subcriteria: familiarity with the construction method, health and safety, level of automation of the method, level of interference with other operations, and availability of the method. Finally, the criterion “environmental characteristics” has four associated subcriteria: location and access, climate, obstacles/topography, and available space. These criteria were validated with experts from the studied companies and used in the development of the knowledge system for the selection of construction methods.
Based on the results of the case studies, the proposed approach for the knowledge system incorporates both knowledge management techniques and technologies. Knowledge management techniques are applicable because collaboration and teamwork are already valued in construction companies. The consistent application of these techniques should encourage the creation and transmission of the knowledge associated with the selection of construction methods. For this process, different techniques might be used.
Regarding knowledge management technologies, the information and knowledge gained were stored in organizational databases associated with a knowledge portal called the Construction Methods Knowledge System (SCMC in its Spanish acronym). This web-based knowledge portal provides easy access from any location and can store all information associated with construction methods in databases. Furthermore, a decision-making support system for the selection of construction methods was accessible from this portal.
The information was stored in the form of construction methods sheets. Each sheet (Table 2) contains the knowledge linked to the selection of construction methods as identified in the case studies. Thus, each sheet focuses not only on the technical aspects of each method, but also on two issues that are important in the development of the process: (1) the selection of subcontractors and (2) the search for experts, whether internal or external. All this facilitates the study of each construction method, as the information is stored in one place, the organizational database, saving time and effort in the search.
Construction methods sheet.

Construction methods sheet | |
---|---|
Code: | Related to other sheets: |
Space | Competencies |
Inspections/permits | Weather |
Topographic | Machinery |
Materials | Security |
Different aspects were considered in the design of the final construction method sheet. First, the sheet should allow the same construction method to be applied to different projects. Second, information should be stored in the construction methods sheet using a unified format. Third, the sheet should be simple and easy to fill in. Fourth, it should give an overview of the construction method and include a list of experts who may be contacted if more detail is required. Fifth, it should indicate whether lessons learned about the method exist and, if so, show them and allow their download. Sixth, it should indicate the degree of automation [ 10 ], the risk level, and the degree of interference with other operations, features measured on a 1–5 scale, where 1 indicates the lowest value for the analyzed item and 5 the highest.
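As an illustration only, a sheet with the 1–5 scaled attributes described above might be represented as a small data structure; all field names here are hypothetical and do not reflect the SCMC's actual schema.

```python
from dataclasses import dataclass, field

# Attributes scored on the 1-5 scale described in the text.
SCALE_FIELDS = ("automation_level", "risk_level", "interference_level")

@dataclass
class MethodSheet:
    """Hypothetical sketch of a construction methods sheet record."""
    code: str
    name: str
    discipline: str
    automation_level: int   # 1 = lowest, 5 = highest
    risk_level: int
    interference_level: int
    experts: list = field(default_factory=list)          # contacts for more detail
    lessons_learned: list = field(default_factory=list)  # downloadable lessons

    def __post_init__(self):
        # Enforce the 1-5 scale for automation, risk, and interference.
        for name in SCALE_FIELDS:
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be between 1 and 5, got {value}")
```

A unified structure like this is what allows the same method to be compared across projects.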
For the decision-making support system, the knowledge related to decision criteria was acquired in meetings with experts on construction methods selection held during the case studies, as previously indicated.
3.1.1. System requirements.
For the design and development of a system, it is necessary to know its requirements, which can be of two types: functional and nonfunctional. Functional requirements are the inputs, outputs, processes, and stored data needed to satisfy the system improvement objective, while nonfunctional requirements describe other features, characteristics, and constraints that define a satisfactory system [ 49 ]. The main functional requirements of the system concern its ability to store and display construction methods sheets, find these sheets within the database, and edit and delete them as necessary. They also highlight the need for the system to upload and download files and to be accessible via the Internet. The main nonfunctional requirements include support for different types of users, the possibility to upload files in Word or PDF format, the option to export construction methods sheets to MS Excel, and the need to display the system properly in common browsers.
Based on the requirements defined for the system, the computer applications that compose the SCMC were selected. This study began with a search for commercially available programs for each of the two principal components of the SCMC: (1) the knowledge portal and (2) the decision-making support system. This task was carried out to determine whether appropriate software was available in the market, in order to reduce the programming work, or whether all programming had to start from scratch.
Regarding the knowledge portal, a wide variety of software was available in the market, including Alfresco, TikiWiki, and MS Office SharePoint, to name just a few. The evaluation of these software packages considered various factors, such as the number and type of applications each offers, whether their source code can be modified, and their cost. In the end, the best option was to design and build the system from scratch so that the needs of construction companies would be met. For the decision support system, developing custom software was the least suitable alternative, because commercial software such as Expert Choice and Make it Rational offered exactly what was needed for this part of the prototype. The online system Make it Rational was selected for this purpose because it is easy to use and allows access through the web. This software uses the Analytic Hierarchy Process (AHP), one of the most widely applied multiattribute decision-making methods [ 50 ]. The basic idea of this method is to convert subjective assessments of relative importance into a set of overall scores or weights [ 50 ]. AHP uses quantitative comparisons to select the preferred alternative by comparing alternatives in pairs, based on their relative performance with respect to a criterion.
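Make it Rational's internals are not described here; the following sketch shows how AHP priority weights are commonly derived from a pairwise comparison matrix using the row geometric mean, with purely illustrative numbers on Saaty's 1–9 scale.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the row geometric mean, normalized to sum to 1."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Illustrative comparison of three criteria: the first is judged 3x as
# important as the second and 5x as important as the third, and so on.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)
```

The resulting weight vector ranks the criteria consistently with the pairwise judgments; a full AHP implementation would also check the consistency ratio of the matrix.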
The first step in modeling the system was to develop use cases. Figure 1 shows an example of the use-case diagram developed specifically for SCMC user management. The system involves three types of users: User Manager, Sheets Manager, and Consultant. Users are linked to graphical representations of use cases, indicating their different roles within the system. To develop the system, thirteen use cases were built. Each one was developed in greater detail in order to define the requirements of the prototype clearly.
User management in SCMC use-case diagram.
The database system runs on Microsoft SQL Server, a database management system based on the relational model. When defining the system architecture, the Model-View-Controller (MVC) software architecture pattern was used, as shown in Figure 2 . This pattern separates an application's data, user interface, and business logic into three distinct components [ 51 ]. For the prototype system, the domain model recognizes two main entities that shape the SCMC, sheets and users, and distinguishes two types of interaction on these entities: user management and sheets management. Here, a sheet is the description of a construction method used in a given context, while users represent the individuals who access the system and can manage records according to their role. The views, meanwhile, are in charge of showing the user the information contained in the model, presenting it in a form suitable for interaction [ 51 ]; usually this is the user interface. The controller directs the control flow of the application in response to external messages, such as data entered by the user or menu items the user selects [ 52 ]. From these messages, the controller modifies the model or opens and closes views [ 52 ].
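The MVC separation described above can be illustrated with a minimal sketch; all class and method names are hypothetical and not taken from the SCMC code base.

```python
class SheetModel:
    """Model: holds sheet data and knows nothing about presentation."""
    def __init__(self):
        self._sheets = {}

    def add(self, code, name):
        self._sheets[code] = {"code": code, "name": name}

    def get(self, code):
        return self._sheets.get(code)


class SheetView:
    """View: renders model data for the user."""
    def render(self, sheet):
        if sheet is None:
            return "Sheet not found"
        return f"[{sheet['code']}] {sheet['name']}"


class SheetController:
    """Controller: translates user actions into model updates and views."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def create_sheet(self, code, name):
        self.model.add(code, name)
        return self.view.render(self.model.get(code))
```

The point of the separation is that the model can be tested or replaced (for example, backed by SQL Server) without touching the view or controller.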
System architecture.
The design of the system's graphic interfaces was based on a free HTML template selected for this purpose (colors, content organization, fonts, etc.) and placed inside the MVC application.
Access to the SCMC is through the Internet, with a login and password. After authentication, the user accesses the system according to his or her role: User Manager, Sheets Manager, or Consultant. To illustrate how the system works, consider the case of a Project Manager with the Sheets Manager role. If this user wants to enter information about a new construction method, he or she will see a view as presented in Figure 3 . There, he or she must enter at least the mandatory data to create a new sheet: method's name, discipline, operation type, risk level, yield, cost, core activities, and whether the method has been used previously in the company. Once the Sheets Manager saves the new sheet, the system shows it with options to edit, delete, export, or view previous versions (see Figure 4 ).
Sheet creation for a new method.
Completed method sheet.
The search for construction methods sheets can be performed in three ways ( Figure 5 ): (1) by using a quick search feature, which allows searching by keywords; (2) by looking into a catalog of methods, which allows searching by the initial letter of the method's name; or (3) through an advanced search, which allows searching using filters such as the method name, the discipline to which it belongs, and the operation type. In this case, the system searches all the sheets in the database using the fields defined by the user and presents the results that match the search parameters.
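The three search modes might be sketched as simple filters over stored sheets; the field names and data below are illustrative, not the system's actual implementation.

```python
def quick_search(sheets, keyword):
    """Quick search: match a keyword anywhere in a sheet's text fields."""
    kw = keyword.lower()
    return [s for s in sheets if any(kw in str(v).lower() for v in s.values())]

def catalog_search(sheets, initial):
    """Catalog search: match sheets whose name starts with a given letter."""
    return [s for s in sheets if s["name"].lower().startswith(initial.lower())]

def advanced_search(sheets, **filters):
    """Advanced search: match all user-defined field filters at once."""
    return [s for s in sheets
            if all(s.get(k) == v for k, v in filters.items())]

# Illustrative stored sheets.
sheets = [
    {"name": "Slip forming", "discipline": "Concrete", "operation": "Walls"},
    {"name": "Shotcrete", "discipline": "Concrete", "operation": "Tunnels"},
]
```

Each mode narrows the same underlying collection; only the matching rule differs.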
Catalog of construction methods and sheets search.
Results appear as in Figure 5 . Once the user receives the results provided by the system, he or she can access the full version of the sheets to be reviewed in more detail. After this review, he or she can define the feasible alternatives for performing the operation under consideration. With this information, the user can request quotes, conduct a cost-benefit analysis, and select the two or three most feasible options for the project. To carry out the final selection of the construction method, it is necessary to evaluate these alternatives in terms of different decision criteria. To make this part of the process more objective, the SCMC includes among its core components a system to support decision making. The link for accessing this system is on the right side of the screen ( Figure 4 ).
When accessing the application, a file named “Selection of construction methods” opens, which contains the hierarchical structure of decision criteria obtained from the case studies. When the file opens, the user sees a set of windows: (a) ALTERNATIVES, (b) CRITERIA, (c) EVALUATION, (d) RESULTS, and (e) REPORT. The first window allows defining the alternatives for the decision process. With this information, the user enters the CRITERIA window, which contains the previously defined decision criteria and a description of each one. The third window captures the user's preferences as described in the AHP methodology. For this, three kinds of comparisons are necessary. First, for each subcriterion, or each criterion without subdivision, the alternatives are compared in pairs ( Figure 6 ) and the user's preferences are requested. The user enters each preference by marking the triangle containing the number that best represents it, ranging from 1 to 9. Second, for each criterion with subcriteria, the user must assess the importance of each subcriterion with respect to its parent criterion, also in pairs. Finally, the main criteria are compared with each other with respect to the ultimate goal, the selection of a construction method.
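Once local priorities are obtained from these comparisons, AHP synthesizes a global score for each alternative by multiplying weights down the hierarchy and summing. A toy sketch with made-up weights (the criteria and subcriteria names loosely follow those identified in the case studies):

```python
# Hypothetical two-level hierarchy: criterion weight x subcriterion weight
# x the alternative's local priority, summed over the whole hierarchy.
criteria = {
    "cost": {"weight": 0.5, "sub": {"cost": 1.0}},  # no subdivision
    "product": {"weight": 0.3,
                "sub": {"build_volume": 0.6, "quality": 0.4}},
    "environment": {"weight": 0.2,
                    "sub": {"access": 0.7, "climate": 0.3}},
}
# Local priorities of two alternatives under each (criterion, subcriterion).
local = {
    ("cost", "cost"): {"A": 0.7, "B": 0.3},
    ("product", "build_volume"): {"A": 0.4, "B": 0.6},
    ("product", "quality"): {"A": 0.5, "B": 0.5},
    ("environment", "access"): {"A": 0.2, "B": 0.8},
    ("environment", "climate"): {"A": 0.6, "B": 0.4},
}

def global_scores(criteria, local):
    """Synthesize global AHP scores from the hierarchy of weights."""
    scores = {}
    for crit, info in criteria.items():
        for sub, sub_w in info["sub"].items():
            for alt, priority in local[(crit, sub)].items():
                scores[alt] = scores.get(alt, 0.0) + info["weight"] * sub_w * priority
    return scores
```

Because all weight vectors are normalized, the global scores of the alternatives also sum to one, which is what the ranking bar graph displays.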
Input of user's preferences using Make it Rational.
Once all comparisons are made, it is possible to access the RESULTS window. The system indicates which alternative is best in terms of the user's preferences. For example, Figure 7 shows a bar graph with the ranking of alternatives; this graph shows the utility of each alternative for the decision maker. Finally, if the user wishes, a report with the results can be generated automatically in the REPORT window and then exported to RTF, PDF, Excel, HTML, or XPS format.
View of results in Make it Rational.
The decision-making support system allows the decision maker to objectively select the optimal construction method for the operation under study, having analyzed all the criteria that could affect this decision, which decreases subjectivity and organizes perceptions and judgments. This analysis can also increase the likelihood of successfully implementing the construction method in the field and forces a detailed analysis of all factors that affect the decision, which directly impacts project performance.
During the development of the system, the SCMC was presented three times to two experts on construction methods selection, each from a different construction company. The comments received were used to modify some aesthetic aspects of the system and improve its interaction with the user. A final validation of the construction methods selection system was carried out with a wider group of experts. The goal of this activity was to verify the usefulness of the system and its practical applicability, even if it has not been used yet in the field.
The validation process included interviews with eleven construction professionals from six different companies, all with experience in construction method selection. Two of these professionals had participated in progress meetings of the SCMC; five had participated in the case studies without involvement in the development of the SCMC; and four joined at this final stage. These professionals work in the following roles: Technical Manager, Head of the Estimation Department, Project Manager, and Head of Management and Innovation. All of them received a complete presentation of the SCMC, after which each professional was interviewed briefly to learn his or her opinions about the prototype.
Interviewees considered the system a useful tool for the selection of construction methods because it helps to make more informed decisions and provides all the information needed in just one place. Furthermore, with the same level of importance, people said that the system is a valuable means to increase organizational knowledge, reducing dependence on individual knowledge. The system was also considered a suitable tool for sharing that knowledge within the organization. These results are presented in Table 3 .
Utility of the SCMC.
Utility | Frequency |
---|---|
Make decisions with more knowledge | 7 |
Have all the information in just one place | 6 |
Increase organizational knowledge | 2 |
Share knowledge | 2 |
Closely connected to the benefits of adopting the system in construction companies, respondents highlighted the time savings in the search for alternative construction methods and the possibility to store, organize, and classify information on these issues. Interviewees also indicated that the system could enhance the competitiveness of the company and that it is a reliable guide for the decision-making process, decreasing the likelihood of making a wrong decision. Likewise, they indicated that this system would help them develop a knowledge-oriented culture in the organization. These results are presented in Table 4 .
Major benefits of adoption of SCMC in a construction company.
Benefits | Frequency |
---|---|
Time savings in the search for alternative construction methods | 6 |
Possibility to store, organize, and classify companies' information | 5 |
Increase competitiveness of the company | 4 |
Decrease the likelihood of making a wrong decision | 2 |
Development of a knowledge oriented culture in the organization | 1 |
Ten of the eleven respondents would use the prototype in their daily work and considered it friendly and easy to use. For them, its main practical value was the increase in productivity from saving time in the search for alternative construction methods and the easy access to information. Furthermore, interviewees indicated that this system would help their companies guide the decision process for construction method selection, introduce innovations within the organization, and reduce project costs and time. Only one interviewee indicated that he would not use the system in his daily work. He explained that the projects carried out by his company (mainly high-rise residential buildings) are quite similar to one another, so there would be no need to select different methods, and the alternative methods for constructing these projects would be very limited. In fact, the interviewee noted that a system with the same characteristics, but much more focused on technical information sheets, would be much more attractive for his company. These results are presented in Table 5 .
Practical values of SCMC.
Practical values | Frequency |
---|---|
Increase productivity | 5 |
Easy access to information | 5 |
Guide decision process | 4 |
Introduce innovations within the organization | 2 |
Reduce cost | 1 |
During the interviews, some stimulating comments emerged in relation to the future implementation of the system. First, there is a concern about how to integrate such a system into the organization. It is believed that young professionals would be more willing to use the system because they are more used to working with computers and software, unlike older professionals. Under this logic, it is reasonable to expect more resistance to the implementation of the system among more experienced professionals. Furthermore, regarding the difficulties of integrating the system into large companies, it becomes clear that it needs to be part of the policies and long-term objectives of the company in order to promote its development.
In many cases, there were concerns about the way in which the system would be incorporated into projects and how to ensure that the necessary information is entered. The option of integrating the knowledge management system with the company's quality system was considered appropriate and useful, given the potential synergy between the two. Other comments mentioned two additional key aspects: organizational culture and workers' competencies. To integrate a true knowledge management system into an organization, it is vital to develop a culture of knowledge in the enterprise: to recognize the value of sharing experiences, to document them, and to make use of the organization's knowledge to facilitate everyday tasks. Moreover, even when people intend to participate actively in a knowledge management system, they may not have all the necessary competencies, especially regarding information technologies. In the same way, if the workers who will execute a method selected in the SCMC do not have the technical skills to carry it out, no matter how meritorious the decision making was, the result will not be as expected. These aspects should be analyzed in more detail by each organization to determine how to close the gaps that exist today.
The research found that the empirical experience of construction field practitioners is the best source of knowledge for the selection of construction methods, and this situation is very likely repeated in other similar processes. Therefore, people should be careful not to adopt knowledge systems that merely use information technology for managing knowledge, since these only encode explicit knowledge, ignoring the experience-rich tacit knowledge that is difficult to transfer through information technologies. To avoid this, it is necessary to include appropriate knowledge management techniques. Moreover, information technology introduced in a construction company should be friendly, intuitive, and simple to use, since otherwise it will not be used.
Also, the most appropriate occasions to acquire knowledge seem to be working meetings, and the mechanisms used to acquire knowledge could be construction methods' sheets and documented lessons learned, as they capture the knowledge along with part of the context in which it was generated. Construction methods' sheets are a way to standardize the knowledge on construction methods, thereby facilitating the decision-making process. This knowledge is stored in the system database, transforming the individual experience of professionals into organizational knowledge. Records stored in the system will improve searches for construction methods, saving time and effort.
Since the definition of a construction method is a complex process, the best way to organize the knowledge associated with this process is to develop a hierarchy of decision criteria, which later serves as the basis for applying a multicriteria decision-making methodology within the knowledge management system. Thus, every option is evaluated against preestablished criteria, and the decision maker incorporates and evaluates the main requirements of the project through his or her analysis of preferences.
Opinions given by respondents during the validation of the SCMC prototype indicate that the prototype could respond appropriately to the needs of construction companies regarding the information and knowledge stored, the contribution to the decision-making process, and its simplicity of use. These features make the system valuable and applicable in the day-to-day activities of a construction company.
The system could become a tool for supporting the selection of construction methods and improving the quality of these decisions. Besides this, the application of the system will reduce the impact generated by the departure of key employees from the company, because their knowledge will be stored and available. In addition, the prototype showed that the proposed knowledge management system offers a concrete way to capture and use knowledge to improve the selection of construction methods.
Scars that form after skin injury can cause structural and functional skin damage. Currently, scar tissue determination relies mainly on doctors’ subjective observations and judgments and lacks objectivity. However, current deep learning models can only achieve specific discrimination using unimodal data, which limits the comprehensive understanding of scar tissue and may reduce accuracy and stability. To solve these problems, in this study, a skin scar recognition platform based on advanced deep learning and a weighted aggregation network fusion method is proposed. It is implemented using a residual network-based CNN model and a logistic regression model with L1 regularization and is suitable for both unimodal and multimodal data. The experimental results showed that the proposed platform achieved a satisfactory accuracy of 98.26% for image discrimination. In the gene discrimination model test performed on a test dataset containing 17 gene expression samples, all samples were accurately discriminated. In addition, the proposed multimodal discrimination model achieved a discrimination accuracy of 98.23%. These results validate the effectiveness of deep feature extraction and multimodal feature fusion techniques for image discrimination tasks. On this basis, to deeply explore the pathogenesis of scar formation, a method with the ability to integrate regularization, sparsity, and orthogonality constraints, multiconstraint joint non-negative matrix factorization (MCJNMF), was used to explore the genetic correlation between collagen micrographic image features and gene expression data. In this study, we confirmed the association between the calcium signaling pathway, MAPK signaling pathway, and collagen fiber repair, and successfully identified 11 potential therapeutic targets, including TRIM59 and TBC1D9, which provide important clues for future scar treatment and prevention strategies.
Skin scarring is a physiological response to skin injury accompanied by a three-stage healing process: inflammation, new tissue formation, and extracellular matrix reconstruction [ 1 , 2 ]. Collagen fibers play a key role in the restoration of the skin structure. The assessment of collagen fiber morphology and structure is crucial for differentiating scars from normal tissue, and this assessment often relies on microscopic images of collagen fibers stained with Sirius Red. DNA microarray technology has revealed the expression of thousands of genes in scar tissues [ 3 , 4 ]. Gene expression analysis is important for identifying scar tissue. By comparing the gene expression patterns of scar tissues with those of normal tissues, reliable differentiation markers can be identified, which can help establish an accurate classification model and is expected to provide theoretical support for preventing or alleviating scar formation. Furthermore, the use of association analysis algorithms to explore the association between collagen fiber micrographs and gene expression is critical when studying the mechanisms of scar formation. This imaging genetic approach can provide insights into the process of scar formation, reveal its underlying mechanisms, and help identify potential therapeutic targets.
For research in the area of disease classification using collagen fiber micrographs, to quantify the anisotropy of collagen fibers in scar tissue, Fomovsky et al. developed the Matfiber algorithm [ 5 ], which measures the orientation of collagen fiber structures in a finite subregion of an image using an intensity gradient detection algorithm. This method can extract specific physical features of collagen fibers and can serve as a basis for scar tissue determination. However, feature extraction and discrimination based on such machine-learning methods have significant limitations, and very few features are extracted. Collagen fibers have a rich hierarchy of textural features that require deeper feature extraction. Pham et al. first proposed the use of deep learning techniques to quantify and characterize collagen fiber features [ 6 ]. Their study introduced a Universal CNN (UCNN) based on a VGG-16 implementation, which can be used for burn scar tissue image classification and detailed characterization of collagen fiber tissues, with an accuracy of 97% in scar discrimination. However, VGG-16 lacks a deep network structure, does not extract collagen fiber texture features well, and may have limitations in terms of parameter efficiency and the handling of large amounts of data. Razia et al. proposed a lightweight deep convolutional neural network model [ 7 ], S-MobileNet, fine-tuned using ReLU and Mish activation functions, with a discrimination accuracy of 98%. Hekler et al. used a deep learning approach to train a single CNN and combined two independently determined diagnoses into a new classifier based on gradient enhancement techniques [ 8 ], which ultimately classified five classes of skin lesions. The algorithm uses an end-to-end learning approach and can learn features directly from raw data, which simplifies the process and improves efficiency, achieving a classification accuracy of 82%.
However, the model complexity may lead to overfitting problems, and the dependence of gradient enhancement techniques on data distribution and feature selection must be handled with care.
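The depth limitations noted above are what residual (shortcut) connections address: a residual block computes F(x) + x, so information and gradients can bypass the learned transformation. Below is a toy one-dimensional sketch of this idea only, not any model from the cited studies (real residual blocks use convolutions and learned tensors).

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weight, bias):
    """Stand-in for a convolutional layer: an elementwise affine map."""
    return [weight * x + bias for x in v]

def residual_block(x, w1=0.5, b1=0.0, w2=0.5, b2=0.0):
    """Core idea of a residual block: output = relu(F(x) + x).
    The identity shortcut lets the signal pass even if F learns little."""
    fx = linear(relu(linear(x, w1, b1)), w2, b2)
    return relu([f + xi for f, xi in zip(fx, x)])

x = [1.0, -2.0, 3.0]
out = residual_block(x)
```

With all weights set to zero, F(x) vanishes and the block reduces to relu(x), which is why very deep stacks of such blocks remain trainable.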
For research in the area of disease classification using gene expression data, Hilal et al. proposed a novel feature subset selection and optimal adaptive neuro-fuzzy inference system (FSS-OANFIS) [ 9 ], which uses an improved grey wolf optimizer-based feature selection (IGWO-FS) model to derive the optimal feature subset; the OANFIS model was then used for gene classification, with a discrimination accuracy of 89.47% on the colon cancer dataset. Because microarray data usually contain a large number of genes and a small number of samples, regularization is often used to select information-rich genes and improve discrimination accuracy. Lavanya et al. demonstrated that coefficient logistic regression with L1/2 regularization yields higher classification accuracy and is an effective technique for gene selection in practical classification problems [ 10 ]. Based on this, Alharthi et al. proposed an adaptive penalized logistic regression (APLR) method, a regularization technique implemented with the least absolute shrinkage and selection operator (LASSO), which achieved the highest discrimination accuracy of 93.53% on a prostate gene expression dataset. Elbashir et al. employed a lightweight CNN model to classify breast cancer by converting gene expression data into a 2D heat map matrix [ 11 ]. Their results showed that this method achieved a discrimination accuracy of 98.76% and an area under the curve (AUC) value of 0.99. Despite the significant advantages of this method in improving accuracy, its general applicability across different datasets is low, possibly due to the specificity of the dataset and the limitations of the heat map matrix transformation.
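The L1-penalized logistic regression used for gene selection in such studies can be sketched with proximal gradient descent (ISTA), where the soft-thresholding step drives uninformative coefficients exactly to zero. This is a didactic pure-Python version on made-up data, not any cited implementation.

```python
import math

def soft_threshold(x, t):
    """Proximal operator of the L1 penalty: shrinks x toward zero by t."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def fit_l1_logreg(X, y, lam=0.1, lr=0.5, iters=500):
    """L1-regularized logistic regression trained with proximal gradient
    descent (ISTA). The bias b is left unpenalized, as is conventional."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(iters):
        grad_w, grad_b = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            r = 1.0 / (1.0 + math.exp(-z)) - yi   # residual: p - y
            grad_b += r / n
            for j in range(d):
                grad_w[j] += r * xi[j] / n
        b -= lr * grad_b
        w = [soft_threshold(wj - lr * gj, lr * lam)
             for wj, gj in zip(w, grad_w)]
    return w, b

# Toy "expression" data: feature 0 predicts the label, feature 1 is noise.
X = [[0, 0], [0, 1], [0, 0], [0, 1], [1, 0], [1, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_l1_logreg(X, y)
```

On this data, the weight of the uninformative feature is driven exactly to zero, which is the gene-selection effect the L1 penalty provides.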
For research in the area of disease classification using multimodal data, considering the problem of insufficient feature representation of unimodal data, Ghoniem et al. established a hybrid evolutionary deep learning model using multimodal data, and the established multimodal fusion framework fused the genetic and histopathological image modalities. Based on the features of different modal data, they established a deep feature extraction network [ 12 ]. The constructed model achieved 98% accuracy in ovarian cancer staging prediction. Cai et al. proposed a staged multimodal multiscale attention model that extracts image and gene features by training feature extractors of different modalities and sends the multimodal features together to the feature fusion module for multimodal feature fusion to achieve classification judgment [ 13 ]. This idea of training different feature extraction networks can realize the effective extraction of multimodal data features and achieve a staging prediction accuracy of 88.51% on the TCGA lung cancer dataset.
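The fusion step common to such multimodal frameworks can be sketched as a weighted average of the two unimodal feature vectors followed by a sigmoid unit; all weights and feature values below are made up for illustration.

```python
import math

def weighted_fusion(img_feat, gene_feat, alpha=0.5):
    """Weighted average linear aggregation of two unimodal feature vectors.
    alpha weights the image modality; 1 - alpha weights the gene modality."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(img_feat, gene_feat)]

def sigmoid_classify(features, weights, bias=0.0):
    """Binary decision on the fused features with a single sigmoid unit."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative fused prediction (all numbers are hypothetical).
img_feat, gene_feat = [0.9, 0.1], [0.7, 0.3]
fused = weighted_fusion(img_feat, gene_feat, alpha=0.6)
score = sigmoid_classify(fused, weights=[2.0, -1.0])
```

In practice the fused vector would come from trained feature extractors and the sigmoid weights would themselves be learned; the sketch only shows the aggregation arithmetic.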
For research in the area of imaging genetics analysis, Wang et al. proposed a multi-constrained uncertainty-aware adaptive sparse multi-view canonical correlation analysis (MC-unAdaSMCCA) method to explore the associations between SNPs, gene expression data, and sMRI by applying orthogonal constraints to multimodal data via linear programming [ 14 ]. Deng et al. proposed a multi-constrained joint non-negative matrix factorization (MCJNMF) method for correlation analysis of genomic and image data [ 15 ]. This method projects the two data matrices onto a common feature space, enabling heterogeneous variables with large coefficients in the same projection direction to form a common module, and it effectively identified common disease-related modules. However, to the best of our knowledge, no researchers have utilized the MCJNMF algorithm for association and bioinformatic analyses of scarring. In this study, association analysis was expected to provide a deeper and more precise understanding of the mechanism of scar formation, offering important insights and new ideas for scar treatment and prevention.
Currently, no unified platform has been established for scar tissue discrimination, either for unimodal discrimination or multimodal fusion discrimination; a platform is therefore needed that can adapt to the needs of scar discrimination under different input conditions. In addition, most current studies on both unimodal and multimodal fusion discrimination are limited to the technical application level and fail to explore the mechanism of scar formation from the perspective of bioinformatics. To address these problems, this study designed a multi-functional scar tissue discrimination platform that can perform unimodal discrimination of histopathological images or gene expression data as well as fuse the two data modalities to achieve multimodal scar tissue discrimination. For unimodal discrimination, a CNN model based on residual networks is proposed to discriminate collagen fiber micrographs. The convolutional blocks based on the residual network structure have advantages in image feature extraction and discrimination: this structure captures the textural features of collagen fibers more finely and mitigates the vanishing gradient problem in deep learning. In addition, a logistic regression model with L1 regularization was designed to extract important gene features, which were then fed into a Sigmoid classifier for binary discrimination. For multimodal discrimination, the trained image and gene feature extraction networks are used for unimodal feature extraction, and the multimodal features are fused by weighted average linear aggregation and then fed into the Sigmoid classifier for final classification. In addition, a multimodal imaging genetics correlation analysis algorithm was applied to scar tissue images and gene expression data to gain insight into the causes of scar formation and identify potential targets for scar treatment. The contributions of this study are as follows:
Accurate discrimination of histopathological images and gene expression data of scar tissue using a residual-network-based CNN model and an L1-regularized logistic regression model.
Feature extraction networks were constructed for each data modality to achieve effective extraction of modality-specific features, and a feature fusion module was designed to fuse the multimodal features and improve the objectivity of scar tissue discrimination.
Using the MCJNMF algorithm to correlate collagen fiber features and gene expression, we mined potential pathological mechanisms of scar tissue formation and identified possible therapeutic targets for scarring.
2.1 Workflow of this study
The research content of this study was divided into three tasks, as shown in Fig. 1: Task 1 is the design and implementation of the unimodal discriminative model, Task 2 is the design and implementation of the multimodal discriminative model, and Task 3 is the investigation of the biological mechanism of scar tissue formation. These three tasks are described as follows.
In Task 1, for unimodal discrimination of collagen fiber micrographs, the images were input into the proposed CNN model for collagen fiber feature extraction. After the fully connected layer, the extracted features were flattened into a one-dimensional vector, which was then connected to a Sigmoid classifier to achieve unimodal discrimination of the collagen fiber micrographs. For gene expression modal discrimination, the L1-regularized logistic regression model was used to extract gene features, and a Sigmoid classifier was connected to the model to obtain the final discrimination results. In Task 2, in the feature extraction layer, the image and gene discrimination models trained in Task 1 were used as the feature extraction networks for the image and gene modalities. In the feature fusion layer, a linear weighting network was used to fuse the image and gene features extracted by the feature extraction networks. Finally, the fused features were input into the Sigmoid classifier to achieve multimodal discrimination. In Task 3, to explore the causes of scar tissue formation more deeply, we performed a bioinformatics analysis of scar tissue at the macroscopic and imaging genetics levels. For the macroscopic characterization of collagen fibers, scar tissue images and normal tissue images were input into the image discrimination model. The image of the 32nd channel of conv1 was extracted, and the density and anisotropy parameters of the collagen fibers were extracted using the MatFiber algorithm. The density and alignment of collagen fibers in scar tissue and normal tissue were characterized, and the differences between the two groups were analyzed. At the imaging genetics level, the extracted collagen fiber features and gene features were correlated using the MCJNMF algorithm to obtain the co-expression modules.
The genes in the co-expression modules were intersected with the differentially expressed genes between scar tissue and normal tissue to obtain the intersecting genes related to collagen fiber formation in scar tissue, and enrichment analysis of the intersecting genes was then performed to explore the pathogenesis related to collagen fiber formation in scar tissue. In addition, receiver operating characteristic (ROC) curves of the intersecting genes were plotted to identify abnormally expressed genes with a specific correspondence to scar formation and its biological mechanisms, thereby identifying potential targets for disease treatment.
Workflow of multi-functional discriminatory platform and bioinformatics analysis of scar tissue at macro- and micro-levels
Figure 1 (a)–(c) shows a block diagram of the CNN model used for the discrimination of collagen fiber micrographs. First, each image (training and test sets) was resized to the input size of the model (224 × 224 pixels) using the Resize method in torchvision.transforms, and the images were normalized using the Normalize method so that the distribution of the pixel values in each channel was close to zero mean and unit variance. The proposed CNN model (Fig. 1 (a)) uses the structure and weights of Stage1-Stage2 of ResNet-50 pre-trained on ImageNet and freezes these parameters. After the pre-trained block, four cascaded trainable convolutional layers (out_channels = 256, kernel_size = 3, stride = 1, and padding = 1) were added, and their parameters were initialized using the Kaiming uniform initializer. The first of these trainable convolutional layers was used for channel shrinkage (reducing the number of channels from 512 to 256) to reduce model complexity. The three subsequent cascaded convolutional layers increase the nonlinear representation capability of the network, enlarge the receptive field, and extract high-level image features. Feature activation is then achieved using the ReLU activation function, followed by a global average pooling (GAP) layer for dimensionality reduction of the feature maps. This improves the computational efficiency and generalization ability of the model while enhancing its translation invariance for better adaptation to the image classification task. After the global average pooling layer, a flattening layer unrolls the obtained feature maps, and the resulting one-dimensional feature vectors are input into the fully connected layer. A Sigmoid classifier was used in the last layer to classify normal and scar tissues (Fig. 1 (b)), with output scores in the range [0, 1] (Fig. 1 (c)).
The pseudocode implemented in this model is provided in Online Resource 1. Note that we optimized the learning rate and training batch size of the model using a grid search algorithm and a cross-validation method to obtain the optimal hyperparameter configuration.
Figure 1 (d)–(f) shows a block diagram of the unimodal discriminative model for gene expression data. In this block diagram, we use the L1-regularized logistic regression model for gene modality discrimination (Fig. 1 (d)). First, the data were preprocessed; that is, the gene expression values were normalized to ensure that each gene feature contributed equally to the training process of the model. A logistic regression model was chosen to implement the binary classification task, and L1 regularization was applied during training on the training set. The strength of the L1 regularization was controlled by a specified parameter (λ), and we used the LogisticRegression method in the sklearn library to achieve this. L1 regularization is a penalty term attached to the loss function: it adds the sum of the absolute values of the parameters to the loss function to penalize model complexity, and minimizing the sum of the loss function and the regularization term reduces overfitting and induces sparsity in the model parameters. Therefore, the objective function can be expressed as follows:
\(J\left(w\right)=-\frac{1}{m}{\sum }_{i=1}^{m}\left[{y}^{\left(i\right)}\text{log}\left({h}_{w}\left({x}^{\left(i\right)}\right)\right)+\left(1-{y}^{\left(i\right)}\right)\text{log}\left(1-{h}_{w}\left({x}^{\left(i\right)}\right)\right)\right]+\lambda {\parallel w\parallel }_{1}\)
The loss function \(J\left(w\right)\) consists of a cross-entropy term \(-\frac{1}{m}{\sum }_{i=1}^{m}\left[{y}^{\left(i\right)}\text{log}\left({h}_{w}\left({x}^{\left(i\right)}\right)\right)+\left(1-{y}^{\left(i\right)}\right)\text{log}\left(1-{h}_{w}\left({x}^{\left(i\right)}\right)\right)\right]\) and an L1 regularization term \(\lambda {\parallel w \parallel}_{1}\), where \(w\) is the parameter vector of the model, \(m\) is the number of samples, \({y}^{\left(i\right)}\) is the true label of the \(i\)th sample, \({h}_{w}\left({x}^{\left(i\right)}\right)\) is the model's predicted value for the \(i\)th sample, \(\lambda\) is the regularization parameter, which controls the strength of the regularization, and \({\parallel w\parallel}_{1}\) denotes the L1 norm of the parameter vector \(w\), i.e., the sum of the absolute values of its parameters. The ultimate goal of model training is to minimize the sum of the loss function and the regularization term to obtain a model that performs well on the training data and has fewer nonzero parameters.
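The loss function \(J(w)\) described above can be checked numerically. The following sketch evaluates it on a tiny hypothetical dataset; the helper name `l1_logistic_loss` is ours, not from the paper.

```python
import numpy as np

def l1_logistic_loss(w, X, y, lam):
    """Cross-entropy loss plus L1 penalty: J(w) = CE + lam * ||w||_1."""
    h = 1.0 / (1.0 + np.exp(-X @ w))             # sigmoid predictions h_w(x)
    eps = 1e-12                                   # numerical guard for log(0)
    cross = -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
    return cross + lam * np.sum(np.abs(w))

# tiny hypothetical example: 4 samples, 3 gene features
X = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0],
              [2.0, 0.5, 1.0],
              [0.0, 2.0, 0.5]])
y = np.array([1, 0, 1, 0])
J = l1_logistic_loss(np.zeros(3), X, y, lam=0.5)
```

At \(w = 0\) every prediction is 0.5 and the penalty vanishes, so the loss reduces to \(\log 2 \approx 0.693\), which is a convenient sanity check.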
Finally, a Sigmoid classifier was connected after the L1-regularized logistic regression model to classify the gene expression data of normal and scar tissues (Fig. 1 (e)), with output scores in the range [0, 1] (Fig. 1 (f)). The pseudocode implemented in this model is provided in Online Resource 2. Note that we optimized the parameters of the LogisticRegression function, including the solver and regularization coefficient, using a grid search algorithm to obtain the best hyperparameter configuration.
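In practice, this corresponds to a short scikit-learn pipeline. The sketch below uses random stand-in data (the real gene matrix is 42 × 23,521); the grid values are illustrative, although `liblinear` with a ratio of 0.5 matches the configuration reported later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# hypothetical stand-in for the gene expression matrix: 42 samples x 50 genes
X = rng.normal(size=(42, 50))
y = rng.integers(0, 2, size=42)

# L1-penalized logistic regression; the liblinear solver supports penalty="l1"
grid = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
    param_grid={"C": [0.1, 0.5, 1.0]},   # inverse regularization strength
    cv=3, scoring="accuracy")
grid.fit(X, y)

best = grid.best_estimator_
n_selected = int(np.sum(best.coef_ != 0))  # genes kept by the sparse solution
```

The L1 penalty drives many coefficients exactly to zero, so `n_selected` directly reports how many gene features survive the implicit feature selection.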
Figure 1 (g)–(j) shows block diagrams of the multimodal discriminative models for collagen fiber micrographs and gene expression data. The image modal discriminant model and gene modal discriminant model trained in Task 1 were used as the image feature extractor and gene feature extractor, respectively. First, we loaded the image discriminative model using PyTorch and set it to evaluation mode, so as to use only the forward propagation of the model to extract high-level feature representations of the input image. Simultaneously, we loaded the gene discrimination model using the joblib library and called the weights of this model to extract the corresponding important gene features (Fig. 1 (g)). After acquiring the image and gene features, a weighted average linear aggregation network was used to fuse the two modalities and obtain the fused features (Fig. 1 (h)). The specific realization process is shown in Fig. 2. The basic principle of weighted average linear fusion is to weight and average the outputs of multiple features or models, where the weight of each feature or model is determined using methods such as a priori knowledge, experience, or cross-validation. Typically, the weights depend on the performance and contribution of each feature or model, and features or models with better performance may be assigned higher weights. In this experiment, the average weights of the image and gene features were obtained by evaluating the performance metrics (F1 scores) of the two feature extraction networks on the validation set and normalizing them. The weighted average linear fusion result \({F}_{ensemble}\) can be expressed as follows:
\({F}_{ensemble}={\sum }_{i=1}^{N}\frac{{S}_{i}}{{\sum }_{j=1}^{N}{S}_{j}}{F}_{i}\)
where \({S}_{i}\) is the performance metric of each feature-extraction network, \(N\) is the total number of feature extraction networks, \(\frac{{S}_{i}}{{\sum }_{j=1}^{N}{S}_{j}}\) is the corresponding weight of each feature extraction network, and \({F}_{i}\) is the output of each feature extraction network. The pseudocode implemented in this model is provided in Online Resource 3.
The advantage of this approach is the automated determination of weights based on performance, which allows better-performing features or models to influence the final fusion results, thus improving the overall model performance.
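A minimal NumPy sketch of this weighted average linear aggregation follows. The function and variable names are ours; reading the fused vector as a concatenation of the score-weighted modality features is our interpretation, since the reported fused dimensionality (256 image + 33 gene = 289 features) implies concatenation rather than an element-wise sum.

```python
import numpy as np

def fuse_features(feats, scores):
    """Normalize performance scores (e.g., validation F1) into weights
    S_i / sum_j S_j, scale each modality's feature vector, and
    concatenate the weighted vectors."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()                       # weights sum to 1
    return np.concatenate([wi * f for wi, f in zip(w, feats)])

# hypothetical feature vectors with the dimensionalities from the paper
img_feat = np.ones(256)    # image feature extractor output
gene_feat = np.ones(33)    # gene feature extractor output
fused = fuse_features([img_feat, gene_feat], scores=[0.9827, 1.0])
```

With the F1 scores reported later (98.27% and 100%), the normalized weights come out near 0.49/0.51, matching the values used in the experiments.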
Feature fusion layer framework
In this study, we used the MCJNMF algorithm to model the associations between macro- and micro-level data. This approach integrates both genomic and image data and helps identify common modules associated with diseases.
For the collagen fiber micrographs, 29 texture features were extracted from the images using the MatFiber and Haralick algorithms. In our investigation, we address two distinct data matrices: \({X}_{1}\), the feature matrix derived from the microscopic images, and \({X}_{2}\), the gene expression matrix. To reveal the shared underlying patterns within both datasets, we utilized a framework that decomposes the original matrices into a common basis matrix, denoted as \(W\), accompanied by distinct coefficient matrices \({H}_{I}\left(I=\text{1,2}\right)\) associated with each dataset [ 16 ]:
\(\underset{W,{H}_{I}\ge 0}{\text{min}}{\sum }_{I=1}^{2}{\parallel {X}_{I}-W{H}_{I}\parallel }_{F}^{2}\)
The absolute values of the Pearson correlation coefficients between the image features and the gene expression data were then computed, and the resulting matrix of correlation coefficients was defined as the a priori knowledge matrix \(A\), whose must-link constraints can be encoded by the following objective function:
\(\underset{{H}_{1},{H}_{2}}{\text{max}}\text{ tr}\left({H}_{1}A{H}_{2}^{T}\right)\)
where \({a}_{ij}\) is an element of the adjacency matrix \(A\), and its value indicates the degree of relevance between image feature \(i\) and gene \(j\).
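The prior matrix \(A\) can be built directly from the two data blocks. A minimal sketch with hypothetical dimensions (29 image texture features, 100 genes, 42 samples; the function name `prior_matrix` is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical data: 42 samples, 29 image texture features, 100 genes
img = rng.normal(size=(42, 29))
gene = rng.normal(size=(42, 100))

def prior_matrix(X1, X2):
    """a_ij = |Pearson correlation| between image feature i and gene j."""
    Z1 = (X1 - X1.mean(0)) / X1.std(0)        # z-score each image feature
    Z2 = (X2 - X2.mean(0)) / X2.std(0)        # z-score each gene
    return np.abs(Z1.T @ Z2) / X1.shape[0]    # 29 x 100 matrix of |corr|

A = prior_matrix(img, gene)
```

Each entry lies in [0, 1], so \(A\) can be used directly as the must-link weight matrix.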
In addition, orthogonal constraints are imposed on \(H\) using linear programming, and the overall objective function can be defined as follows:
\(\underset{W,{H}_{1},{H}_{2}\ge 0}{\text{min}}{\sum }_{I=1}^{2}{\parallel {X}_{I}-W{H}_{I}\parallel }_{F}^{2}-\lambda \text{tr}\left({H}_{1}A{H}_{2}^{T}\right)+{\gamma }_{1}{\parallel W\parallel }_{F}^{2}+{\gamma }_{2}{\sum }_{I=1}^{2}{\sum }_{j}{\parallel {h}_{\cdot j}^{\left(I\right)}\parallel }_{1}^{2}\)
where the parameter \(\lambda\) is the weight for the must-link constraint defined in \(A\), \({\gamma }_{1}\) is used to limit the growth of \(W\), and \({\gamma }_{2}\) is used to constrain \(H\).
The pseudocode for this algorithm is provided in Online Resource 4.
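To make the shared-basis factorization concrete, here is a stripped-down NumPy sketch of the core decomposition using standard multiplicative updates. It deliberately omits the must-link term, the orthogonality constraints, and the growth penalties, so it is a simplified illustration rather than the full MCJNMF algorithm of Online Resource 4.

```python
import numpy as np

def joint_nmf(X1, X2, K, iters=200, eps=1e-9):
    """Joint NMF: factor X1 ~ W H1 and X2 ~ W H2 with a shared basis W,
    minimizing the sum of squared Frobenius reconstruction errors."""
    rng = np.random.default_rng(0)
    n = X1.shape[0]
    W = rng.random((n, K))
    H1 = rng.random((K, X1.shape[1]))
    H2 = rng.random((K, X2.shape[1]))
    for _ in range(iters):
        # multiplicative updates keep all factors non-negative
        W *= (X1 @ H1.T + X2 @ H2.T) / (W @ (H1 @ H1.T + H2 @ H2.T) + eps)
        H1 *= (W.T @ X1) / (W.T @ W @ H1 + eps)
        H2 *= (W.T @ X2) / (W.T @ W @ H2 + eps)
    return W, H1, H2

# hypothetical scale: 42 samples, 29 image features, 100 genes, K = 7 modules
X1 = np.abs(np.random.default_rng(2).normal(size=(42, 29)))
X2 = np.abs(np.random.default_rng(3).normal(size=(42, 100)))
W, H1, H2 = joint_nmf(X1, X2, K=7)
```

Each column of \(W\) indexes one common module; the largest entries of the corresponding rows of \(H_1\) and \(H_2\) give the image features and genes assigned to that module.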
The models were trained and tested on an Intel® Core™ i9-13900K CPU @ 3.0 GHz, an NVIDIA RTX A5000 GPU, Python 3.8.7, PyTorch 2.1.0, and the Windows 11 operating system. To evaluate the classification performance of the different models, the accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were measured.
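These metrics can all be computed with scikit-learn. A small sketch with hypothetical labels and sigmoid scores:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# hypothetical true labels and sigmoid output scores for eight samples
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.65, 0.8, 0.7, 0.9, 0.2, 0.6, 0.3])
y_pred = (y_score >= 0.5).astype(int)      # threshold the sigmoid output

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
    "auc": roc_auc_score(y_true, y_score),  # AUC uses the raw scores
}
```

Note that accuracy, precision, recall, and F1 are computed from the thresholded predictions, whereas the ROC/AUC is computed from the raw scores.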
Picrosirius Red staining is a tissue-staining method commonly used to observe and analyze collagen fibers. Under polarized light, Picrosirius Red-stained collagen fibers appear green to red, and through color deconvolution and normalization, a tissue image can be decomposed into red- and green-channel images, where the red-channel image represents mature collagen fibers and the green-channel image represents immature collagen fibers. Combining the red and green channel images merges the information on mature and immature collagen fibers to obtain more comprehensive collagen fiber characteristics. The histopathological images used in this study were derived from a database of Picrosirius Red-stained skin collagen fiber micrographs created by Mascharak et al. [ 17 ], which included 1048 red-channel images and 1048 green-channel images. The images cover normal skin and skin tissue at week 2, month 1, and month 3 after intervention with PBS and verteporfin. In this experiment, we selected 246 microscopic images at specific time points after PBS intervention and 240 microscopic images at specific time points after verteporfin intervention as the scar group, and 306 images of uninjured skin as the normal group (including red- and green-channel images). The raw TIF images were converted to PNG for computer processing. Using the OpenCV addWeighted method, the red- and green-channel images of each sample were linearly combined with equal weights (0.5) to produce a merged image. After this process, we obtained a new dataset containing 273 micrographs of collagen fibers from normal skin and 123 micrographs from scarred skin. To address the training bias that may result from an insufficient data volume, a data augmentation strategy was employed that included flipping the images vertically and horizontally and rotating them by 90°.
Eventually, an augmented dataset was obtained that included 492 scar tissue images and 1092 normal tissue images. Subsequently, all images were resized to 500 × 500 pixels. Specific image data information is listed in Table 1.
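The channel-merging and augmentation steps can be sketched as follows. NumPy is used here to mirror `cv2.addWeighted` with equal weights; the function names are ours.

```python
import numpy as np

def merge_channels(red, green):
    """Equal-weight linear blend, equivalent to
    cv2.addWeighted(red, 0.5, green, 0.5, 0)."""
    return np.clip(0.5 * red + 0.5 * green, 0, 255).astype(np.uint8)

def augment(img):
    """Vertical flip, horizontal flip, and 90-degree rotation,
    as described in the augmentation strategy."""
    return [np.flipud(img), np.fliplr(img), np.rot90(img)]

red = np.full((500, 500), 200, np.uint8)    # dummy red-channel micrograph
green = np.full((500, 500), 100, np.uint8)  # dummy green-channel micrograph
merged = merge_channels(red, green)
augmented = augment(merged)
```

Each source image thus yields three additional augmented variants, which is consistent with the growth of the dataset reported in Table 1.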
Gene expression data were obtained from the Gene Expression Omnibus (GEO) database, a public repository created and maintained by the National Center for Biotechnology Information (NCBI) of the U.S. National Institutes of Health (NIH) that contains millions of gene expression samples from around the world. Researchers can retrieve publicly available gene expression data from the GEO database using accession numbers. In this experiment, all samples came from the GPL570 platform; therefore, each sample contained the same number of gene features (23,521). Information about the source and number of samples in the scar and normal groups is listed in Table 2. First, the gene expression profiles of the samples were loaded by accession number. The profiles were then filtered, and negative expression levels or obviously noisy values were set as missing. Next, the missing values were filled in using the mean-value method. The data were then log-transformed to approximately follow a normal distribution. Finally, the data were standardized to remove systematic errors and ensure the reliability of subsequent analyses. After this preprocessing, the constructed gene expression matrix of all samples had dimensions of 42 × 23,521 (42 samples × 23,521 genes).
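The preprocessing pipeline just described can be sketched compactly. This is an illustrative implementation on a tiny hypothetical matrix (the real one is 42 × 23,521); the function name is ours, and non-positive values stand in for the "negative or obviously noisy" entries.

```python
import numpy as np

def preprocess_expression(X):
    """Mask non-positive values as missing, mean-impute per gene,
    log2-transform, then z-score standardize each gene column."""
    X = X.astype(float)
    X[X <= 0] = np.nan                          # treat as missing
    col_mean = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_mean, idx[1])          # mean imputation per gene
    X = np.log2(X)                              # approximate normality
    mu, sd = X.mean(0), X.std(0)
    return (X - mu) / np.where(sd == 0, 1, sd)  # standardize; guard sd = 0

# hypothetical 4-sample x 3-gene matrix with one missing (negative) value
X = np.array([[2.0, 4.0, 8.0],
              [4.0, -1.0, 8.0],
              [8.0, 4.0, 8.0],
              [16.0, 16.0, 8.0]])
Z = preprocess_expression(X)
```

After standardization, every gene column has zero mean, so no single gene dominates model training by virtue of its expression scale.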
For image unimodal discrimination, using the grid search algorithm and cross-validation, we observed that the proposed model performed best when the learning rate was 0.0001 and the batch size was 64. In the hyperparameter configuration of the proposed CNN model, we chose CrossEntropyLoss as the loss function of the discriminative model to improve its convergence speed and performance. Considering the convergence speed and stability of the model, we configured the Adam optimizer and set the learning rate to 0.0001. The batch size was set to 64 and the number of training epochs to 50. Three cascaded 3 × 3 convolutional layers were used for feature extraction, which introduces more nonlinear transformations so that the network can better capture the complex patterns and features in the input data. Ablation experiments were designed based on the model structure to investigate the contribution of the added cascaded convolutional layers to the discriminative model. We compared the proposed CNN model with the three structural fine-tuning models listed in Table 2. ResNet_FT1 removed Conv4 and Conv5; ResNet_FT2 removed Conv3, Conv4, and Conv5; and ResNet_FT3 removed all four cascaded convolutional layers and used only Stage1-Stage2 of ResNet-50. The hyperparameter configuration of the structural fine-tuning models was the same as that of the proposed image discrimination model. Our experiments used 492 scar images and 1092 normal images as the image dataset, of which 70% were used for training and 30% for testing.
The loss rate variation, accuracy variation, and ROC curves for the test set during training are shown in Fig. 3. The performance metrics of the compared models are listed in Table 3. The experimental results show that the proposed CNN model has the best classification performance among the three structural fine-tuning models, improving the AUC by approximately 30% and the accuracy by 19.83% compared with the model without the four cascaded convolutional layers (ResNet_FT3). This shows that the convolutional blocks we incorporated yield good results. Compared with the model with only one channel shrinkage layer (ResNet_FT2), the proposed CNN model improves the AUC by approximately 15% and the accuracy by 9.81%. Compared with the model with one channel shrinkage layer and one feature extraction layer (ResNet_FT1), the proposed CNN model improves the AUC by approximately 7% and the accuracy by 3.92%. The proposed CNN model achieves the highest precision, recall, and F1 score, which indicates that it can better discriminate scar tissue images and reduce the false positive rate. Compared with the other three models, the proposed model also has the shortest training time and the lowest time cost.
Training process of unimodal discriminative models with different fine-tuning structures. ( a ) loss rate curve, ( b ) accuracy curve, and ( c ) ROC curve
In addition, we compared the proposed model with the fully transfer-learned ResNet-50 (ResNet_TL), VGG16 (VGG_TL), AlexNet (AlexNet_TL), and fine-tuned ResNet (ResNet_FT) models to evaluate its classification and feature extraction performance on scar tissue images. During model training, we used the hyperparameters listed in Table 4. The loss rate variation, accuracy variation, and ROC curves for the test set during training are shown in Fig. 4. The performance metrics of the compared models are listed in Table 5. The experimental results show that the proposed CNN model has the best classification performance among the pre-trained large models. Compared with the original ResNet-50 model (ResNet_TL) and the fine-tuned ResNet-50 model (ResNet_FT), the proposed CNN model improves the accuracy by 3.49% and 0.44% and the AUC by approximately 7% and 3%, respectively. The model sizes of ResNet_TL and ResNet_FT are 90 MB, whereas that of the proposed model is 16.8 MB, indicating that the proposed CNN model greatly reduces the computational cost while improving the discriminative accuracy. Compared with VGG_TL and AlexNet_TL, the AUC of the proposed CNN model is still 5% and 4% higher, respectively. In addition, its F1 score reaches the highest value of 98.27%, which indicates that the proposed CNN model can effectively achieve scar tissue classification while reducing the computing cost; where computing resources are limited, the proposed model therefore has higher practical value. In terms of the time cost of model training, the training time of AlexNet_TL was slightly lower than that of the proposed CNN model; however, the proposed CNN model achieved the optimum in all other metrics.
Training process of unimodal discriminative models with different pre-trained macromodels. ( a ) loss rate curve, ( b ) accuracy curve, and ( c ) ROC curve
For the gene model for scar tissue discrimination, a grid search over the solver and regularization ratio showed that model performance was optimal with the liblinear solver and a regularization ratio of 0.5; therefore, in this study, the penalty parameter of the logistic regression model was set to l1 and optimized using the liblinear solver, with the L1 regularization ratio set to 0.5. Adding the L1 regularization term to the logistic regression model produces a sparse solution, which can be applied to feature selection by compressing unimportant feature coefficients to zero. To verify the performance of the proposed logistic regression model with L1 regularization, we compared the proposed gene discrimination model with three fine-tuned models: the penalty parameter of LogisticRegression_FT1 was set to l2 with a regularization ratio of 0.5; the penalty parameter of LogisticRegression_FT2 was set to l1 with a regularization ratio of 0.1; and LogisticRegression_FT3 used no regularization term. In this experiment, 19 scar and 23 normal samples were used as the gene dataset, of which 60% were used for training and 40% for testing. Table 6 shows that the proposed gene discrimination model had the best discrimination performance among the three fine-tuned models. Compared with the L2 regularization method (LogisticRegression_FT1), the proposed model improves the accuracy by 17.65% and the AUC by approximately 5%, which demonstrates the effectiveness of the L1 regularization term on the constructed gene expression dataset.
When the regularization strength is increased, the fine-tuned model (LogisticRegression_FT2) shows decreases in accuracy, precision, F1 score, and AUC compared with the proposed model, which shows that the hyperparameters configured in the proposed model yield better discriminative performance. Compared with the logistic regression model without a regularization term (LogisticRegression_FT3), the proposed model improves the discrimination accuracy by 17.65% and the AUC by approximately 6%, which shows that including the L1 regularization term in the logistic regression model improves discrimination performance. Compared with the other three models, the proposed model also has the shortest training time and the lowest time cost.
For multimodal discrimination, the performance index (F1 score) of the image feature extraction network on the constructed image dataset (Table 1) was 98.27%, and that of the gene feature extraction network on the constructed gene expression dataset (Table 2) was 100%. After normalization, the weighted average weight of the image feature extraction network was set to 0.49 and that of the gene feature extraction network to 0.51. The image feature extraction network produced 256 features and the gene feature extraction network 33 features; therefore, the final multimodal feature vector obtained after weighted average fusion had 289 dimensions. In the training process, CrossEntropyLoss was used as the loss function. Considering the convergence speed and stability of the model, we configured the Adam optimizer and set the learning rate to 0.001. A Sigmoid classifier was used to classify normal and scar tissues with output scores in the range [0, 1]. Pairing the image data with the gene data yielded 9348 paired samples in the scar group (19 scar gene samples × 492 scar image samples) and 25,116 paired samples in the normal group (23 normal gene samples × 1092 normal image samples). The two groups of paired samples formed the multimodal dataset, of which 70% was used for training and 30% for testing.
To verify the contribution of the constructed unimodal feature extraction networks to the multimodal discriminative model, we designed an ablation experiment based on the structure of the image feature extraction network. We compared the proposed multimodal discriminative model with the five fine-tuned models listed in Table 6: Fusion_FT1 removes Conv4 and Conv5 of the image feature extraction network; Fusion_FT2 removes Conv3, Conv4, and Conv5 of the image feature extraction network; Fusion_FT3 removes all four cascaded convolutional layers, using only Stage1-Stage2 of the image feature extraction network; Fusion_FT4 changes the model weight coefficients \({S}_{1}\) and \({S}_{2}\) of the weighted aggregation network, setting \({S}_{1}\) to 0.6 and \({S}_{2}\) to 0.4; and Fusion_FT5 removes the weighted average aggregation network and directly concatenates the obtained multimodal features. The hyperparameter configuration of the fine-tuned models is the same as that of the image discrimination model proposed in this study.
The loss rate variation, accuracy variation, and ROC curves for the test set during training are shown in Fig. 5. The performance metrics of the compared models are listed in Table 7. The experimental results show that the proposed multimodal discriminative model exhibits the best classification performance among the fine-tuned multimodal discriminative models. Compared with Fusion_FT1, the proposed multimodal discriminative model improves the AUC by approximately 2% and the accuracy by 1.47%, indicating that the trained feature extraction network achieves good feature extraction. Compared with the Fusion_FT2 and Fusion_FT3 fine-tuned models, the proposed multimodal discrimination model has the highest precision and recall, indicating that the incorporated feature extraction network can extract high-level image features, which has a very positive effect on multimodal feature fusion discrimination. Compared with Fusion_FT4 and Fusion_FT5, the proposed multimodal discriminative model achieves a comparable AUC of 0.97 but the highest accuracy, precision, recall, and F1 score, which demonstrates the effective role of the weighted average linear network in feature fusion and its contribution to discriminative performance.
Multimodal discriminant model training results. ( a ) loss rate curve, ( b ) accuracy curve, and ( c ) ROC curve
For imaging genetics association analysis of scar tissue, we used the MatFiber and Haralick algorithms to extract 29 different textural features from the collagen fiber micrographs. These features provide a toolkit for understanding the intricate microstructure of skin tissue in depth, and these rich textural attributes support multifaceted exploration of the inherent differences and properties of various skin tissue types. For the gene expression data, this study involved 42 samples, each containing 23,521 genes. First, we performed a log2 transformation of the genes in all samples to enhance the centralization of the data and facilitate subsequent calculations. Next, the ComBat method was used to remove batch effects from all samples, eliminating potential inter-batch effects and yielding the final preprocessed gene expression data. The image feature data were then combined with the preprocessed gene expression data under the guidance of the MCJNMF algorithm. By choosing the parameters \(\lambda = 0.001\), \({\gamma }_{1} = 1\), \({\gamma }_{2} = 1\), \(K = 7\), and \(\alpha = 0.001\) [ 15 ], seven common modules were successfully extracted from the combined dataset. The feature information of each module is listed in Table 8. This integrated approach interweaves image features with gene expression features, revealing the intrinsic diversity of skin tissues.
This study demonstrated that the designed multi-functional scar tissue discrimination platform can accurately classify unimodal and multimodal input data, achieving objective scar tissue discrimination. To explore the mechanism of scar tissue formation in detail, subsequent analyses further examined the representation of collagen fibers in scar tissue and normal tissue at the macroscopic and imaging-genetics levels. At the macroscopic level, we characterized the density and orientation of collagen fibers, choosing the channel 32 image of the conv1 layer in the proposed CNN model as the feature extraction channel image (a choice based on the model architecture and domain knowledge, as the conv1 layer is comparatively responsive to texture features). Figure 6 shows the density and orientation characterization maps of collagen fibers in scar tissue and normal tissue. In addition, the density, circular standard deviation, and angular deviation of collagen fibers in the two groups were statistically analyzed (Fig. 7); these texture features differ distinctly between scar tissue and normal skin. Collagen fibers in scarred skin were significantly denser and more tightly packed than those in normal skin, consistent with the biological changes that occur during scar healing, and this provides a direct biological basis for the discrimination platform that was constructed. Statistical analyses were also performed to quantify textural differences in alignment strength and angular deviation: scarred skin had a smaller circular standard deviation and angular deviation, indicating that its collagen fibers tend to be centrally distributed and aligned, whereas collagen fibers in normal skin disperse in multiple directions. These statistical analyses provide objective quantitative evidence of the textural differences between scar tissue and normal skin.
Density characterization and arrangement characterization of collagen fibers in scarred and normal groups
Collagen fiber characterization results. ( a ) Statistical analysis of collagen fiber density. ( b ) Statistical analysis of collagen fiber circular standard deviation. ( c ) Statistical analysis of collagen fiber angular deviation
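The circular standard deviation used to quantify fiber alignment can be computed from fiber angles with the standard directional-statistics formula; the angle values below are invented to contrast a tightly aligned (scar-like) and a dispersed (normal-like) distribution.

```python
import numpy as np

def circular_std(angles_rad):
    """Circular standard deviation of a set of angles (radians).

    Computed from the mean resultant length R as sqrt(-2 * ln(R)).
    Tightly aligned angles give R near 1 and a value near 0; widely
    dispersed angles give R near 0 and a large value.
    """
    C = np.mean(np.cos(angles_rad))
    S = np.mean(np.sin(angles_rad))
    R = np.hypot(C, S)
    return float(np.sqrt(-2.0 * np.log(R)))

aligned = np.deg2rad([88, 90, 92, 89, 91])      # scar-like: tight alignment
dispersed = np.deg2rad([5, 80, 150, 220, 300])  # normal-like: multi-directional
```

For truly axial data (fiber orientations defined modulo 180°) the angles are usually doubled before applying this formula; that refinement is omitted here for brevity.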
DO enrichment analysis of the genes in the seven common modules revealed that the genes in module 4 are associated with collagen diseases, rheumatism, and systemic scleroderma (Fig. 8(a)). Such conditions stem from aberrant collagen fiber synthesis or organization, or from perturbations in collagen fiber-associated cellular signaling. This finding underscores the potential significance of module 4 in disease-linked biological processes, offering insight into the mechanisms underlying skin scarring and supporting the notion that this gene set is intrinsically linked to collagen fiber-associated biology. GO enrichment analysis revealed associations with terms such as “blood vessel diameter maintenance,” “regulation of tube size,” “vascular process in the circulatory system,” and “regulation of vasoconstriction,” indicating that these genes help regulate vascular and tubular structures and thereby maintain circulatory system functionality (Fig. 8(b)). This hints at a role for module 4 genes in controlling collagen fiber density and arrangement, in line with the observed distribution and characteristics of collagen fibers in scarred and normal skin. KEGG enrichment analysis identified pathways linked to collagen fibers, including the calcium signaling pathway and the MAPK signaling pathway (Fig. 8(c)). The calcium signaling pathway is pivotal in extracellular matrix synthesis, tissue structure maintenance, and collagen fiber-related cellular signaling [19, 20].
Similarly, the MAPK signaling pathway regulates collagen fiber synthesis, catabolism, cell proliferation, and apoptosis, all of which influence collagen fiber-associated functions and morphological attributes in scarred and normal skin [ 17 , 18 , 21 ]. These findings provide vital clues for understanding the molecular mechanisms underlying collagen fiber synthesis and tissue regulation.
Results of enrichment analysis of co-expressed genes from module 4. ( a ) DO enrichment results. ( b ) GO enrichment results. ( c ) KEGG enrichment results
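Enrichment tests of the kind reported for DO, GO, and KEGG terms are conventionally hypergeometric over-representation tests; a minimal sketch with invented counts (not the study's numbers) follows.

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) under a hypergeometric draw: the probability of seeing at
    least k term-annotated genes in a module of n genes, given K annotated
    genes among N background genes (a standard over-representation test).
    """
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Illustrative counts only: 20,000 background genes, 200 annotated with a
# term, and a 100-gene module that contains 8 of the annotated genes.
p = hypergeom_enrichment_p(20000, 200, 100, 8)
```

A small p-value here means the module contains far more annotated genes than chance would predict; enrichment tools then adjust such p-values for multiple testing across all terms.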
We identified potential biomarker genes associated with scar tissue through a sequence of differential and co-expression module analyses. The process began with the identification of 417 differential genes through differential analysis; these expression differences between scar and normal tissues are potentially intertwined with physiological and pathological collagen fiber-related processes. Using MCJNMF-based multimodal data association analysis, we identified seven co-expression modules, with module 4 emerging as the one correlated with collagen diseases and skin disorders, an insight reinforced by enrichment analysis of the 1212 genes within it. Intersecting the 417 differential genes with the module 4 genes yielded 19 potential biomarker genes (Fig. 9(a)). Refining this selection through ROC curve analysis, we retained 11 potential marker genes with AUC values exceeding 0.5 (Fig. 9(b)). A biological assessment of these marker genes revealed their diverse involvement in processes related to scarred skin. For instance, TRIM59-encoded proteins may modulate the cell cycle and apoptosis, potentially affecting collagen fiber production and repair [22], while TBC1D9-encoded proteins, involved in intracellular membrane trafficking, may regulate collagen fiber synthesis and distribution. These findings hint at their pivotal roles in biological processes linked to scarred skin.
( a ) Volcano maps of intersecting genes in module 4 and differentially expressed genes. ( b ) ROC curves for potential marker genes
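The two-step biomarker screen (intersecting differential genes with module 4 genes, then retaining genes whose expression separates scar from normal samples with an AUC above 0.5) can be sketched as follows; the gene sets and expression values are invented for illustration.

```python
def auc_from_groups(pos, neg):
    """AUC of a score separating two groups, via the Mann-Whitney relation:
    the fraction of (pos, neg) pairs in which the positive sample scores
    higher, with ties counted as half.
    """
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Step 1: intersect differential genes with module 4 genes (toy sets).
differential = {"TRIM59", "TBC1D9", "COL1A1", "ACTB"}
module4 = {"TRIM59", "TBC1D9", "MAPK1", "COL3A1"}
candidates = differential & module4

# Step 2: keep candidates whose expression discriminates scar vs. normal
# samples with AUC > 0.5 (expression values are invented; in this toy data
# TBC1D9 lands exactly at 0.5 and is therefore filtered out).
expr = {"TRIM59": ([5.1, 4.8, 5.3], [2.0, 2.4, 1.9]),   # (scar, normal)
        "TBC1D9": ([2.9, 3.0, 3.1], [3.1, 3.0, 2.9])}
markers = [g for g in sorted(candidates) if auc_from_groups(*expr[g]) > 0.5]
```

In the actual study the same logic runs over 417 differential genes and 1212 module genes, which is how the 19-gene and then 11-gene lists were obtained.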
In summary, these potential marker genes play roles in diverse cellular processes and pathways encompassing the cell cycle, intracellular membrane transport, immune regulation, and cell signaling. These processes are intricately linked to collagen fiber generation, repair, and regulation. This highlights the substantial involvement of these potential marker genes in the biological progression of scarred skin and provides invaluable insights into the molecular mechanisms underlying collagen fiber-related disorders.
In this study, we successfully established a versatile discriminative platform for identifying scar samples that synergistically integrates a residual network-based CNN model, a logistic regression model with L1 regularization, and a multimodal feature fusion technique with a weighted-average aggregation network, supporting both unimodal and multimodal data inputs. In addition, characterization of collagen fiber features extracted from the 32-channel images of the conv1 layer in the proposed CNN model revealed significant changes in the density and arrangement of collagen fibers in scarred skin. These changes suggest that the microstructural properties of collagen fibers are altered with disease state, providing insight into the intricate biological properties of these fibers. DO, GO, and KEGG enrichment analyses played key roles in identifying genes closely associated with collagen fibers. DO enrichment highlighted the close association of module 4 with diseases involving irregular collagen fibers, such as collagen disease, rheumatism, and systemic scleroderma, corroborating our genetic screening results and reinforcing the biological relevance of the identified genes. GO enrichment highlighted the contribution of these genes to regulating vascular and ductal structures, maintaining circulatory system functions, and other vital biological processes. KEGG enrichment highlighted their critical roles in collagen fiber synthesis, extracellular matrix regulation, and cellular signaling, pointing to their deep involvement in scarring and their regulatory roles in collagen fiber shifts between scarred and normal skin.
Furthermore, our selection of potential biomarker genes related to collagen fibers within the scar tissue derived from module 4 and differentially expressed genes revealed a diverse array of biological functions. Delving into the biological roles of these potential markers, it is evident that these genes participate in multiple biological processes, such as cell cycle regulation, intracellular membrane transport, immune regulation, and cell signaling. These functions are intricately connected to the creation, repair, and control of collagen fibers. This not only offers cues for delving deeper into the molecular mechanisms of diseases related to scarred skin but also provides promising molecular targets for future therapeutic strategies.
In conclusion, this study establishes a versatile platform for scar tissue discrimination and makes an important contribution to unraveling the molecular basis of collagen fiber-related diseases. We strongly believe that these findings will provide new approaches for the treatment, diagnosis, and prevention of skin scarring and valuable references for broader biomedical research efforts.
Here we also note the limitations of the current work. First, the proposed multimodal discriminant model has so far been validated only on customized multimodal datasets and does not yet fully address the matching of multimodal data. Future research should therefore focus on matching image data with gene expression data; constructing more complete and reliable datasets is crucial to ensuring the reliability and validity of the platform in clinical applications. Second, beyond further work in computer-aided diagnosis, future studies will focus on biological validation of the pathogenic mechanisms of scarring. The 11 potential targets of scar pathogenesis identified in this study will be important starting points: relevant biological experiments can validate their exact roles and mechanisms in the process of scar formation. To this end, cellular and animal models of scar tissue will be established to simulate the biological process of scar formation and provide a reliable experimental platform for validation. In-depth study of the functions of these biomarker genes will clarify their roles and regulatory mechanisms during scar formation, providing new theoretical and practical support for scar treatment. In addition, combining the results of biological experimental validation with clinical practice will advance the clinical translation of these findings and enable more effective treatment and management programs for scar patients.
The gene datasets (GSE63107/GSE92566/GSE162904/GSE8056/GSE7890) used in this work are available for download from GEO ( https://www.ncbi.nlm.nih.gov/geo/ ). Image datasets are available at https://github.com/shamikmascharak/Mascharak-et-al-ENF . The code for this study is available on GitHub ( https://github.com/xiaoqianhu1/Scar-discrimination-model.git ).
[1] Lin X, Lai Y (2024) Scarring skin: mechanisms and therapies. Int J Mol Sci 25(3):1458. https://doi.org/10.3390/ijms25031458
[2] Foster DS et al (2021) Integrated spatial multiomics reveals fibroblast fate during tissue repair. Proc Natl Acad Sci U S A 118(41). https://doi.org/10.1073/pnas.2110025118
[3] Alharbi F, Vakanski A (2023) Machine learning methods for cancer classification using gene expression data: a review. Bioengineering 10(2):173. https://doi.org/10.3390/bioengineering10020173
[4] Gupta S, Gupta MK, Shabaz M, Sharma A (2022) Deep learning techniques for cancer classification using microarray gene expression data. Front Physiol 13:952709. https://doi.org/10.3389/fphys.2022.952709
[5] Fomovsky GM, Holmes JW (2010) Evolution of scar structure, mechanics, and ventricular function after myocardial infarction in the rat. Am J Physiol Heart Circ Physiol 298(1):H221–H228. https://doi.org/10.1152/ajpheart.00495.2009
[6] Pham TTA, Kim H, Lee Y, Kang HW, Park S (2021) Deep learning for analysis of collagen fiber organization in scar tissue. IEEE Access 9:101755–101764. https://doi.org/10.1109/ACCESS.2021.3097370
[7] A RS, Chamola V, Hussain Z, Albalwy F, Hussain A (2024) A novel end-to-end deep convolutional neural network based skin lesion classification framework. Expert Syst Appl 246:123056. https://doi.org/10.1016/j.eswa.2023.123056
[8] Hekler A et al (2019) Superior skin cancer classification by the combination of human and artificial intelligence. Eur J Cancer 120:114–121. https://doi.org/10.1016/j.ejca.2019.07.019
[9] Hilal AM et al (2022) Feature subset selection with optimal adaptive neuro-fuzzy systems for bioinformatics gene expression classification. Comput Intell Neurosci 2022:1698137. https://doi.org/10.1155/2022/1698137
[10] Lavanya K, Rambabu P, Suresh GV, Bhandari R (2023) Gene expression data classification with robust sparse logistic regression using fused regularisation. Int J Ad Hoc Ubiquitous Comput 42(4):281–291. https://doi.org/10.1504/IJAHUC.2023.130470
[11] Elbashir MK, Ezz M, Mohammed M, Saloum SS (2019) Lightweight convolutional neural network for breast cancer classification using RNA-Seq gene expression data. IEEE Access 7:185338–185348. https://doi.org/10.1109/ACCESS.2019.2960722
[12] Ghoniem RM, Algarni AD, Refky B, Ewees AA (2021) Multi-modal evolutionary deep learning model for ovarian cancer diagnosis. Symmetry 13(4):643. https://doi.org/10.3390/sym13040643
[13] Cai M et al (2023) A progressive phased attention model fused histopathology image features and gene features for lung cancer staging prediction. Int J CARS 18(10):1857–1865. https://doi.org/10.1007/s11548-023-02844-y
[14] Wang W, Kong W, Wang S, Wei K (2022) Detecting biomarkers of Alzheimer's disease based on multi-constrained uncertainty-aware adaptive sparse multi-view canonical correlation analysis. J Mol Neurosci 72(4):841–865. https://doi.org/10.1007/s12031-021-01963-y
[15] Deng J, Zeng W, Kong W, Shi Y, Mou X, Guo J (2020) Multi-constrained joint non-negative matrix factorization with application to imaging genomic study of lung metastasis in soft tissue sarcomas. IEEE Trans Biomed Eng 67(7):2110–2118. https://doi.org/10.1109/TBME.2019.2954989
[16] Lee DD, Seung HS (1999) Learning the parts of objects by non-negative matrix factorization. Nature 401(6755):788–791. https://doi.org/10.1038/44565
[17] Jia Y-L, Liu X-J, Wen H, Zhan Y-P, Xiang M-H (2019) The expression of MAPK signaling pathways in conjunctivochalasis. Int J Ophthalmol 12(11):1801–1806. https://doi.org/10.18240/ijo.2019.11.21
[18] Yang C-C et al (2023) 17β-estradiol inhibits TGF-β-induced collagen gel contraction mediated by human Tenon fibroblasts via Smads and MAPK signaling pathways. Int J Ophthalmol 16(9):1441–1449. https://doi.org/10.18240/ijo.2023.09.10
[19] Lu T et al (2023) MDFI regulates fast-to-slow muscle fiber type transformation via the calcium signaling pathway. Biochem Biophys Res Commun 671:215–224. https://doi.org/10.1016/j.bbrc.2023.05.053
[20] Attwaters M, Hughes SM (2022) Cellular and molecular pathways controlling muscle size in response to exercise. FEBS J 289(6):1428–1456. https://doi.org/10.1111/febs.15820
[21] Shang G-K et al (2020) Sarcopenia is attenuated by TRB3 knockout in aging mice via the alleviation of atrophy and fibrosis of skeletal muscles. J Cachexia Sarcopenia Muscle 11(4):1104–1120. https://doi.org/10.1002/jcsm.12560
[22] Zhang P, Zhang H, Wang Y, Zhang P, Qi Y (2019) Tripartite motif-containing protein 59 (TRIM59) promotes epithelial ovarian cancer progression via the focal adhesion kinase (FAK)/AKT/matrix metalloproteinase (MMP) pathway. Med Sci Monit 25:3366–3373. https://doi.org/10.12659/MSM.916299
The authors declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the Natural Science Foundation of Shanghai (No. 18ZR1417200).
Xiaoqian Hu and Yaling Yu contributed equally to this work and share first authorship.
Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
Xiaoqian Hu, Yaling Yu & Gen Wen
College of Information Engineering, Shanghai Maritime University, 1550 Haigang Ave, Shanghai, 201306, P. R. China
Xiaoqian Hu, Wei Kong & Shuaiqun Wang
Institute of Microsurgery on Extremities, Shanghai Sixth People’s Hospital, Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
Xiaoqian Hu: Conceptualization, Data Curation, Formal Analysis, Methodology, Software, Validation, Writing - Original Draft Preparation. Yaling Yu: Investigation, Writing - Review & Editing, Supervision. Wei Kong: Methodology, Writing - Original Draft Preparation, Investigation, Data Curation, Validation. Shuaiqun Wang: Visualization, Validation. Gen Wen: Conceptualization, Methodology, Project Administration, Funding Acquisition.
Correspondence to Wei Kong or Gen Wen.
Conflict of interest: The authors declare no conflicts of interest.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Hu, X., Yu, Y., Kong, W. et al. Multi-functional scar tissue discrimination platform construction and exploration of molecular mechanism for scar formation. Appl Intell (2024). https://doi.org/10.1007/s10489-024-05625-5
Accepted: 15 June 2024
Published: 16 August 2024
DOI: https://doi.org/10.1007/s10489-024-05625-5
The construction methodology or execution planning services will be provided in accordance with contract requirements. The level I works master schedule will be based on the tender scheme program and will be submitted initially after the contract award. ... The proposed arrangements will be submitted in due course. The tower crane would be ...
Below you can find a complete Project Specific Construction Methodology. Following are the sections or table of content included in the document: 1.0 Civil Works 2.0 Block Work 3.0 Plastering 4.0 Ceramic Floor and Wall Tiles - Granite and Terrazzo Tiles 5.0 Painting 6.0 Aluminium 7.0 Suspended Ceilings 8.0 External Works 9.0 Mechanical, Electrical &…
Tender for Proposed Rixs Creek Rail Loop & Associated Infrastructure Construction Methodology - Page 1 1.0 Introduction & Project Scope 1.1 Introduction Rixs Creek Mine is proposing the construction of a 5.6km Rail Loop which departs from the Northern Line at approximately 223km 500m. A new train load-out facility and reclaim tunnel would be
M2 Upgrade Environmental Assessment. 7. Construction methodology and staging. This section provides an overview of the construction methodology and staging. It provides a description of proposed construction compound locations, site access and service relocations. Director-General's Requirements.
9. Construction Planning 9.1 Basic Concepts in the Development of Construction Plans. Construction planning is a fundamental and challenging activity in the management and execution of construction projects. It involves the choice of technology, the definition of work tasks, the estimation of the required resources and durations for individual tasks, and the identification of any interactions ...
Of the many different project management methodologies suitable for construction work, these six have proven the most effective. Advanced Work Packaging (AWP), The Critical Path Method (CPM), Critical Chain Project Management (CCPM), Lean Project Management , The Project Management Book of Knowledge (PMBOK), and. The Waterfall Method.
Writing a methodology for a construction or engineering bid in the UK. Most construction industry professionals generally agree that the price and methodology are both critical factors for writing a successful bid, tender or proposal and securing major contracts.
Here are some construction planning tips to make sure you create the best possible construction plan. 1. Assemble the Right Team. Not every construction project is the same; therefore, the team you assemble to execute the project should have the experience and skillset to do the work properly.
The third refers to new technologies which can be adopted to construct the facility, such as new equipment or new construction methods. ... and proposed an array of appropriate structural systems for steel buildings of specified heights as shown in Figure 3-1. By choosing an appropriate structural system, an engineer can use structural ...
Six of the most common project delivery methods in construction are Design-Bid-Build (D-B-B), Design-Build (D-B), Construction Manager at Risk (CMAR), Construction Management Multi-Prime (CMMP), Public-Private Partnership (PPP or P3), and Integrated Project Delivery (IPD). Choosing the right project delivery method is a crucial step as it sets ...
The results obtained allowed the identification of current practices, process problems, the information involved, and the knowledge required to carry out the selection of construction methods for a construction project. A map of the knowledge associated with this process is proposed.
1. Introduction. Given the impact construction methods have on productivity, quality, and cost, their selection is a key decision for the proper development of a construction project, and it is one of the main factors affecting the productivity and efficiency of construction projects []. It is also considered one of the five potential areas of productivity loss according to the European ...
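To make the role of the identified decision criteria concrete, the sketch below shows one simple way such criteria could drive a comparison of candidate construction methods: a weighted-sum score. The criterion names are those reported in the study; the weights, the 1–5 ratings, and the candidate methods are hypothetical placeholders, not values from the research.

```python
# Illustrative sketch only -- not the system proposed in the paper.
# Criterion names come from the study; weights and ratings are hypothetical.

# Hypothetical criterion weights (sum to 1.0).
WEIGHTS = {
    "project duration": 0.25,
    "cost": 0.25,
    "product characteristics": 0.15,
    "construction method characteristics": 0.20,
    "environmental characteristics": 0.15,
}

def score(ratings: dict) -> float:
    """Weighted-sum score for one candidate method (ratings on a 1-5 scale)."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Hypothetical ratings for two candidate methods.
candidates = {
    "precast concrete": {
        "project duration": 5, "cost": 3, "product characteristics": 4,
        "construction method characteristics": 4,
        "environmental characteristics": 3,
    },
    "cast in place": {
        "project duration": 3, "cost": 4, "product characteristics": 4,
        "construction method characteristics": 5,
        "environmental characteristics": 4,
    },
}

# Rank candidates by score; the highest-scoring method is preferred.
best = max(candidates, key=lambda m: score(candidates[m]))
print(best, round(score(candidates[best]), 2))
```

A structured scoring scheme of this kind is one way to make the decision explicit and comparable across projects, in contrast to the intuition-driven practice the case studies describe; the paper's actual knowledge system may use a different aggregation method.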