The comparison of the experiments and the present calculations based on absorptivity is given in Fig. The details of this probabilistic model can be found in the authors' previous study. The computational cost is further reduced by treating the wavelength as an input variable of the surrogate model and calculating the EQE accordingly.
Once an optimization study has been carried out for a base case, the present method can be used to optimize a solar cell structure with the same geometry but different materials. For this purpose, a base case and transfer cases are selected as follows. These materials are widely used in thin-film solar cells. The TL-1 case was previously designed and optimized by the authors36,42 using surrogate-based and direct optimization methods.
TL-2 is optimized for the first time. As shown in Fig., the hidden layer of the trained base-case model is transferred to the other cases. In the next section, the results are presented for the base case and the transfer cases, with emphasis on the training and validation performance (mean squared error).
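The hidden-layer transfer described above can be sketched as follows. This is a hedged illustration using scikit-learn's `MLPRegressor` on synthetic stand-in data, not the authors' implementation: copying the base case's first-layer weights into the transfer-case network as an initialization is one simple realization of the idea.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_base = rng.uniform(size=(200, 4))             # abundant base-case samples
y_base = np.sin(X_base.sum(axis=1))
X_tl = rng.uniform(size=(40, 4))                # scarce transfer-case samples
y_tl = np.cos(X_tl.sum(axis=1))

# Train the base-case surrogate from scratch.
base = MLPRegressor(hidden_layer_sizes=(12,), max_iter=1000,
                    random_state=0).fit(X_base, y_base)

# Transfer-case surrogate: the first fit allocates the weight arrays, then the
# hidden layer is overwritten with the base-case weights and training resumes
# from that initialization (warm_start=True keeps the injected weights).
tl = MLPRegressor(hidden_layer_sizes=(12,), max_iter=1000,
                  warm_start=True, random_state=0)
tl.fit(X_tl, y_tl)
tl.coefs_[0] = base.coefs_[0].copy()            # transferred hidden-layer weights
tl.intercepts_[0] = base.intercepts_[0].copy()
tl.fit(X_tl, y_tl)                              # continue training from transfer
```

With far fewer transfer-case samples, starting from the base case's learned features rather than a random initialization is what yields the accuracy gains discussed below.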
We will also discuss the computational cost in terms of the required number of simulation iterations. The base case is trained on the generated data points, with part of them used as the training set and the rest for validation.
The number of neurons in the hidden layer is determined based on the principle of minimum validation error, as follows: the in-sample and out-of-sample errors are recorded as the number of neurons in the hidden layer is increased, and the network configuration providing the minimum out-of-sample error is selected for use in the optimization. This procedure is repeated 10 times to reduce the chance of the training algorithm being trapped in local optima. The optimization is also repeated 10 times using all of the NN models obtained, which yields 10 candidate optimal points. These points are run through the high-fidelity FDTD model, and the point with the highest function value is selected.
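The neuron-selection procedure above can be sketched as follows. This is a minimal illustration with scikit-learn's `MLPRegressor` on synthetic stand-in data (the actual study uses FDTD-generated samples and 10 repetitions; fewer restarts and candidate sizes are used here for brevity).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                  # stand-in design variables
y = np.sin(X.sum(axis=1))                       # stand-in EQE-like response
X_tr, X_val, y_tr, y_val = X[:160], X[160:], y[:160], y[160:]

best = None
for n_hidden in (4, 8, 12):                     # candidate hidden-layer sizes
    for restart in range(3):                    # restarts guard against local optima
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=500,
                           random_state=restart).fit(X_tr, y_tr)
        # out-of-sample (validation) mean squared error
        err = float(np.mean((net.predict(X_val) - y_val) ** 2))
        if best is None or err < best[0]:
            best = (err, n_hidden, net)         # keep min-validation-error config
```

The configuration in `best` then plays the role of the surrogate carried into the optimization stage.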
The number of neurons in the hidden layer for the base case is selected as 12 based on the results in Fig.
Then the optimization is carried out with all of the generated 12-neuron NN models. The evolution of the EQE during the surrogate-based optimization iterations is presented in Fig.
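The candidate-selection step — one optimum per surrogate, re-checked against the high-fidelity model — can be sketched as below. Everything here is a hypothetical stand-in: `high_fidelity` is an analytic placeholder for the FDTD EQE evaluation, and the surrogates are the same function plus a small synthetic model error.

```python
import numpy as np
from scipy.optimize import minimize

def high_fidelity(x):
    # Analytic placeholder for an FDTD EQE evaluation (hypothetical).
    return 1.0 - np.sum((x - 0.3) ** 2)

# Ten imperfect surrogates: high-fidelity response plus a small model error.
surrogates = [lambda x, s=s: high_fidelity(x) + 0.02 * np.sin(3 * x.sum() + s)
              for s in range(10)]

# Each surrogate proposes its own optimum (maximization via negation).
candidates = [minimize(lambda x, f=f: -f(x), x0=np.full(4, 0.5)).x
              for f in surrogates]

# Re-evaluate every candidate with the high-fidelity model; keep the best.
best = max(candidates, key=high_fidelity)
```

Running only the 10 final candidates through the expensive model, rather than every optimizer iteration, is what keeps the number of high-fidelity evaluations low.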
In order to demonstrate the proposed approach, two material sets different from the base case are considered. First, the same steps as in the base case are followed without the transfer learning framework, as a comparison. In these cases, part of the data points are used as the training set and the rest for validation.
The prediction performance using transfer learning is presented and compared with the traditional method in Fig. Furthermore, although the improvement in the TL-2 case is not as significant as in TL-1, using the transfer layer reduces the error to almost half of that of the TL-2 case without transfer learning. The reason for this less significant improvement is that the validation error of the TL-2 case without the transfer layer is already similar to that of the base case.
As can be seen from Fig., the relation between the errors of the transfer and base cases suggests that the more accurate the base case is, the more the validation error is reduced. Conversely, if the base case is less accurate than the transfer cases, the prediction performance can even become worse. This is known as negative transfer, an undesirable phenomenon in transfer learning applications. The effect of negative transfer on the prediction accuracy is illustrated in Fig.
The results are compared with the previous optimization studies for the same 5-layer a-Si solar cell36,42. The results obtained using transfer learning are in good agreement with the direct optimization results for both cases. The optimized geometry in the TL-1 case is also very close to that of the previous study. In the other study42, a regression-tree-based optimizer, as well as simulated annealing on direct FDTD simulations, was used to find the optimal solution.
However, since the objective function in that study42 is slightly different from the present objective function, a deviation between the results of the two studies is expected. The present study achieved a slightly higher EQE than the previous result. In this paper, a novel methodology of multilayer neural-network-based transfer optimization for design problems was presented.
The proposed method was applied to a case study in which a multilayer thin-film solar cell was to be optimized for the best external quantum efficiency. The results showed that the prediction accuracy can be improved using transfer learning. Furthermore, the number of high-fidelity function evaluations during surrogate-based optimization can be decreased without sacrificing accuracy. The datasets generated during the current study are available from the corresponding author on reasonable request.
Insights on transfer optimization: Because experience is the best teacher. IEEE Trans.
Quattoni, A. Transfer learning for image classification with sparse prototype representations. Computer Vision and Pattern Recognition, 1–8.
Raina, R. Self-taught learning: Transfer learning from unlabeled data.
Fung, G. Text classification without negative examples revisit. Data Eng.
Dai, W. Boosting for transfer learning.
Zanini, P. Transfer learning: a Riemannian geometry framework with applications to brain-computer interfaces.
Waytowich, N. Spectral transfer learning using information geometry for a user-independent brain-computer interface.
Choi, K.
Lin, Y. Improving EEG-based emotion classification using conditional transfer learning.
Pardoe, D. Boosting for regression transfer.
Jamshidi, P. Transfer learning for improving model predictions in highly configurable software.
Lindner, C. Adaptable landmark localisation: Applying model transfer learning to a shape model matching system. In Lecture Notes in Computer Science.
Gao, J.
Hagan, M. Neural network design. PWS Publishing Company.
Were, K. A comparative assessment of support vector regression, artificial neural networks, and random forests for predicting and mapping soil organic carbon stocks across an Afromontane landscape.
Iwata, T. Improving output uncertainty estimation and generalization in deep learning via neural network Gaussian processes.
Yosinski, J.