Please use this identifier to cite or link to this item: http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764
Full metadata record
DC Field | Value | Language
dc.contributor.author | Paravee Maneejuk | en_US
dc.contributor.author | Torben Peters | en_US
dc.contributor.author | Claus Brenner | en_US
dc.contributor.author | Vladik Kreinovich | en_US
dc.date.accessioned | 2022-10-16T06:49:00Z | -
dc.date.available | 2022-10-16T06:49:00Z | -
dc.date.issued | 2022-01-01 | en_US
dc.identifier.issn | 21984190 | en_US
dc.identifier.issn | 21984182 | en_US
dc.identifier.other | 2-s2.0-85135508602 | en_US
dc.identifier.other | 10.1007/978-3-030-97273-8_14 | en_US
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85135508602&origin=inward | en_US
dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764 | -
dc.description.abstract | In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually only 1–10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One of the natural ways to get a fast-computing approximation to a transformation is to train the corresponding neural network. The problem is that if we train A-to-B and B-to-A networks separately, the resulting approximate transformations are only approximately inverse to each other. As a result, each time we transform back and forth, we add new approximation error, and the accumulated error may become significant. In this paper, we show how we can avoid this accumulation. Specifically, we show how to train A-to-B and B-to-A neural networks so that the resulting transformations are (almost) exact inverses. | en_US
dc.subject | Computer Science | en_US
dc.subject | Decision Sciences | en_US
dc.subject | Economics, Econometrics and Finance | en_US
dc.subject | Engineering | en_US
dc.subject | Mathematics | en_US
dc.title | How to Train A-to-B and B-to-A Neural Networks So That the Resulting Transformations Are (Almost) Exact Inverses | en_US
dc.type | Book Series | en_US
article.title.sourcetitle | Studies in Systems, Decision and Control | en_US
article.volume | 429 | en_US
article.stream.affiliations | The University of Texas at El Paso | en_US
article.stream.affiliations | Gottfried Wilhelm Leibniz Universität Hannover | en_US
article.stream.affiliations | Chiang Mai University | en_US
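The abstract describes training A-to-B and B-to-A networks jointly so that their composition is (almost) the identity. The record contains no details of the paper's actual method, so the following is only a plausible illustration, not the authors' algorithm: it adds a cycle-consistency penalty ||g(f(a)) - a||^2 to the two ordinary fit losses. Tiny linear maps stand in for the neural networks, and all names (W_ab, W_ba, lam) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: representation B is a fixed invertible linear transform of A.
T = np.array([[0.8, -0.6], [0.6, 0.8]])   # ground-truth A -> B map (a rotation)
A = rng.normal(size=(500, 2))
B = A @ T.T

# Learnable linear "networks" (minimal stand-ins for real neural networks).
W_ab = rng.normal(scale=0.1, size=(2, 2))  # approximates A -> B
W_ba = rng.normal(scale=0.1, size=(2, 2))  # approximates B -> A

lr, lam = 0.05, 1.0   # learning rate; weight of the cycle-consistency penalty
n = len(A)

for step in range(2000):
    B_hat = A @ W_ab.T                 # forward prediction A -> B
    A_hat = B @ W_ba.T                 # forward prediction B -> A
    A_cyc = (A @ W_ab.T) @ W_ba.T      # round trip A -> B -> A

    # Gradients of the mean-squared losses:
    #   fit:   ||B_hat - B||^2 / n   and   ||A_hat - A||^2 / n
    #   cycle: lam * ||A_cyc - A||^2 / n  (pushes the maps to be mutual inverses)
    g_ab = 2 * (B_hat - B).T @ A / n
    g_ba = 2 * (A_hat - A).T @ B / n
    r = 2 * lam * (A_cyc - A) / n      # round-trip residual (scaled)
    g_ba += r.T @ (A @ W_ab.T)         # d(cycle)/d W_ba
    g_ab += (W_ba.T @ r.T) @ A         # d(cycle)/d W_ab via the chain rule

    W_ab -= lr * g_ab
    W_ba -= lr * g_ba

round_trip_err = np.mean((A @ W_ab.T @ W_ba.T - A) ** 2)
print(f"round-trip MSE: {round_trip_err:.2e}")
```

Training the two maps separately would leave a small but compounding round-trip error; the shared cycle term drives the composition toward the identity, which is the accumulation problem the abstract raises.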
Appears in Collections:CMUL: Journal Articles

Files in This Item:
There are no files associated with this item.


Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.