This repository was archived by the owner on Jan 10, 2025. It is now read-only.

Commit f9bcc35

Ilia Lazarev (kephircheek) authored and committed
FIX typos from Tim
1 parent 6d3757a commit f9bcc35

File tree

1 file changed: +5 −5 lines changed


manuscript.tex

Lines changed: 5 additions & 5 deletions
@@ -45,7 +45,7 @@
 \begin{abstract}
 Unsupervised machine learning is one of the main techniques employed in artificial intelligence.
 Quantum computers offer opportunities to speed up such machine learning techniques.
-Here, we propose a realization of quantum assisted unsupervised data clustering using the self-organizing feature map, a type of artificial neural network.
+Here, we introduce an algorithm for quantum assisted unsupervised data clustering using the self-organizing feature map, a type of artificial neural network.
 We make a proof-of-concept realization of one of the central components on the IBM Q Experience
 and show that it allows us to reduce the number of calculations in a number of clusters.
 We compare the results with the classical algorithm on a toy example of unsupervised text clustering.
@@ -83,11 +83,11 @@ \section{Introduction}

 At the same time, non-neural-network based hybrid quantum classical algorithms have become a new direction of significant interest \cite{mcclean2016,arute2020,akshay2020}.
 Such hybrid algorithms involve quantum circuits used for part of the execution and is typically trained in a classical learning loop.
-In particular, the Quantum Approximate Optimization Algorithm was developed to find approximate solutions to combinatorial optimization problems \cite{farhi2014,farhi2016}
+In particular, the Quantum Approximate Optimization Algorithm (QAOA) was developed to find approximate solutions to combinatorial optimization problems \cite{farhi2014,farhi2016}
 and designed for problems such as MAX-CUT and Grover's algorithm \cite{arute2020,akshay2020,wang2018,jiang2017,huang2019,wecker2016,pagano2019,byrnes2018}.
 Another example of a well-known hybrid quantum classical algorithm is the Variational Quantum Eigensolver (VQE) for applications in quantum simulations
 \cite{kandala2017,aspuru-guzik2005,lanyon2010,peruzzo2014}.
-Currently, it is believed that implementation of quantum neural networks and hybrid quantum classical algorithms can be the main test bed to achieve practical quantum supremacy on noisy intermediate scale quantum (NISQ) devices \cite{preskill2018}.
+Currently, it is believed that implementation of quantum neural networks and hybrid quantum classical algorithms can be the main test bed to achieve practical quantum supremacy on Noisy Intermediate Scale Quantum (NISQ) devices \cite{preskill2018}.

 In this paper, we develop a hybrid quantum-assisted SOFM (QASOFM)
 and apply it to the data clustering problem in an unsupervised manner. The idea is based on the use of the Hamming distance as a distance metric for the training the SOFM that allows, in the quantum case, to reduce the number of distance calculations in the number of clusters and thus to speed up the original classical protocol.
@@ -172,7 +172,7 @@ \subsection{The classical algorithm}
 \left(\vec{x}(t) - \vec{w}_{i}(t)\right) .
 \end{equation}
 %
-Here, $\alpha(t)$ is the monotonically decreasing learning rate and $\theta(c, i, t)$ is the neighborhood function (usually taken as a Gaussian or delta function) which defines the vicinity of the BMU labeled by an index $c$, the weights of the neighbors in the vicinity should also be adjusted in the same manner as for the BMU. This expression can be understood in the following way: if a component of the input vector $\vec{x}(t)$ is greater than the corresponding weight $ \vec{w}_{i}(t) $, increase the weight of the BMU and the weights indexed by $i$ and defined with the neighborhood function by a small amount defined by the learning rate $\alpha(t)$; if the input component is smaller than the weight, decrease the weight by a small amount. The larger the difference between the input component and the weight component, the larger the increment (decrement).
+Here, $\alpha(t)$ is the monotonically decreasing learning rate and $\theta(c, i, t)$ is the neighborhood function (usually taken as a Gaussian or delta function) which defines the vicinity of the BMU labeled by an index $c$, the weights of the neighbors in the vicinity should also be adjusted in the same manner as for the BMU. This expression can be understood in the following way: if a component of the input vector $\vec{x}(t)$ is greater than the corresponding weight $ \vec{w}_{i}(t) $, increase the weight of the BMU and the weights indexed by $i$ and defined with the neighborhood function by a small amount defined by the learning rate $\alpha(t)$; if the input component is smaller than the weight, decrease the weight by a small amount. The larger the difference between the input component and the weight component, the larger the increment or decrement.

 Intuitively, this procedure can be geometrically interpreted as iteratively moving the cluster vectors defined by the corresponding weight $ \vec{w}_{i}(t) $ (blue triangles on Fig.~\ref{fig:sofm_fitting}) in space one at a time in a way that ensures each move is following the current trends inferred from their distances to the input objects defined by $\vec{x}(t)$ (red circles on Fig.~\ref{fig:sofm_fitting}).
@@ -386,7 +386,7 @@ \subsection{Optimized quantum scheme for Hamming distance calculation}
 The Hamming distance measured in this way is bounded $0 \leq d_{i,j}^H \leq 1$,
 where $0$ indicates that $x_i$ and $y_j$ are identical and $1$ means they are the completely opposite in terms of their pairwise binary coordinates.

-The number of controlled gate operations to define the full distance matrix matches the number of controlled gate operations in \cite{trugenberger2001}, where the original algorithm for calculation of the Humming distance was introduced, but the number of remaining gates is reduced compared to \cite{trugenberger2001}, leading to a less deep circuit, which is significant for NISQ devices.
+The number of controlled gate operations to define the full distance matrix matches the number of controlled gate operations in Ref. \cite{trugenberger2001}, where the original algorithm for calculation of the Hamming distance was introduced, but the number of remaining gates is reduced compared to Ref. \cite{trugenberger2001}, leading to a less deep circuit, which is significant for NISQ devices.


 %In this special case scenario the circuit depth complexity is matching with \cite{schuld2014}.
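The normalized Hamming distance this hunk refers to, bounded between 0 (identical) and 1 (bitwise opposite), has a direct classical counterpart. A plain-Python sketch for binary strings (the manuscript evaluates this quantity on a quantum circuit; this is only the classical reference definition):

```python
def normalized_hamming(x, y):
    """Normalized Hamming distance between equal-length binary strings.

    Returns 0.0 for identical strings and 1.0 for bitwise opposites,
    matching the bound 0 <= d^H <= 1 stated in the diff above.
    """
    if len(x) != len(y):
        raise ValueError("inputs must have equal length")
    # Fraction of positions at which the two strings differ
    return sum(a != b for a, b in zip(x, y)) / len(x)
```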
