
References

Published online by Cambridge University Press:  05 June 2025

Grey Ballard
Affiliation:
Wake Forest University, North Carolina
Tamara G. Kolda
Affiliation:
MathSci.ai
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025



Abdelfattah, A., Costa, T., Dongarra, J., et al. (2021). A set of batched basic linear algebra subprograms and LAPACK routines. ACM Transactions on Mathematical Software 47(3), Article No. 21. doi: 10.1145/3431921.
Abdi, H. and Williams, L. J. (2010). Principal component analysis. WIREs Computational Statistics 2(4), 433–459. doi: 10.1002/wics.101.
Acar, E., Dunlavy, D. M., Kolda, T. G., and Mørup, M. (2010). Scalable tensor factorizations with missing data. In Proceedings of the 2010 SIAM International Conference on Data Mining (SDM’10), pp. 701–712. doi: 10.1137/1.9781611972801.61.
Acar, E., Dunlavy, D. M., and Kolda, T. G. (2011a). A scalable optimization approach for fitting canonical tensor decompositions. Journal of Chemometrics 25(2), 67–86. doi: 10.1002/cem.1335.
Acar, E., Dunlavy, D. M., Kolda, T. G., and Mørup, M. (2011b). Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems 106(1), 41–56. doi: 10.1016/j.chemolab.2010.08.004.
Acar, E., Papalexakis, E. E., Gürdeniz, G., et al. (2014). Structure-revealing data fusion. BMC Bioinformatics 15(1). doi: 10.1186/1471-2105-15-239.
Ahmadi-Asl, S., Abukhovich, S., Asante-Mensah, M. G., et al. (2021). Randomized algorithms for computation of Tucker decomposition and higher order SVD (HOSVD). IEEE Access 9, 28684–28706. doi: 10.1109/access.2021.3058103.
Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M., and Telgarsky, M. (2014). Tensor decompositions for learning latent variable models. Journal of Machine Learning Research 15(1), 2773–2832. URL: http://jmlr.org/papers/v15/anandkumar14b.html.
Anderson, E., Bai, Z., Bischof, C., et al. (1999). LAPACK Users' Guide. 3rd ed. Philadelphia: SIAM. doi: 10.1137/1.9780898719604.
Atkinson, M. D. and Lloyd, S. (1983). The ranks of m × n × (mn − 2) tensors. SIAM Journal on Computing 12(4), 611–615. doi: 10.1137/0212041.
Atkinson, M. D. and Stephens, N. M. (1979). On the maximal multiplicative complexity of a family of bilinear forms. Linear Algebra and its Applications 27, 1–8. doi: 10.1016/0024-3795(79)90026-0.
Austin, W., Ballard, G., and Kolda, T. G. (2016). Parallel tensor compression for large-scale scientific data. In Proceedings of the 30th IEEE International Parallel and Distributed Processing Symposium (IPDPS’16), pp. 912–922. doi: 10.1109/IPDPS.2016.67.
Bader, B. W. and Kolda, T. G. (2007). Efficient MATLAB computations with sparse and factored tensors. SIAM Journal on Scientific Computing 30(1), 205–231. doi: 10.1137/060676489.
Bader, B. W., Kolda, T. G., et al. (2023). MATLAB Tensor Toolbox, Version 3.6. URL: www.tensortoolbox.org (accessed July 29, 2024).
Ballard, G. (2024). Decathlon Matrix Data. URL: https://gitlab.com/tensors/matrix_data_decathlon (accessed July 29, 2024).
Ballard, G., Kolda, T. G., and Plantenga, T. (2011). Efficiently computing tensor eigenvalues on a GPU. In Proceedings of the 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW’11), pp. 1340–1348. doi: 10.1109/IPDPS.2011.287.
Ballard, G., Ikenmeyer, C., Landsberg, J. M., and Ryder, N. (2018). The geometry of rank decompositions of matrix multiplication II: 3 × 3 matrices. Journal of Pure and Applied Algebra 223(8), 3205–3224. doi: 10.1016/j.jpaa.2018.10.014.
Ballard, G., Klinvex, A., and Kolda, T. G. (2020). TuckerMPI: a parallel C++/MPI software package for large-scale data compression via the Tucker tensor decomposition. ACM Transactions on Mathematical Software 46(2), Article No. 13. doi: 10.1145/3378445.
Ballard, G., Kolda, T. G., and Lindstrom, P. (2022). Miranda Turbulent Flow Dataset. URL: https://gitlab.com/tensors/tensor_data_miranda_sim (accessed July 29, 2024).
Basu, A., Harris, I. R., Hjort, N. L., and Jones, M. C. (1998). Robust and efficient estimation by minimising a density power divergence. Biometrika 85(3), 549–559. doi: 10.1093/biomet/85.3.549.
Battaglino, C., Ballard, G., and Kolda, T. G. (2018). A practical randomized CP tensor decomposition. SIAM Journal on Matrix Analysis and Applications 39(2), 876–901. doi: 10.1137/17M1112303.
Beltrán, C., Breiding, P., and Vannieuwenhoven, N. (2019). Pencil-based algorithms for tensor rank decomposition are not stable. SIAM Journal on Matrix Analysis and Applications 40(2), 739–773. doi: 10.1137/18m1200531.
Benson, A. R. and Ballard, G. (2015). A framework for practical parallel fast matrix multiplication. In Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP’15), pp. 42–53. doi: 10.1145/2688500.2688513.
Bergqvist, G. (2013). Exact probabilities for typical ranks of 2 × 2 × 2 and 3 × 3 × 2 tensors. Linear Algebra and its Applications 438(2), 663–667. doi: 10.1016/j.laa.2011.02.041.
Bergqvist, G. and Forrester, P. (2011). Rank probabilities for real random N × N × 2 tensors. Electronic Communications in Probability 16. doi: 10.1214/ecp.v16-1655.
Bertsekas, D. P. (2016). Nonlinear Programming. 3rd ed. Belmont, MA: Athena Scientific.
Beylkin, G. and Mohlenkamp, M. J. (2002). Numerical operator calculus in higher dimensions. Proceedings of the National Academy of Sciences 99(16), 10246–10251. doi: 10.1073/pnas.112329799.
Beylkin, G. and Mohlenkamp, M. J. (2005). Algorithms for numerical analysis in high dimensions. SIAM Journal on Scientific Computing 26(6), 2133–2159. doi: 10.1137/040604959.
Bini, D. (1980). Relations between exact and approximate bilinear algorithms: applications. CALCOLO 17(1), 87–97. doi: 10.1007/BF02575865.
Bini, D., Capovani, M., Lotti, G., and Romani, F. (1979). O(n^2.7799) complexity for n × n approximate matrix multiplication. Information Processing Letters 8(5), 234–235. doi: 10.1016/0020-0190(79)90113-3.
Bini, D., Lotti, G., and Romani, F. (1980). Approximate solutions for the bilinear form computational problem. SIAM Journal on Computing 9(4), 692–697. doi: 10.1137/0209053.
Blackford, L. S., Demmel, J., Dongarra, J., et al. (2002). An updated set of Basic Linear Algebra Subprograms (BLAS). ACM Transactions on Mathematical Software 28(2). doi: 10.1145/567806.567807.
Bläser, M. (2003). On the complexity of the multiplication of matrices in small formats. Journal of Complexity 19(1), 43–60. doi: 10.1016/S0885-064X(02)00007-9.
Brachat, J., Comon, P., Mourrain, B., and Tsigaridas, E. (2010). Symmetric tensor decomposition. Linear Algebra and its Applications 433(11–12), 1851–1872. doi: 10.1016/j.laa.2010.06.046.
Brent, R. P. (1970). Algorithms for Matrix Multiplication. Tech. rep. STAN-CS-70-157. Stanford University, Department of Computer Science. URL: http://i.stanford.edu/pub/cstr/reports/cs/tr/70/157/CS-TR-70-157.pdf (accessed July 29, 2024).
Bro, R. and Andersson, C. A. (1998). Improving the speed of multi-way algorithms: Part II. Compression. Chemometrics and Intelligent Laboratory Systems 42(1–2), 105–113. doi: 10.1016/S0169-7439(98)00011-2.
Bro, R. and De Jong, S. (1997). A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics 11(5), 393–401. doi: 10.1002/(SICI)1099-128X(199709/10)11:5<393::AID-CEM483>3.0.CO;2-L.
Bro, R. and Kiers, H. A. L. (2003). A new efficient method for determining the number of components in PARAFAC models. Journal of Chemometrics 17(5), 274–286. doi: 10.1002/cem.801.
Bro, R., Acar, E., and Kolda, T. G. (2008). Resolving the sign ambiguity in the singular value decomposition. Journal of Chemometrics 22(2), 135–140. doi: 10.1002/cem.1122.
Bro, R., Harshman, R. A., Sidiropoulos, N. D., and Lundy, M. E. (2009). Modeling multi-way data with linearly dependent loadings. Journal of Chemometrics 23(7–8), 324–340. doi: 10.1002/cem.1206.
Bro, R., Leardi, R., and Johnsen, L. G. (2013). Solving the sign indeterminacy for multiway models. Journal of Chemometrics 27(3–4), 70–75. doi: 10.1002/cem.2493.
Brookes, M. (2020). The Matrix Reference Manual, Calculus Section. URL: www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html (accessed July 29, 2024).
Buluç, A. and Gilbert, J. R. (2008). On the representation and multiplication of hypersparse matrices. In IEEE International Symposium on Parallel and Distributed Processing (IPDPS’08). doi: 10.1109/ipdps.2008.4536313.
Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing 16(5), 1190–1208. doi: 10.1137/0916069.
Cabot, W. H. and Cook, A. W. (2006). Reynolds number effects on Rayleigh–Taylor instability with possible implications for type Ia supernovae. Nature Physics 2(8), 562–568. doi: 10.1038/nphys361.
Carroll, J. D. and Chang, J. J. (1970). Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart–Young” decomposition. Psychometrika 35, 283–319. doi: 10.1007/BF02310791.
Carroll, J. D., Pruzansky, S., and Kruskal, J. B. (1980). CANDELINC: a general approach to multidimensional analysis of many-way arrays with linear constraints on parameters. Psychometrika 45(1), 3–24. doi: 10.1007/BF02293596.
Cartwright, D. and Sturmfels, B. (2013). The number of eigenvalues of a tensor. Linear Algebra and its Applications 438(2), 942–952. doi: 10.1016/j.laa.2011.05.040.
Cattell, R. B. (1944). Parallel proportional profiles and other principles for determining the choice of factors by rotation. Psychometrika 9(4), 267–283. doi: 10.1007/BF02288739.
Cattell, R. B. (1952). The three basic factor-analytic research designs: their interrelations and derivatives. Psychological Bulletin 49, 499–520.
Chang, K. C., Pearson, K., and Zhang, T. (2009). On eigenvalue problems of real symmetric tensors. Journal of Mathematical Analysis and Applications 350(1), 416–422. doi: 10.1016/j.jmaa.2008.09.067.
Chen, D. and Plemmons, R. J. (2009). Nonnegativity constraints in numerical analysis. In The Birth of Numerical Analysis. Singapore: World Scientific, pp. 109–139. doi: 10.1142/9789812836267_0008.
Cheng, D., Peng, R., Perros, I., and Liu, Y. (2016). SPALS: fast alternating least squares via implicit leverage scores sampling. In Advances in Neural Information Processing Systems (NeurIPS'16). URL: https://proceedings.neurips.cc/paper_files/paper/2016/file/f4f6dce2f3a0f9dada0c2b5b66452017-Paper.pdf.
Chi, E. C. and Kolda, T. G. (2012). On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications 33(4), 1272–1299. doi: 10.1137/110859063.
Choulakian, V. (2010). Some numerical results on the rank of generic three-way arrays over R. SIAM Journal on Matrix Analysis and Applications 31(4), 1541–1551. doi: 10.1137/08073531X.
Cichocki, A. and Amari, S.-I. (2010). Families of alpha- beta- and gamma-divergences: flexible and robust measures of similarities. Entropy 12(6), 1532–1568. doi: 10.3390/e12061532.
Cichocki, A., Zdunek, R., Choi, S., Plemmons, R., and Amari, S.-I. (2007). Non-negative tensor factorization using alpha and beta divergences. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP’07). doi: 10.1109/ICASSP.2007.367106.
Colley, S. J. (2006). Vector Calculus. 3rd ed. Hoboken: Prentice Hall.
Cui, C.-F., Dai, Y.-H., and Nie, J. (2014). All real eigenvalues of symmetric tensors. SIAM Journal on Matrix Analysis and Applications 35(4), 1582–1601. doi: 10.1137/140962292.
De Lathauwer, L. (2008a). Decompositions of a higher-order tensor in block terms – part I: lemmas for partitioned matrices. SIAM Journal on Matrix Analysis and Applications 30(3), 1022–1032. doi: 10.1137/060661685.
De Lathauwer, L. (2008b). Decompositions of a higher-order tensor in block terms – part II: definitions and uniqueness. SIAM Journal on Matrix Analysis and Applications 30(3), 1033–1066. doi: 10.1137/070690729.
De Lathauwer, L. and Nion, D. (2008). Decompositions of a higher-order tensor in block terms – part III: alternating least squares algorithms. SIAM Journal on Matrix Analysis and Applications 30(3), 1067–1083. doi: 10.1137/070690730.
De Lathauwer, L., De Moor, B., and Vandewalle, J. (2000a). A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications 21(4), 1253–1278. doi: 10.1137/S0895479896305696.
De Lathauwer, L., De Moor, B., and Vandewalle, J. (2000b). On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications 21(4), 1324–1342. doi: 10.1137/S0895479898346995.
de Silva, V. and Lim, L.-H. (2008). Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications 30(3), 1084–1127. doi: 10.1137/06066518X.
Dembo, R. S. and Steihaug, T. (1983). Truncated-Newton algorithms for large-scale unconstrained optimization. Mathematical Programming 26(2), 190–212. doi: 10.1007/BF02592055.
Demmel, J. (1997). Applied Numerical Linear Algebra. Philadelphia: SIAM.
Domanov, I. and De Lathauwer, L. (2014). Canonical polyadic decomposition of third-order tensors: reduction to generalized eigenvalue decomposition. SIAM Journal on Matrix Analysis and Applications 35(2), 636–660. doi: 10.1137/130916084.
Dunlavy, D. M., Johnson, N., et al. (2022). pyttb: Python Tensor Toolbox. URL: https://github.com/sandialabs/pyttb (accessed June 22, 2023).
Eckart, C. and Young, G. (1936). The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218. doi: 10.1007/BF02288367.
Edmonds, J. and Karp, R. M. (1972). Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM 19(2), 248–264. doi: 10.1145/321694.321699.
Elad, M. (2010). Sparse and Redundant Representations. New York: Springer.
Eldén, L. and Savas, B. (2009). A Newton–Grassmann method for computing the best multilinear rank-(r1, r2, r3) approximation of a tensor. SIAM Journal on Matrix Analysis and Applications 31(2), 248–271. doi: 10.1137/070688316.
Eswar, S., Hayashi, K., Ballard, G., et al. (2021). PLANC: parallel low-rank approximation with nonnegativity constraints. ACM Transactions on Mathematical Software 47(3). doi: 10.1145/3432185.
Evert, E., Vandecappelle, M., and De Lathauwer, L. (2022). Canonical polyadic decomposition via the generalized Schur decomposition. IEEE Signal Processing Letters 29, 937–941. doi: 10.1109/lsp.2022.3156870.
Fackler, P. L. (2019). Algorithm 993: efficient computation with Kronecker products. ACM Transactions on Mathematical Software 45(2), Article No. 22. doi: 10.1145/3291041.
Fawzi, A., Balog, M., Huang, A., et al. (2022). Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 610, 47–53. doi: 10.1038/s41586-022-05172-4.
Févotte, C. and Idier, J. (2011). Algorithms for nonnegative matrix factorization with the β-divergence. Neural Computation 23(9), 2421–2456. doi: 10.1162/NECO_a_00168.
Friedland, S. (2012). On the generic and typical ranks of 3-tensors. Linear Algebra and its Applications 436(3), 478–497. doi: 10.1016/j.laa.2011.05.008.
Friedlander, M. P. and Hatz, K. (2008). Computing nonnegative tensor factorizations. Optimization Methods and Software 23(4), 631–647. doi: 10.1080/10556780801996244.
Gillis, N. (2021). Nonnegative Matrix Factorization. Philadelphia: SIAM. doi: 10.1137/1.9781611976410.
Golub, G. and Kahan, W. (1965). Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics Series B: Numerical Analysis 2(2), 205–224. doi: 10.1137/0702016.
Golub, G. H. and Van Loan, C. F. (2013). Matrix Computations. 4th ed. Baltimore: Johns Hopkins University Press.
Grasedyck, L. (2010). Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications 31(4), 2029–2054. doi: 10.1137/090764189.
Grasedyck, L., Kressner, D., and Tobler, C. (2013). A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36(1), 53–78. doi: 10.1002/gamm.201310004.
Gu, M. (2015). Subspace iteration randomization and singular value problems. SIAM Journal on Scientific Computing 37(3), A1139–A1173. doi: 10.1137/130938700.
Hackbusch, W. (2014). Numerical tensor calculus. Acta Numerica 23, 651–742. doi: 10.1017/S0962492914000087.
Hackbusch, W. (2019). Tensor Spaces and Numerical Tensor Calculus. 2nd ed. Cham: Springer. doi: 10.1007/978-3-030-35554-8.
Hackbusch, W. and Kühn, S. (2009). A new scheme for the tensor representation. Journal of Fourier Analysis and Applications 15(5), 706–722. doi: 10.1007/s00041-009-9094-9.
Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review 53(2), 217–288. doi: 10.1137/090771806.
Hansen, S., Plantenga, T., and Kolda, T. G. (2015). Newton-based optimization for Kullback–Leibler nonnegative tensor factorizations. Optimization Methods and Software 30(5), 1002–1029. doi: 10.1080/10556788.2015.1009977.
Harshman, R. A. (1970). Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multi-modal factor analysis. UCLA Working Papers in Phonetics 16, 1–84. URL: www.psychology.uwo.ca/faculty/harshman/wpppfac0.pdf (accessed July 29, 2024).
Harshman, R. A. (1972). Determination and proof of minimum uniqueness conditions for PARAFAC1. UCLA Working Papers in Phonetics 22, 111–117. URL: www.psychology.uwo.ca/faculty/harshman/wpppfac1.pdf (accessed July 29, 2024).
Håstad, J. (1990). Tensor rank is NP-complete. Journal of Algorithms 11(4), 644–654. doi: 10.1016/0196-6774(90)90014-6.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning. 2nd ed. New York: Springer. doi: 10.1007/978-0-387-84858-7.
Helal, A. E., Laukemann, J., Checconi, F., et al. (2021). ALTO: adaptive linearized storage of sparse tensors. In Proceedings of the 35th ACM International Conference on Supercomputing (ICS’21), pp. 404–416. doi: 10.1145/3447818.3461703.
Higham, N. J. (1992). Stability of a method for multiplying complex matrices with three real matrix multiplications. SIAM Journal on Matrix Analysis and Applications 13(3), 681–687. doi: 10.1137/0613043.
Higham, N. J. (2002). Accuracy and Stability of Numerical Algorithms. 2nd ed. Philadelphia: SIAM.
Hillar, C. J. and Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM 60(6), 1–39. doi: 10.1145/2512329.
Hitchcock, F. L. (1927). The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics 6(1), 164–189. doi: 10.1002/sapm192761164.
Hong, D., Kolda, T. G., and Duersch, J. A. (2020). Generalized canonical polyadic tensor decomposition. SIAM Review 62(1), 133–163. doi: 10.1137/18M1203626.
Hopcroft, J. E. and Kerr, L. R. (1971). On minimizing the number of multiplications necessary for matrix multiplication. SIAM Journal on Applied Mathematics 20(1), 30–36. doi: 10.1137/0120004.
Hopcroft, J. and Musinski, J. (1973). Duality applied to the complexity of matrix multiplication and other bilinear forms. SIAM Journal on Computing 2(3), 159–173. doi: 10.1137/0202013.
Horn, R. A. and Johnson, C. R. (1985). Matrix Analysis. Cambridge: Cambridge University Press.
Horn, R. A. and Johnson, C. R. (1991). Topics in Matrix Analysis. Cambridge: Cambridge University Press.
Horn, R. A. and Yang, Z. (2020). Rank of a Hadamard product. Linear Algebra and its Applications 591, 87–98. doi: 10.1016/j.laa.2020.01.005.
Huang, J., Smith, T. M., Henry, G. M., and van de Geijn, R. A. (2016). Strassen's algorithm reloaded. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC’16). doi: 10.5555/3014904.3014983.
Huber, P. J. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics 35(1), 73–101. doi: 10.1214/aoms/1177703732.
JáJá, J. (1979). Optimal evaluation of pairs of bilinear forms. SIAM Journal on Computing 8(3), 443–462. doi: 10.1137/0208037.
Jin, R., Kolda, T. G., and Ward, R. (2020). Faster Johnson–Lindenstrauss transforms via Kronecker products. Information and Inference: A Journal of the IMA. doi: 10.1093/imaiai/iaaa028.
Johnson, R. W. and McLoughlin, A. M. (1986). Noncommutative bilinear algorithms for 3 × 3 matrix multiplication. SIAM Journal on Computing 15(2), 595–603. doi: 10.1137/0215043.
Jolliffe, I. T. (2002). Principal Component Analysis. 2nd ed. Berlin: Springer-Verlag. doi: 10.1007/b98835.
Kapteyn, A., Neudecker, H., and Wansbeek, T. (1986). An approach to n-mode components analysis. Psychometrika 51(2), 269–275. doi: 10.1007/BF02293984.
Karstadt, E. and Schwartz, O. (2020). Matrix multiplication, a little faster. Journal of the ACM 67(1). doi: 10.1145/3364504.
Kauers, M. and Moosbauer, J. (2023). Flip graphs for matrix multiplication. In Proceedings of the 2023 International Symposium on Symbolic and Algebraic Computation. doi: 10.1145/3597066.3597120.
Kaya, O. and Robert, Y. (2019). Computing dense tensor decompositions with optimal dimension trees. Algorithmica 81, 2092–2121. doi: 10.1007/s00453-018-0525-3.
Kaya, O. and Uçar, B. (2015). Scalable sparse tensor decompositions in distributed memory systems. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC’15). doi: 10.1145/2807591.2807624.
Kaya, O. and Uçar, B. (2016). High-performance parallel algorithms for the Tucker decomposition of higher order sparse tensors. In 45th International Conference on Parallel Processing (ICPP’16). doi: 10.1109/ICPP.2016.19.
Kiers, H. A. L. (1997). Weighted least squares fitting using ordinary least squares algorithms. Psychometrika 62(2), 251–266. doi: 10.1007/BF02295279.
Kiers, H. A. L. (2000). Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics 14(3), 105–122. doi: 10.1002/1099-128X(200005/06)14:3<105::AID-CEM582>3.0.CO;2-I.
Kilmer, M. E. and Martin, C. D. (2011). Factorization strategies for third-order tensors. Linear Algebra and its Applications 435(3), 641–658. doi: 10.1016/j.laa.2010.09.020.
Kilmer, M. E., Horesh, L., Avron, H., and Newman, E. (2021). Tensor–tensor algebra for optimal representation and compression of multiway data. Proceedings of the National Academy of Sciences 118(28), e2015851118. doi: 10.1073/pnas.2015851118.
Kim, J., He, Y., and Park, H. (2014). Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework. Journal of Global Optimization 58(2), 285–319. doi: 10.1007/s10898-013-0035-4.
Kingma, D. P. and Ba, J. (2015). Adam: a method for stochastic optimization. Published as a conference paper at the 3rd International Conference on Learning Representations (ICLR’15), San Diego. arXiv: 1412.6980v9.
Kofidis, E. and Regalia, P. A. (2002). On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM Journal on Matrix Analysis and Applications 23(3), 863–884. doi: 10.1137/S0895479801387413.
Kolda, T. G. (2001). Orthogonal tensor decompositions. SIAM Journal on Matrix Analysis and Applications 23(1), 243–255. doi: 10.1137/S0895479800368354.
Kolda, T. G. (2003). A counterexample to the possibility of an extension of the Eckart–Young low-rank approximation theorem for the orthogonal rank tensor decomposition. SIAM Journal on Matrix Analysis and Applications 24(3), 762–767. doi: 10.1137/S0895479801394465.
Kolda, T. G. (2015a). Numerical optimization for symmetric tensor decomposition. Mathematical Programming B 151(1), 225–248. doi: 10.1007/s10107-015-0895-0.
Kolda, T. G. (2015b). Symmetric Orthogonal Tensor Decomposition Is Trivial. arXiv: 1503.01375.
Kolda, T. G. (2021a). EEM Tensor Data. URL: https://gitlab.com/tensors/tensor_data_eem (accessed July 29, 2024).
Kolda, T. G. (2021b). Will the Real Jennrich’s Algorithm Please Stand Up? URL: www.mathsci.ai/post/jennrich/ (accessed July 29, 2024).
Kolda, T. G. (2022a). Monkey BMI Tensor Dataset. URL: https://gitlab.com/tensors/tensor_data_monkey_bmi (accessed July 29, 2024).
Kolda, T. G. (2022b). New Chicago Crime Tensor Dataset. URL: https://gitlab.com/tensors/tensor_data_chicago_crime (accessed July 29, 2024).
Kolda, T.G. and Bader, B.W. (2009). Tensor decompositions and applications. SIAM Review 51(3), 455500. doi: 10.1137/07070111X.CrossRefGoogle Scholar
Kolda, T. and Duersch, J. (2017). Sparse Versus Scarce. URL: www.mathsci.ai/post/sparse-versus-scarce/ (accessed July 29, 2024).Google Scholar
Kolda, T.G. and Mayo, J.R. (2011). Shifted power method for computing tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications 32(4), 10951124. doi: 10.1137/100801482.CrossRefGoogle Scholar
Kolda, T. G. and Mayo, J. R. (2014). An adaptive shifted power method for computing generalized tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications 35(4), 15631581. doi: 10.1137/140951758.CrossRefGoogle Scholar
Kolda, T.G. and Sun, J. (2008). Scalable tensor decompositions for multi-aspect data mining. In Proceedings of the 8th IEEE International Conference on Data Mining (ICDM’08), pp. 363372. doi: 10.1109/ICDM.2008.89.CrossRefGoogle Scholar
Kolda, T. G., Bader, B. W., and Kenny, J. P (2005). Higher-order web link analysis using multilinear algebra. In Proceedings of the 5th IEEE International Conference on Data Mining (ICDM'05), pp. 242249. doi: 10.1109/ICDM.2005.77.CrossRefGoogle Scholar
Kroonenberg, P. M. and De Leeuw, J. (1980). Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika 45(1), 6997. doi: 10.10.07/BF02293599.CrossRefGoogle Scholar
Kruskal, J. B. (1977). Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications 18(2), 95138. doi: 10.1016/0024-3795(77)90069-6.CrossRefGoogle Scholar
Kruskal, J.B. (1983). “Statement of some current results about three-way arrays.” Unpublished manuscript, AT&T Bell Laboratories, Murray Hill, NJ. URL: http://three-mode.leidenuniv.nl/pdf/k/kruskal1983.pdf.Google Scholar
Kruskal, J.B. (1989). Rank, decomposition, and uniqueness for 3-way and N-way arrays. In Multiway Data Analysis. Ed. by Coppi, R. and Bolasco, S.. Amsterdam: North-Holland, pp. 718. URL: www.psychology.uwo.ca/faculty/harshman/jbkrank.pdf (accessed July 29, 2024).Google Scholar
Laderman, J.D. (1976). A noncommutative algorithm for multiplying 3 × 3 matrices using 23 multiplications. Bulletin of the American Mathematical Society 82(1), 126128. URL: www.ams.org/bull/1976-82-01/S0002-9904-1976-13988-2/S0002-9904-1976-13988-2.pdf (accessed July 29, 2024).CrossRefGoogle Scholar
Landsberg, J.M. (2006). The border rank of the multiplication of 2 × 2 matrices is seven. Journal of the American Mathematical Society 19, 447459. doi: 10.1090/S0894-0347-05- 00506-0.CrossRefGoogle Scholar
Larsen, B. W. and Kolda, T. G. (2022). Practical leverage-based sampling for low-rank tensor decomposition. SIAM Journal on Matrix Analysis and Applications 43(3), 14881517. doi: 10.1137/ 21m1441754.CrossRefGoogle Scholar
Lawson, C. L. and Hanson, R. J. (1974). Solving Least Squares Problems. Hoboken: Prentice-Hall.Google Scholar
Lee, D.D. and Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788791. doi: 10.1038/44565.CrossRefGoogle Scholar
Lee, D. D. and Seung, H. S. (2001). Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems (NIPS'00). Vol. 13, pp. 556562. URL: https://proceedings.neurips.cc/paper_files/paper/2000/file/f9d1152547c0b de01830b7e8bd60024c-Paper.pdf (accessed July 29, 2024).Google Scholar
Leurgans, S. E., Ross, R. T., and Abel, R. B. (1993). A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications 14(4), 1064–1083. doi: 10.1137/0614071.
Li, J., Battaglino, C., Perros, I., Sun, J., and Vuduc, R. (2015). An input-adaptive and in-place approach to dense tensor-times-matrix multiply. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC'15). doi: 10.1145/2807591.2807671.
Li, J., Choi, J., Perros, I., Sun, J., and Vuduc, R. (2017). Model-driven sparse CP decomposition for higher-order tensors. In IEEE International Parallel and Distributed Processing Symposium (IPDPS'17), pp. 1048–1057. doi: 10.1109/ipdps.2017.80.
Li, J., Sun, J., and Vuduc, R. (2018). HiCOO: hierarchical storage of sparse tensors. In International Conference for High Performance Computing, Networking, Storage and Analysis (SC'18), pp. 238–252. doi: 10.1109/SC.2018.00022.
Lim, L.-H. (2005). Singular values and eigenvalues of tensors: a variational approach. In Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP'05), pp. 129–132. doi: 10.1109/CAMAP.2005.1574201.
Lipshitz, B., Ballard, G., Demmel, J., and Schwartz, O. (2012). Communication-avoiding parallel Strassen: implementation and performance. In International Conference for High Performance Computing, Networking, Storage and Analysis (SC'12). doi: 10.1109/sc.2012.33.
Liu, X. and Sidiropoulos, N. D. (2001). Cramér–Rao lower bounds for low-rank decomposition of multidimensional arrays. IEEE Transactions on Signal Processing 49(9), 2074–2086. doi: 10.1109/78.942635.
Magnus, J. R. and Neudecker, H. (1979). The commutation matrix: some properties and applications. The Annals of Statistics 7(2), 381–394. doi: 10.1214/aos/1176344621.
Malik, O. A. and Becker, S. (2018). Low-rank Tucker decomposition of large tensors using TensorSketch. In Advances in Neural Information Processing Systems (NeurIPS'18), pp. 10116–10126. URL: https://proceedings.neurips.cc/paper_files/paper/2018/file/45a766fa266ea2ebeb6680fa139d2a3d-Paper.pdf (accessed July 29, 2024).
Malik, O. A. and Becker, S. (2020). Guarantees for the Kronecker fast Johnson–Lindenstrauss transform using a coherence and sampling argument. Linear Algebra and its Applications 602, 120–137. doi: 10.1016/j.laa.2020.05.004.
Mihoko, M. and Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Computation 14(8), 1859–1886. doi: 10.1162/089976602760128045.
Minster, R., Viviano, I., Liu, X., and Ballard, G. (2023). CP decomposition for tensors via alternating least squares with QR decomposition. Numerical Linear Algebra with Applications, e2511. doi: 10.1002/nla.2511.
Minster, R., Li, Z., and Ballard, G. (2024). Parallel randomized Tucker decomposition algorithms. SIAM Journal on Scientific Computing 46(2), A1186–A1213. doi: 10.1137/22m1540363.
Möcks, J. (1988). Topographic components model for event-related potentials and some biophysical considerations. IEEE Transactions on Biomedical Engineering 35(6), 482–484. doi: 10.1109/10.2119.
Mørup, M., Hansen, L. K., and Arnfred, S. M. (2008). Algorithms for sparse nonnegative Tucker decompositions. Neural Computation 20(8), 2112–2131. doi: 10.1162/neco.2008.11-06-407.
Nesterov, Y. (2012). Gradient methods for minimizing composite functions. Mathematical Programming 140(1), 125–161. doi: 10.1007/s10107-012-0629-5.
Nocedal, J. (1980). Updating quasi-Newton matrices with limited storage. Mathematics of Computation 35(151), 773–782. doi: 10.2307/2006193.
Nocedal, J. and Wright, S. J. (2006). Numerical Optimization. 2nd ed. New York: Springer. doi: 10.1007/978-0-387-40065-5.
Oseledets, I. V. (2011). Tensor-train decomposition. SIAM Journal on Scientific Computing 33(5), 2295–2317. doi: 10.1137/090752286.
Oseledets, I. V. and Tyrtyshnikov, E. E. (2009). Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM Journal on Scientific Computing 31(5), 3744–3759. doi: 10.1137/090748330.
Oseledets, I. and Tyrtyshnikov, E. (2010). TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications 432(1), 70–88. doi: 10.1016/j.laa.2009.07.024.
Paatero, P. (1997). A weighted non-negative least squares algorithm for three-way “PARAFAC” factor analysis. Chemometrics and Intelligent Laboratory Systems 38(2), 223–242. doi: 10.1016/S0169-7439(97)00031-2.
Paatero, P. (1999). The multilinear engine: a table-driven, least squares program for solving multilinear problems, including the n-way parallel factor analysis model. Journal of Computational and Graphical Statistics 8(4), 854–888. doi: 10.1080/10618600.1999.10474853.
Paatero, P. (2000). Construction and analysis of degenerate PARAFAC models. Journal of Chemometrics 14(3), 285–299. doi: 10.1002/1099-128X(200005/06)14:3<285::AID-CEM584>3.0.CO;2-1.
Paatero, P. and Tapper, U. (1994). Positive matrix factorization: a non-negative factor model with optimal utilization of error estimates of data values. Environmetrics 5(2), 111–126. doi: 10.1002/env.3170050203.
Paige, C. C. and Saunders, M. A. (1982). LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Transactions on Mathematical Software 8(1), 43–71. doi: 10.1145/355984.355989.
Phan, A. H. and Cichocki, A. (2008). Fast and efficient algorithms for nonnegative Tucker decomposition. In Advances in Neural Networks (ISNN'08). New York: Springer, pp. 772–782. doi: 10.1007/978-3-540-87734-9_88.
Phan, A. H. and Cichocki, A. (2011). Extended HALS algorithm for nonnegative Tucker decomposition and its applications for multiway analysis and classification. Neurocomputing 74(11), 1956–1969. doi: 10.1016/j.neucom.2010.06.031.
Phan, A. H., Tichavský, P., and Cichocki, A. (2011). Fast damped Gauss–Newton algorithm for sparse and nonnegative tensor factorization. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'11), pp. 1988–1991. doi: 10.1109/ICASSP.2011.5946900.
Phan, A.-H., Tichavský, P., and Cichocki, A. (2013a). Fast alternating LS algorithms for high order CANDECOMP/PARAFAC tensor factorizations. IEEE Transactions on Signal Processing 61(19), 4834–4846. doi: 10.1109/TSP.2013.2269903.
Phan, A.-H., Tichavský, P., and Cichocki, A. (2013b). Low complexity damped Gauss–Newton algorithms for CANDECOMP/PARAFAC. SIAM Journal on Matrix Analysis and Applications 34(1), 126–147. doi: 10.1137/100808034.
Phipps, E. and Kolda, T. G. (2019). Software for sparse tensor decomposition on emerging computing architectures. SIAM Journal on Scientific Computing 41(3), C269–C290. doi: 10.1137/18M1210691.
Qi, L. (2005). Eigenvalues of a real supersymmetric tensor. Journal of Symbolic Computation 40, 1302–1324. doi: 10.1016/j.jsc.2005.05.007.
Regalia, P. A. and Kofidis, E. (2003). Monotonic convergence of fixed-point algorithms for ICA. IEEE Transactions on Neural Networks 14(4), 943–949. doi: 10.1109/TNN.2003.813843.
Robbins, H. and Monro, S. (1951). A stochastic approximation method. The Annals of Mathematical Statistics 22(3), 400–407. URL: www.jstor.org/stable/2236626 (accessed July 29, 2024).
Robeva, E. (2014). Orthogonal Decomposition of Symmetric Tensors. arXiv: 1409.6685.
Royer, J.-P., Thirion-Moreau, N., and Comon, P. (2011). Computing the polyadic decomposition of nonnegative third order tensors. Signal Processing 91(9), 2159–2171. doi: 10.1016/j.sigpro.2011.03.006.
Sanchez, E. and Kowalski, B. R. (1990). Tensorial resolution: a direct trilinear decomposition. Journal of Chemometrics 4(1), 29–45. doi: 10.1002/cem.1180040105.
Sedoglavic, A. and Smirnov, A. V. (2021). The tensor rank of 5x5 matrices multiplication is bounded by 98 and its border rank by 89. doi: 10.1145/3452143.3465537.
Shashua, A. and Hazan, T. (2005). Non-negative tensor factorization with applications to statistics and computer vision. In Proceedings of the 22nd International Conference on Machine Learning (ICML'05), pp. 792–799. doi: 10.1145/1102351.1102451.
Sherman, S. and Kolda, T. G. (2020). Estimating higher-order moments using symmetric tensor decomposition. SIAM Journal on Matrix Analysis and Applications 41(3), 1369–1387. doi: 10.1137/19m1299633.
Sidiropoulos, N. D. and Bro, R. (2000). On the uniqueness of multilinear decomposition of N-way arrays. Journal of Chemometrics 14(3), 229–239. doi: 10.1002/1099-128X(200005/06)14:3<229::AID-CEM587>3.0.CO;2-N.
Sidiropoulos, N. D., De Lathauwer, L., Fu, X., et al. (2017). Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing 65(13), 3551–3582. doi: 10.1109/tsp.2017.2690524.
Smilde, A., Bro, R., and Geladi, P. (2004). Multi-Way Analysis: Applications in the Chemical Sciences. Chichester: Wiley.
Smirnov, A. V. (2013). The bilinear complexity and practical algorithms for matrix multiplication. Computational Mathematics and Mathematical Physics 53(12), 1781–1795. doi: 10.1134/S0965542513120129.
Smith, S. and Karypis, G. (2015). Tensor-matrix products with a compressed sparse tensor. In Proceedings of the 5th Workshop on Irregular Applications: Architectures and Algorithms (IA3'15). doi: 10.1145/2833179.2833183.
Sorber, L., Van Barel, M., and De Lathauwer, L. (2013). Optimization-based algorithms for tensor decompositions: canonical polyadic decomposition, decomposition in rank-(Lr,Lr,1) terms, and a new generalization. SIAM Journal on Optimization 23(2), 695–720. doi: 10.1137/120868323.
Sorensen, M. and De Lathauwer, L. (2010). New simultaneous generalized Schur decomposition methods for the computation of the canonical polyadic decomposition. In 2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers. doi: 10.1109/ACSSC.2010.5757456.
Springer, P., Hammond, J. R., and Bientinesi, P. (2017). TTC: a high-performance compiler for tensor transpositions. ACM Transactions on Mathematical Software 44(2), 1–21. doi: 10.1145/3104988.
Stegeman, A. and Sidiropoulos, N. D. (2007). On Kruskal's uniqueness condition for the CANDECOMP/PARAFAC decomposition. Linear Algebra and its Applications 420(2–3), 540–552. doi: 10.1016/j.laa.2006.08.010.
Strang, G. (2016). Introduction to Linear Algebra. 5th ed. Wellesley: Wellesley-Cambridge Press.
Strassen, V. (1969). Gaussian elimination is not optimal. Numerische Mathematik 13(4), 354–356. doi: 10.1007/BF02165411.
Sumi, T., Sakata, T., and Miyazaki, M. (2013). Typical ranks for m × n × (m − 1)n tensors with m ≤ n. Linear Algebra and its Applications 438(2), 953–958. doi: 10.1016/j.laa.2011.08.009.
Sun, Y., Guo, Y., Luo, C., Tropp, J., and Udell, M. (2020). Low-rank Tucker approximation of a tensor from streaming data. SIAM Journal on Mathematics of Data Science 2(4), 1123–1150. doi: 10.1137/19m1257718.
ten Berge, J. M. F. (1991). Kruskal's polynomial for 2 × 2 × 2 arrays and a generalization to 2 × n × n arrays. Psychometrika 56(4), 631–636. doi: 10.1007/BF02294495.
ten Berge, J. (2000a). “The k-rank of a Khatri–Rao product.” Unpublished Note, Heijmans Institute of Psychological Research, University of Groningen, the Netherlands.
ten Berge, J. M. F. (2000b). The typical rank of tall three-way arrays. Psychometrika 65(4), 525–532. doi: 10.1007/BF02296342.
ten Berge, J. M. F. (2004). Partial uniqueness in CANDECOMP/PARAFAC. Journal of Chemometrics 18(1), 12–16. doi: 10.1002/cem.839.
ten Berge, J. M. F. (2011). Simplicity and typical rank results for three-way arrays. Psychometrika 76(1), 3–12. doi: 10.1007/S11336-010-9193-1.
ten Berge, J. M. F. and Kiers, H. A. L. (1999). Simplicity of core arrays in three-way principal component analysis and the typical rank of p × q × 2 arrays. Linear Algebra and its Applications 294(1–3), 169–179. doi: 10.1016/S0024-3795(99)00057-9.
ten Berge, J. M. F. and Sidiropoulos, N. D. (2002). On uniqueness in CANDECOMP/PARAFAC. Psychometrika 67(3), 399–409. doi: 10.1007/BF02294992.
ten Berge, J. M. F. and Stegeman, A. (2006). Symmetry transformations for square sliced three-way arrays, with applications to their typical rank. Linear Algebra and its Applications 418(1), 215–224. doi: 10.1016/j.laa.2006.02.002.
ten Berge, J. M. F. and Tendeiro, J. N. (2009). The link between sufficient conditions by Harshman and by Kruskal for uniqueness in Candecomp/Parafac. Journal of Chemometrics 23(7–8), 321–323. doi: 10.1002/cem.1204.
ten Berge, J. M. F., Kiers, H. A. L., and de Leeuw, J. (1988). Explicit CANDECOMP/PARAFAC solutions for a contrived 2 × 2 × 2 array of rank three. Psychometrika 53(4), 579–583. doi: 10.1007/BF02294409.
ten Berge, J. M. F., Sidiropoulos, N. D., and Rocci, R. (2004). Typical rank and INDSCAL dimensionality for symmetric three-way arrays of order I × 2 × 2 or I × 3 × 3. Linear Algebra and its Applications 388, 363–377. doi: 10.1016/j.laa.2004.03.009.
TensorFlow Team (2022). Working with Sparse Tensors. TensorFlow Guide. URL: www.tensorflow.org/guide/sparse_tensor (accessed July 29, 2024).
Tobler, C. (2012). Low-Rank Tensor Methods for Linear Systems and Eigenvalue Problems. PhD thesis. ETH Zurich. URL: http://sma.epfl.ch/~anchpcommon/students/tobler.pdf (accessed July 29, 2024).
Tomasi, G. and Bro, R. (2005). PARAFAC and missing values. Chemometrics and Intelligent Laboratory Systems 75(2), 163–180. doi: 10.1016/j.chemolab.2004.07.003.
Tomasi, G. and Bro, R. (2006). A comparison of algorithms for fitting the PARAFAC model. Computational Statistics & Data Analysis 50(7), 1700–1734. doi: 10.1016/j.csda.2004.11.013.
Trefethen, L. N. and Bau, D. (1997). Numerical Linear Algebra. Philadelphia: SIAM. doi: 10.1137/1.9780898719574.
Tucker, L. R. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311. doi: 10.1007/BF02289464.
Uschmajew, A. (2010). Well-posedness of convex maximization problems on Stiefel manifolds and orthogonal tensor product approximations. Numerische Mathematik 115(2), 309–331. doi: 10.1007/s00211-009-0276-9.
Uschmajew, A. (2012). Local convergence of the alternating least squares algorithm for canonical tensor approximation. SIAM Journal on Matrix Analysis and Applications 33(2), 639–652. doi: 10.1137/110843587.
Vandecappelle, M., Vervliet, N., and De Lathauwer, L. (2020). A second-order method for fitting the canonical polyadic decomposition with non-least-squares cost. IEEE Transactions on Signal Processing 68, 4454–4465. doi: 10.1109/tsp.2020.3010719.
Vannieuwenhoven, N., Vandebril, R., and Meerbergen, K. (2012). A new truncation strategy for the higher-order singular value decomposition. SIAM Journal on Scientific Computing 34(2), A1027–A1052. doi: 10.1137/110836067.
Vavasis, S. A. (2009). On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization 20(3), 1364–1377. doi: 10.1137/070709967.
Vervliet, N. and De Lathauwer, L. (2016). A randomized block sampling approach to canonical polyadic decomposition of large-scale tensors. IEEE Journal of Selected Topics in Signal Processing 10(2), 284–295. doi: 10.1109/JSTSP.2015.2503260.
Vervliet, N. and De Lathauwer, L. (2019). Numerical optimization-based algorithms for data fusion. In Data Handling in Science and Technology. Amsterdam: Elsevier, pp. 81–128. doi: 10.1016/b978-0-444-63984-4.00004-1.
Vervliet, N., Debals, O., Sorber, L., Van Barel, M., and De Lathauwer, L. (2017). Datasets: Dense, Incomplete, Sparse and Structured. TensorLab User Manual. URL: www.tensorlab.net/doc/data.html#sparse-tensors (accessed July 29, 2024).
Vyas, S., Even-Chen, N., Stavisky, S. D., et al. (2018). Neural population dynamics underlying motor learning transfer. Neuron 97(5). doi: 10.1016/j.neuron.2018.01.040.
Vyas, S., O'Shea, D. J., Ryu, S. I., and Shenoy, K. V. (2020). Causal role of motor preparation during error-driven learning. Neuron 106(2). doi: 10.1016/j.neuron.2020.01.019.
Welling, M. and Weber, M. (2001). Positive tensor factorization. Pattern Recognition Letters 22(12), 1255–1261. doi: 10.1016/S0167-8655(01)00070-8.
Williams, A. H., Kim, T. H., Wang, F., et al. (2018). Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor components analysis. Neuron 98(6), 1099–1115. doi: 10.1016/j.neuron.2018.05.015.
Williams, V. V., Xu, Y., Xu, Z., and Zhou, R. (2024). New bounds for matrix multiplication: from alpha to omega. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 3792–3835. doi: 10.1137/1.9781611977912.134.
Winograd, S. (1971). On multiplication of 2 × 2 matrices. Linear Algebra and its Applications 4(4), 381–388. doi: 10.1016/0024-3795(71)90009-7.
Wright, S. J. and Recht, B. (2022). Optimization for Data Analysis. Cambridge: Cambridge University Press. doi: 10.1017/9781009004282.
Wu, X., Ward, R., and Bottou, L. (2018). WNGrad: Learn the Learning Rate in Gradient Descent. arXiv: 1803.02865v1.
Zhang, Z. and Aeron, S. (2017). Exact tensor completion using t-SVD. IEEE Transactions on Signal Processing 65(6), 1511–1526. doi: 10.1109/tsp.2016.2639466.
Zhao, K., Di, S., Lian, X., et al. (2020). SDRBench: scientific data reduction benchmark for lossy compressors. In 2020 IEEE International Conference on Big Data. doi: 10.1109/bigdata50022.2020.9378449.
Zhao, Q., Zhou, G., Xie, S., Zhang, L., and Cichocki, A. (2016). Tensor Ring Decomposition. arXiv: 1606.05535.
Zhou, G., Cichocki, A., and Xie, S. (2014). Decomposition of Big Tensors with Low Multilinear Rank. arXiv: 1412.1885.
Zhou, S., Vinh, N. X., Bailey, J., Jia, Y., and Davidson, I. (2016). Accelerating online CP decompositions for higher order tensors. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'16). doi: 10.1145/2939672.2939763.
Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J. (1997). Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software 23(4), 550–560. doi: 10.1145/279232.279236.
  • References
  • Grey Ballard, Wake Forest University, North Carolina, Tamara G. Kolda, MathSci.ai
  • Book: Tensor Decompositions for Data Science
  • Online publication: 05 June 2025
  • Chapter DOI: https://doi.org/10.1017/9781009471664.027