jackkuo committed
Commit 554a28b · verified · 1 Parent(s): 38d97d6

Add files using upload-large-folder tool

Files changed (showing 50; the commit contains more changes than shown):
  1. -tAyT4oBgHgl3EQfqfjJ/content/tmp_files/2301.00545v1.pdf.txt +3319 -0
  2. -tAyT4oBgHgl3EQfqfjJ/content/tmp_files/load_file.txt +0 -0
  3. .gitattributes +58 -0
  4. 19E0T4oBgHgl3EQfugE5/content/tmp_files/2301.02605v1.pdf.txt +1611 -0
  5. 19E0T4oBgHgl3EQfugE5/content/tmp_files/load_file.txt +0 -0
  6. 2NFAT4oBgHgl3EQfDRyP/content/tmp_files/2301.08415v1.pdf.txt +1220 -0
  7. 2NFAT4oBgHgl3EQfDRyP/content/tmp_files/load_file.txt +0 -0
  8. 2tE2T4oBgHgl3EQfjAfh/vector_store/index.faiss +3 -0
  9. 39AzT4oBgHgl3EQfD_qi/content/tmp_files/2301.00986v1.pdf.txt +1464 -0
  10. 39AzT4oBgHgl3EQfD_qi/content/tmp_files/load_file.txt +0 -0
  11. 3NFQT4oBgHgl3EQfGjVY/content/tmp_files/2301.13245v1.pdf.txt +1622 -0
  12. 3NFQT4oBgHgl3EQfGjVY/content/tmp_files/load_file.txt +0 -0
  13. 49E1T4oBgHgl3EQf6QXo/content/2301.03522v1.pdf +3 -0
  14. 49E1T4oBgHgl3EQf6QXo/vector_store/index.faiss +3 -0
  15. 49E1T4oBgHgl3EQf6QXo/vector_store/index.pkl +3 -0
  16. 4NAyT4oBgHgl3EQfcPdw/content/tmp_files/2301.00278v1.pdf.txt +685 -0
  17. 4NAyT4oBgHgl3EQfcPdw/content/tmp_files/load_file.txt +0 -0
  18. 5NFJT4oBgHgl3EQfkiwH/content/tmp_files/2301.11579v1.pdf.txt +0 -0
  19. 5NFJT4oBgHgl3EQfkiwH/content/tmp_files/load_file.txt +0 -0
  20. 6dE3T4oBgHgl3EQfRAnz/content/2301.04418v1.pdf +3 -0
  21. 6dE3T4oBgHgl3EQfRAnz/vector_store/index.faiss +3 -0
  22. 6dE3T4oBgHgl3EQfRAnz/vector_store/index.pkl +3 -0
  23. 89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf +3 -0
  24. 89AzT4oBgHgl3EQfFPox/vector_store/index.pkl +3 -0
  25. 8dFAT4oBgHgl3EQfpB03/content/2301.08637v1.pdf +3 -0
  26. 8dFAT4oBgHgl3EQfpB03/vector_store/index.faiss +3 -0
  27. 8dFAT4oBgHgl3EQfpB03/vector_store/index.pkl +3 -0
  28. 9NFST4oBgHgl3EQfaziV/content/tmp_files/2301.13797v1.pdf.txt +1418 -0
  29. 9tAzT4oBgHgl3EQfg_wg/content/tmp_files/2301.01476v1.pdf.txt +1696 -0
  30. 9tAzT4oBgHgl3EQfg_wg/content/tmp_files/load_file.txt +0 -0
  31. A9FJT4oBgHgl3EQfry2f/content/2301.11610v1.pdf +3 -0
  32. A9FJT4oBgHgl3EQfry2f/vector_store/index.pkl +3 -0
  33. AtFLT4oBgHgl3EQfxTCi/content/tmp_files/2301.12167v1.pdf.txt +2577 -0
  34. AtFLT4oBgHgl3EQfxTCi/content/tmp_files/load_file.txt +0 -0
  35. C9E1T4oBgHgl3EQfEANP/content/2301.02884v1.pdf +3 -0
  36. C9E1T4oBgHgl3EQfEANP/vector_store/index.faiss +3 -0
  37. C9E1T4oBgHgl3EQfEANP/vector_store/index.pkl +3 -0
  38. CdE1T4oBgHgl3EQfpwUV/content/2301.03334v1.pdf +3 -0
  39. CdE1T4oBgHgl3EQfpwUV/vector_store/index.pkl +3 -0
  40. CdE5T4oBgHgl3EQfTw8s/vector_store/index.faiss +3 -0
  41. CdE5T4oBgHgl3EQfTw8s/vector_store/index.pkl +3 -0
  42. DdE4T4oBgHgl3EQf6A7h/content/tmp_files/2301.05329v1.pdf.txt +1942 -0
  43. DdE4T4oBgHgl3EQf6A7h/content/tmp_files/load_file.txt +0 -0
  44. EdE2T4oBgHgl3EQfSgfT/content/2301.03794v1.pdf +3 -0
  45. EdE2T4oBgHgl3EQfSgfT/vector_store/index.faiss +3 -0
  46. EdE2T4oBgHgl3EQfSgfT/vector_store/index.pkl +3 -0
  47. EdFRT4oBgHgl3EQfBDfd/vector_store/index.faiss +3 -0
  48. GdE1T4oBgHgl3EQf_Aah/vector_store/index.faiss +3 -0
  49. HNE4T4oBgHgl3EQfHwxv/content/tmp_files/2301.04906v1.pdf.txt +1679 -0
  50. HNE4T4oBgHgl3EQfHwxv/content/tmp_files/load_file.txt +0 -0
-tAyT4oBgHgl3EQfqfjJ/content/tmp_files/2301.00545v1.pdf.txt ADDED
@@ -0,0 +1,3319 @@
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015

Knockoffs-SPR: Clean Sample Selection in Learning with Noisy Labels

Yikai Wang, Yanwei Fu, and Xinwei Sun

Abstract—A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel, theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, the conditions may no longer be satisfied, and some noisy data are falsely selected as clean data. To solve this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) in the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.

Index Terms—Learning with Noisy Labels, Knockoffs Method, Type-Two Error Control.

1 INTRODUCTION
Deep learning has achieved remarkable success on many supervised learning tasks trained with millions of labeled training data. The performance of deep models heavily relies on the quality of label annotation, since neural networks are susceptible to noisy labels and can even easily memorize randomly labeled annotations [1]. Such noisy labels can lead to the degradation of the generalization and robustness of such models. Critically, it is expensive and difficult to obtain precise labels in many real-world scenarios, which poses a realistic challenge for supervised deep models that must learn with noisy data.

There have been many previous efforts to tackle this challenge by making the models robust to noisy data, such as modifying the network architectures [2]–[5] or loss functions [6]–[9]. This paper addresses the challenge by directly selecting clean samples. Inspired by dynamic sample selection methods [9]–[16], we construct a "virtuous" cycle between sample selection and network training: the selected clean samples improve the network training and, on the other hand, the improved network is more powerful at picking up clean data. As this cycle evolves, the performance improves. To establish this cycle well, a key question remains: how to effectively differentiate clean data from noisy ones?
Preliminary. Typical principles in existing works [9]–[16] to differentiate clean data from noisy data include large loss [11], inconsistent prediction [17], and irregular feature representation [18]. The former two principles identify irregular behaviors in the label space, while the last one analyzes the instance representations of the same class in the feature space. In this paper, we propose unifying the label and feature space through the linear relationship

    y_i = x_i^T β + ε,    (1)

between the feature-label pair (x_i ∈ R^p: feature vector; y_i ∈ R^c: one-hot label vector) of data i. We also have the fixed (unknown) coefficient matrix β ∈ R^{p×c} and random noise ε ∈ R^c. Essentially, the linear relationship here is an ideal approximation, as the networks are trained to minimize the divergence between a (soft-max) linear projection of the feature and a one-hot label vector. For a well-trained network, the output prediction of clean data is expected to be as similar to a one-hot vector as possible, while the entropy of the output of noisy data should be large. Thus, if the underlying linear relation is well approximated without the soft-max operation, the corresponding data is likely to be clean. In contrast, the feature-label pair of noisy data may not be approximated well by the linear model.

The simplest way to measure the goodness of the linear model in fitting the feature-label pair is to check the prediction error, or residual, r_i = y_i − x_i^T β̂, where β̂ is the estimate of β. A larger ∥r_i∥ indicates a larger fitting error and thus a higher possibility that instance i is outlier/noisy data. Many methods have been proposed to test whether r_i is non-zero. In particular, we highlight the classical statistical leave-one-out approach [19], which computes the studentized residual as

    t_i = (y_i − x_i^T β̂_{−i}) / ( σ̂_{−i} [ 1 + x_i^T (X_{−i}^T X_{−i})^{−1} x_i ]^{1/2} ),    (2)

[Footnote: Yikai Wang and Yanwei Fu contribute equally. Xinwei Sun is the corresponding author. Yikai Wang, Yanwei Fu and Xinwei Sun are with the School of Data Science, Fudan University. E-mail: {yikaiwang19, yanweifu, sunxinwei}@fudan.edu.cn]

arXiv:2301.00545v1 [cs.LG] 2 Jan 2023
Fig. 1. Knockoffs-SPR runs a cycle between network learning and sample selection, where clean data are selected via the comparison of the mean-shift parameters between its original label and permuted label.
where σ̂_{−i} is the scale estimate and the subscript −i indicates estimates based on the n − 1 observations, leaving out the i-th data point for testing. Equivalently, the linear regression model can be re-formulated to explicitly represent the residual,

    Y = Xβ + γ + ε,    ε_{i,j} ∼ N(0, σ²),    (3)

by introducing a mean-shift parameter γ as in [20], with the features X ∈ R^{n×p} and labels Y ∈ R^{n×c} paired and stacked by rows. Each row γ_i of γ ∈ R^{n×c} represents the prediction residual of the corresponding data point. This formulation has been widely studied in different research topics, including economics [21]–[24], robust regression [20], [25], statistical ranking [26], face recognition [27], semi-supervised few-shot learning [28], [29], and Bayesian preference learning [30], to name a few. The focus of this formulation differs with the specific research task. For example, in the robust regression problem [20], [25], the target is a robust estimate β̂ against the influence of γ. Here, for the problem of learning with noisy labels, we are interested in recovering the zero elements of γ, since these elements correspond to clean data.
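As a concrete illustration of the leave-one-out diagnostic above, the following NumPy sketch computes the studentized residual of Eq. (2) on a scalar-response toy problem and flags an injected outlier; the data sizes and variable names here are our own, not the paper's.

```python
import numpy as np

def studentized_residuals(X, y):
    """Leave-one-out studentized residuals t_i of Eq. (2) for y = X beta + eps."""
    n, p = X.shape
    t = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        X_i, y_i = X[mask], y[mask]                     # drop the i-th observation
        beta = np.linalg.lstsq(X_i, y_i, rcond=None)[0]
        resid = y_i - X_i @ beta
        sigma = np.sqrt(resid @ resid / (n - 1 - p))    # scale estimate sigma_{-i}
        lev = X[i] @ np.linalg.inv(X_i.T @ X_i) @ X[i]  # leverage term
        t[i] = (y[i] - X[i] @ beta) / (sigma * np.sqrt(1.0 + lev))
    return t

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=50)
y[0] += 5.0                        # corrupt one response, mimicking a noisy label
t = studentized_residuals(X, y)
print(int(np.argmax(np.abs(t))))   # the corrupted sample has the largest |t_i|: 0
```

The mean-shift formulation in Eq. (3) absorbs exactly this per-sample discrepancy into γ_i, so that testing t_i ≠ 0 becomes testing γ_i ≠ 0.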
SPR [31]. To this end, from the statistical perspective, our conference report [31] starts from Eq. (3) to build a sample selection framework, dubbed Scalable Penalized Regression (SPR). With a sparse penalty P(γ; λ) on γ, SPR obtains a regularization solution path of γ(λ) by evolving λ from ∞ to 0. It then identifies the samples whose γ_i become non-zero earlier (i.e., at larger λ) as noisy data, and those selected later as clean data, with a manually specified ratio of selected data. Under the irrepresentable condition [4], [33], SPR enjoys model selection consistency in the sense that it can recover the set of noisy data. By feeding only clean data into the next round of training, the trained network is less corrupted by the noisy data and hence performs well empirically.
Knockoffs-SPR. However, the irrepresentable condition demands prior knowledge of the ground-truth noisy set, which is not accessible in practice. When this condition fails, the network trained with SPR may still be corrupted by a large proportion of noisy data, leading to performance degradation, as empirically verified in our experiments. To amend this problem, we provide a data-adaptive sample selection algorithm that controls the expected rate of noisy data in the selected data under a desired level q, e.g., q = 0.05. As the goal is to identify clean data for the next round of training, we term this rate the False-Selection-Rate (FSR). The FSR is the expected rate of the type-II error in sparse regression, as non-zero elements correspond to the noisy data. Our method for achieving FSR control is inspired by the ideas of Knockoffs in statistics, a recently developed framework for variable selection [1], [2], [34], [35]. The Knockoffs framework aims at selecting non-null variables and controlling the False-Discovery-Rate (FDR) by taking as negative controls knockoff features ˜X, which are constructed as a fake copy of the original features X. Here, the FDR corresponds to the expectation of the type-I error rate in sparse regression. Therefore, the vanilla Knockoffs cannot be directly applied to our SPR framework, since the FSR is the expected rate of the type-II error, and Knockoffs provides no theoretical guarantee for this control. To achieve FSR control, we propose Knockoffs-SPR, which instead constructs knockoff labels ˜Y via permutation of the original labels Y and incorporates them into a data-partition strategy for FSR control.

Formally, we repurpose the statistical Knockoffs in our SPR method and propose a novel data-adaptive sample selection algorithm, dubbed Knockoffs-SPR. It extends SPR by controlling the ratio of noisy data among the selected clean data. With this new property, Knockoffs-SPR ensures that the clean pattern is dominant in the data and hence leads to better network training. Specifically, we partition the whole noisy training set into two random subsets and apply Knockoffs-SPR to the two subsets separately. Each time, we use one subset to estimate the intercept β and the other to select the clean data by comparing the solution paths of γ(λ) and ˜γ(λ), obtained respectively via regression on the noisy labels and the permuted labels. With such a decoupled structure between β and γ, we prove that the FSR can be controlled under any prescribed level. Compared with the original theory of SPR, our new theory enables us to effectively select clean data under general conditions. Besides, Knockoffs-SPR also enjoys superior performance over the original SPR.

Together with network training, the whole framework is illustrated in Fig. 1, in which the sample selection and the network learning are well incorporated into each other. Specifically, we run the network learning process and the sample selection process iteratively and repeat this cycle until convergence. To incorporate Knockoffs-SPR into the end-to-end training pipeline of a deep architecture, the simplest way is to directly solve Knockoffs-SPR for each training mini-batch or training epoch to select clean data.
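To make the permuted-label construction concrete, here is a minimal sketch of one plausible way to build knockoff labels ˜Y: each one-hot row is reassigned to a uniformly chosen different class. This is our own illustrative reading; the precise permutation scheme used by Knockoffs-SPR is specified later in the paper.

```python
import numpy as np

def knockoff_labels(Y, rng):
    """Permute one-hot labels so every sample gets a *different* class.
    Illustrative construction only; not the paper's exact scheme."""
    n, c = Y.shape
    orig = Y.argmax(axis=1)
    shift = rng.integers(1, c, size=n)   # shift in {1, ..., c-1}: never the identity
    knock = (orig + shift) % c
    Y_tilde = np.zeros_like(Y)
    Y_tilde[np.arange(n), knock] = 1.0
    return Y_tilde

rng = np.random.default_rng(0)
Y = np.eye(4)[rng.integers(0, 4, size=8)]   # 8 samples, 4 classes
Y_tilde = knockoff_labels(Y, rng)
print(bool(np.all(Y_tilde.argmax(1) != Y.argmax(1))))   # True: every label changed
```

Intuitively, a clean sample fits its original label much better than its knockoff, while a noisy sample treats the two roughly symmetrically; comparing the two solution paths exploits this asymmetry.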
Solving Knockoffs-SPR for each mini-batch is efficient but suffers from an identifiability issue: the sample size in a mini-batch may be too small to distinguish clean patterns from noisy ones among all classes, especially for large datasets with small batch sizes. Solving Knockoffs-SPR for the whole training set is powerful but suffers from a complexity issue, leading to an unacceptable computation cost. To resolve these two problems, we strike a balance between complexity and identifiability by proposing a splitting strategy that divides the whole dataset into small pieces such that each piece is class-balanced with a proper sample size. In this regard, the sample size of each piece is small enough to be solved efficiently yet large enough to distinguish clean patterns from noisy ones. Knockoffs-SPR then runs on each piece in parallel, making it scalable to large datasets.
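The splitting strategy can be sketched as follows; the piece size and the handling of leftover samples are our own simplifications, not the paper's choices.

```python
import numpy as np

def class_balanced_pieces(labels, per_class_per_piece, rng):
    """Split sample indices into class-balanced pieces of moderate size.
    Each piece receives an equal share of every class (leftovers dropped here)."""
    classes = np.unique(labels)
    shuffled = [rng.permutation(np.where(labels == c)[0]) for c in classes]
    n_pieces = min(len(s) for s in shuffled) // per_class_per_piece
    pieces = [[] for _ in range(n_pieces)]
    for idx in shuffled:                     # deal each class out across the pieces
        for k in range(n_pieces):
            lo = k * per_class_per_piece
            pieces[k].extend(idx[lo:lo + per_class_per_piece])
    return [np.array(p) for p in pieces]

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(4), 100)        # 4 classes x 100 samples
pieces = class_balanced_pieces(labels, per_class_per_piece=25, rng=rng)
print(len(pieces), len(pieces[0]))           # 4 pieces, 100 samples each
```

Because the pieces share no data, each one can be handed to an independent worker process, which is what makes the parallel solve scalable.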
As the removed noisy data still contain useful information for network training, we adopt the semi-supervised training pipeline with CutMix [38], where the noisy data are utilized as unlabeled data. We conduct extensive experiments to validate the effectiveness of our framework on several benchmark datasets and real-world noisy datasets. The results show the efficacy of our Knockoffs-SPR algorithm.

Contributions. Our contributions are as follows:
• Ideologically, we propose to control the False-Selection-Rate in selecting clean data under general scenarios.
• Methodologically, we propose Knockoffs-SPR, a data-adaptive method to control the FSR.
• Theoretically, we prove that Knockoffs-SPR can control the FSR under any desired level.
• Algorithmically, we propose a splitting algorithm for better sample selection with balanced identifiability and complexity on large datasets.
• Experimentally, we demonstrate the effectiveness and efficiency of our method on several benchmark datasets and real-world noisy datasets.
Extensions. Our conference version of this work, SPR, was published in [31]. Compared with SPR [31], we make the following extensions.
• We identify the limitation of SPR and consider FSR control in selecting clean data.
• We propose a new framework, Knockoffs-SPR, which is effective in selecting clean data under general scenarios, both theoretically and empirically.
• We apply our method to Clothing1M and achieve better results than the compared baselines.

Logistics. The rest of this paper is organized as follows:
• In Section 2, we introduce our SPR algorithm with its noisy-set recovery theory.
• In Section 3, the Knockoffs-SPR algorithm is introduced with its FSR control theorem.
• In Section 4, several training strategies are proposed to incorporate Knockoffs-SPR well into the network training.
• In Section 5, connections are made between our proposed works and several previous works.
• In Section 6, we conduct experiments on several synthetic and real-world noisy datasets.
• Section 7 concludes this paper.
2 CLEAN SAMPLE SELECTION

2.1 Problem Setup

We are given a dataset of image-label pairs {(img_i, y_i)}_{i=1}^n, where the noisy label y_i is corrupted from the ground-truth label y*_i. The ground-truth label y*_i and the corruption process are unknown. Our target is to learn a recognition model f(·) such that it can recognize the true category y*_i from the image img_i, i.e., f(img_i) = y*_i, after training on the noisy labels y_i.

In this paper, we adopt deep neural networks as the recognition model and decompose f(·) into fc(g(·)), where g(·) is the deep model for feature extraction and fc(·) is the final fully-connected layer for classification. For each input image img_i, the feature extractor g(·) is used to encode the feature x_i := g(img_i). Then the fully-connected layer outputs the score vector ŷ_i = fc(x_i), which indicates the chance that the image belongs to each class, and the predicted category is given by argmax(ŷ_i).

As the training data contain many noisy labels, simply training on all the data leads to severe degradation of generalization and robustness. Intuitively, if we could identify the clean labels in the noisy training set and train the network with the clean data, we could reduce the influence of noisy labels and achieve better performance and robustness of the model. To this end, we propose a sample selection algorithm that identifies the clean data in the noisy training set with theoretical guarantees.

Notation. In this paper, we use a to represent a scalar, a to represent a vector, and A to represent a matrix. We annotate a* to denote the ground-truth value of a. We use ∥·∥_F to denote the Frobenius norm.
2.2 Clean Sample Selection via Penalized Regression

Motivated by the leave-one-out approach for outlier detection, we introduce an explicit noisy data indicator γ_i for each data point and assume a linear relation between the extracted feature x_i and the one-hot label y_i with the noisy data indicator:

    y_i = x_i^T β + γ_i + ε_i,    (4)

where y_i ∈ R^c is a one-hot vector, and x_i ∈ R^p, β ∈ R^{p×c}, γ_i ∈ R^c, ε_i ∈ R^c. The noisy data indicator γ_i can be regarded as the correction of the linear prediction. For clean data, y_i ∼ N(x_i^T β*, σ² I_c) with γ*_i = 0, and for noisy data, y*_i = y_i − γ*_i ∼ N(x_i^T β*, σ² I_c). We denote C := {i : γ*_i = 0} as the ground-truth clean set.

To select clean data for training, we propose Scalable Penalized Regression (SPR), designed as the following sparse learning paradigm:

    argmin_{β,γ} (1/2) ∥Y − Xβ − γ∥²_F + P(γ; λ),    (5)
where X ∈ R^{n×p} and Y ∈ R^{n×c} are the matrix formulations of {x_i, y_i}_{i=1}^n, and P(·; λ) is a row-wise sparse penalty with coefficient parameter λ, so that P(γ; λ) = Σ_{i=1}^n P(γ_i; λ), e.g., group-lasso sparsity with P(γ; λ) = λ Σ_i ∥γ_i∥₂.

Fig. 2. Solution path of SPR. Red lines indicate noisy data while blue lines indicate clean data. As λ decreases, the γ_i are gradually solved with non-zero values.
+ i ∥γi∥2.
380
+ To estimate C, we only need to solve γ with no need to
381
+ estimate β. Thus to simplify the optimization, we substitute
382
+ the Ordinary Least Squares (OLS) estimate for β with γ
383
+ fixed into Eq. (5). To ensure that ˆβ is identifiable, we apply
384
+ PCA on X to make p ≪ n so that the X has full-column
385
+ rank. Denote
386
+ ˜
387
+ X = I − X
388
+ �X⊤X
389
+ �† X⊤, ˜Y
390
+ =
391
+ ˜
392
+ XY , the
393
+ Eq. (5) is transformed into
394
+ argmin
395
+ γ
396
+ 1
397
+ 2
398
+ ��� ˜Y − ˜
399
+
400
+ ���
401
+ 2
402
+ F + P(γ; λ),
403
+ (6)
404
+ which is a standard sparse linear regression for γ. Note that
405
+ in practice we can hardly choose a proper λ that works well
406
+ in all scenarios. Furthermore, from the equivalence between
407
+ the penalized regression problem and Huber’s M-estimate,
408
+ the solution of γ is returned with soft-thresholding. Thus
409
+ it is not worth finding the precise solution of a single γ.
410
+ Instead, we use a block-wise descent algorithm [39] to solve
411
+ γ with a list of λs and generate the solution path. As
412
+ λ changes from ∞ to 0, the influence of sparse penalty
413
+ decreases, and γi are gradually solved with non-zero values,
414
+ in other words, selected by the model, as visualized in
415
+ Fig. 2. Since earlier selected instance is more possible to be
416
+ noisy, we rank all samples in the descendent order of their
417
+ selecting time defined as:
418
+ Zi = sup {λ : γi (λ) ̸= 0} .
419
+ (7)
420
+ A large Zi means that the γi is earlier selected. Then the top
421
+ samples are identified as noisy data and the other samples
422
+ are selected as clean data. In practice, we select 50% of the
423
+ data as clean data.
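The selecting time Z_i of Eq. (7) can be illustrated with a toy sketch in which the design is the identity, so the group-lasso solution for each row is exact row-wise soft-thresholding and γ_i(λ) ≠ 0 precisely when ∥r_i∥₂ > λ. This closed form is our simplification for illustration and does not hold for the general ˜X.

```python
import numpy as np

def selection_times(R, lambdas):
    """Z_i = sup{lambda : gamma_i(lambda) != 0} on a decreasing lambda grid.
    Identity design: row i activates as soon as lambda drops below ||r_i||_2."""
    Z = np.zeros(R.shape[0])
    norms = np.linalg.norm(R, axis=1)
    for lam in lambdas:                      # lambda evolves from large to small
        newly_active = (norms > lam) & (Z == 0.0)
        Z[newly_active] = lam                # first lambda at which gamma_i != 0
    return Z

rng = np.random.default_rng(1)
R = 0.1 * rng.normal(size=(10, 3))           # residual rows of clean data are small
R[:3] += 2.0                                 # rows 0-2 mimic noisy data
Z = selection_times(R, np.linspace(5.0, 1e-3, 200))
top = sorted(int(i) for i in np.argsort(-Z)[:3])
print(top)                                   # earliest-selected rows: [0, 1, 2]
```

Ranking by Z_i and cutting at a fixed ratio reproduces the 50% rule above; Knockoffs-SPR replaces this fixed cut with a data-adaptive one.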
424
+ 2.3
425
+ The Theory of Noisy Set Recovery in SPR
426
+ The SPR enjoys theoretical guarantees that the noisy data
427
+ set can be fully recovered with high probability, under
428
+ the irrepresentable condition [33]. Formally, consider the
429
+ vectorized version of Eq. (6):
430
+ argmin
431
+ ⃗γ
432
+ 1
433
+ 2
434
+ ���⃗y − ˚
435
+ X⃗γ
436
+ ���
437
+ 2
438
+ 2 + λ ∥⃗γ∥1 ,
439
+ (8)
440
+ where ⃗y, ⃗γ is vectorized from Y , γ in Eq. (6); ˚
441
+ X = Ic ⊗ ˜
442
+ X
443
+ with ⊗ denoting the Kronecker product operator. Denote
444
+ S := supp(⃗γ∗), which is the noisy set Cc. We further denote
445
+ ˚
446
+ XS (resp. ˚
447
+ XSc) as the column vectors of ˚
448
+ X whose indexes
449
+ are in S (resp. Sc) and µ ˚
450
+ X = maxi∈Sc ∥ ˚
451
+ X∥2
452
+ 2. Then we have
453
Theorem 1 (Noisy set recovery). Assume that:
C1 (Restricted eigenvalue): λ_min(X̊_S⊤ X̊_S) = C_min > 0;
C2 (Irrepresentability): there exists an η ∈ (0, 1] such that ∥X̊_Sᶜ⊤ X̊_S (X̊_S⊤ X̊_S)⁻¹∥_∞ ≤ 1 − η;
C3 (Large error): ⃗γ*_min := min_{i∈S} |⃗γ*_i| > h(λ, η, X̊, ⃗γ*);
where ∥A∥_∞ := max_i Σ_j |A_{i,j}| and h(λ, η, X̊, ⃗γ*) = λη/√(C_min µ_X̊) + λ∥(X̊_S⊤ X̊_S)⁻¹ sign(⃗γ*_S)∥_∞.
Let λ ≥ (2σ√µ_X̊/η) √(log cn). Then with probability greater than 1 − 2(cn)⁻¹, model Eq. (8) has a unique solution ⃗γ̂ such that: 1) if C1 and C2 hold, Ĉᶜ ⊆ Cᶜ; 2) if C1, C2, and C3 hold, Ĉᶜ = Cᶜ.
We present the proof in the appendix, following the treatment in [4], [40]. In this theorem, C1 is necessary for a unique solution, and in our case it is mostly satisfied under the natural assumption that clean data form the majority of the training data. If C2 holds, the estimated noisy data are a subset of the truly noisy data. This condition is the key to the success of SPR: it requires enough divergence between clean and noisy data that the clean data cannot be represented by the noisy data. If C3 also holds, the estimated noisy data are exactly the truly noisy data. C3 requires the error measured by γi to be large enough to be distinguished from random noise. If these conditions fail, SPR fails with non-vanishing probability, rather than deterministically.
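Conditions C1 and C2 can be checked numerically for a given design matrix. A small sketch (the function name and the toy matrices are ours): the returned gap is η when C2 holds with that margin, and non-positive when C2 fails.

```python
import numpy as np

def irrepresentable_gap(X, S):
    """Return eta = 1 - max_i sum_j |M_ij| for M = X_Sc^T X_S (X_S^T X_S)^-1.
    Condition C2 of Theorem 1 holds iff the returned value is positive."""
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(X.shape[1]), S)
    XS, XSc = X[:, S], X[:, Sc]
    M = XSc.T @ XS @ np.linalg.inv(XS.T @ XS)   # also needs C1: invertibility
    return 1.0 - np.abs(M).sum(axis=1).max()

# orthogonal columns: off-support correlation vanishes, so eta = 1
gap_orth = irrepresentable_gap(np.eye(4), S=[0, 1])

# a column outside S correlated 0.5 with a column in S: eta = 0.5
X2 = np.array([[1.0, 0.0, 0.5],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])
gap_corr = irrepresentable_gap(X2, S=[0, 1])
```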
3 CONTROLLED CLEAN SAMPLE SELECTION
In the last section, we stop the solution path at the λ for which 50% of the samples are selected as clean data. If this happens to equal the true rate of clean data, Thm. 1 shows that SPR can identify the clean data C under the irrepresentable condition. However, the irrepresentable condition and the ground-truth clean set C are unknown in practice, which makes this theory hard to apply in real life. In particular, with |Cᶜ| unknown, the algorithm can stop at an improper time, so that the noise rate of the selected clean data Ĉ may still be high, and the model trained in the next round is heavily corrupted by noisy patterns.

To resolve this problem of false selection in SPR, in this section we propose a data-adaptive early-stopping method for the solution path that controls the expected noise rate of the selected data, dubbed the False-Selection-Rate (FSR), under a desired level q (0 < q < 1):
FSR = E[ #{j : j ∉ H₀ ∩ Ĉ} / (#{j : j ∈ Ĉ} ∨ 1) ],   (9)
where Ĉ = {j : γ̂_j = 0} is the recovered clean set of γ, and H₀ : γ*_i = 0 denotes the null hypothesis, i.e., that sample i belongs to the clean dataset. The FSR in Eq. (9) therefore controls the false rate among the selected null hypotheses, which is also known as the expected rate of type-II error in hypothesis testing.
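For a single realization, the quantity inside the expectation of Eq. (9) is simply the fraction of the selected "clean" set that is actually noisy. A minimal sketch (the function name and toy index sets are ours):

```python
def false_selection_rate(selected_clean, true_noisy):
    """Empirical FSR for one realization: the fraction of the selected
    clean set that belongs to the ground-truth noisy set (Eq. (9) with
    the expectation dropped). The '∨ 1' guard avoids division by zero."""
    selected_clean = set(selected_clean)
    n_false = len(selected_clean & set(true_noisy))
    return n_false / max(len(selected_clean), 1)

# 4 samples selected as clean; one of them (index 3) is truly noisy
fsr = false_selection_rate(selected_clean=[0, 1, 2, 3], true_noisy=[3, 7])
```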
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
3.1 Knockoffs-SPR
To achieve FSR control, we propose Knockoffs-SPR for clean sample selection. Our method is inspired by knockoff methods [1], [2], [34], [35], [41], with the different focus that we target selecting clean labels via permutation, instead of constructing knockoff features to select explanatory variables. Specifically, under model (4) we permute the label of each sample and construct the permutation ỹ. Model (4) is then solved for y and ỹ to obtain the solution paths γ(λ) and γ̃(λ), respectively. We will show that this construction can pick out clean data from noisy data by comparing the selection time (Eq. (7)) between γ(λ) and γ̃(λ) for each sample. On the basis of this construction, we propose to partition the whole dataset into two disjoint parts, with one part for estimating β and the other for learning γ(λ) and γ̃(λ). We will show that the independence induced by such a data partition enables us to construct comparison statistics whose sign patterns among alternative hypotheses (noisy data) form independent Bernoulli processes, which is crucial for FSR control.
Formally speaking, we split the whole data D into D₁ := (X₁, Y₁) and D₂ := (X₂, Y₂) with nᵢ := |Dᵢ|, and implement Knockoffs-SPR on both D₁ and D₂. In the following, we only introduce the procedure on D₂, as the procedure for D₁ shares the same spirit. Roughly speaking, the procedure is composed of three steps: i) estimate β on D₁; ii) estimate (γ(λ), γ̃(λ)) on D₂; and iii) construct the comparison statistics and selection filters. We leave detailed discussions of each step to Sec. 3.2.
Step i): Estimating β on D₁. Our target is to provide an estimate of β that is independent of D₂. The simplest strategy is to use the standard OLS estimator to obtain β̂₁. However, this estimator may not be accurate, since it is corrupted by noisy samples. For this reason, we first run SPR on D₁ to get clean data and then solve for β via OLS on the estimated clean data.
Step ii): Estimating (γ(λ), γ̃(λ)) on D₂. After obtaining the solution β̂₁ on D₁, we learn γ(λ) on D₂:

(1/2) ∥Y₂ − X₂β̂₁ − γ₂∥²_F + P(γ₂; λ).   (10)
For each one-hot encoded vector y2,j, we randomly permute the position of the 1 and obtain another one-hot vector ỹ2,j ≠ y2,j. For clean data j, ỹ2,j becomes a noisy label; for noisy data, ỹ2,j is switched to another noisy label with probability (c−2)/(c−1), or to the clean label with probability 1/(c−1). After obtaining the permuted matrix Ỹ₂, we learn the solution paths (γ₂(λ), γ̃₂(λ)) using the same algorithm as SPR via:

(1/2) ∥Y₂ − X₂β̂₁ − γ₂∥²_F + Σⱼ P(γ2,j; λ),
(1/2) ∥Ỹ₂ − X₂β̂₁ − γ̃₂∥²_F + Σⱼ P(γ̃2,j; λ).   (11)
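The label permutation can be sketched as follows; the helper name and toy data are ours. Choosing uniformly among the other c − 1 classes is what yields the (c−2)/(c−1) vs. 1/(c−1) split described above for noisy originals.

```python
import numpy as np

def permute_one_hot(y, rng):
    """Move the 1 in a one-hot row to a different class chosen uniformly
    at random among the remaining c - 1 classes."""
    c = y.shape[0]
    cur = int(np.argmax(y))
    new = rng.choice([k for k in range(c) if k != cur])
    out = np.zeros_like(y)
    out[new] = 1
    return out

rng = np.random.default_rng(0)
y = np.array([0, 0, 1, 0])        # original label: class 2
y_tilde = permute_one_hot(y, rng)  # guaranteed to differ from class 2
```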
Step iii): Comparison statistics and selection filters. After obtaining the solution paths (γ₂(λ), γ̃₂(λ)), we define sample significance scores with respect to y2,i and ỹ2,i for each i as the selection times Zi := sup{λ : ∥γ2,i(λ)∥₂ ≠ 0} and Z̃i := sup{λ : ∥γ̃2,i(λ)∥₂ ≠ 0}. With Zi, Z̃i, we define Wi as:

Wi := Zi · sign(Zi − Z̃i).   (12)

Based on these statistics, we define a data-dependent threshold T as

T = max { t > 0 : (1 + #{j : 0 < Wj ≤ t}) / (#{j : −t ≤ Wj < 0} ∨ 1) ≤ q },   (13)

or T = 0 if this set is empty, where q is the pre-defined upper bound. Our algorithm selects the clean subset identified by

C₂ := {j : −T ≤ Wj < 0}.   (14)

Algorithm 1 Knockoffs-SPR
Input: subsets D₁ and D₂.
Output: clean set of D₂.
1: Use D₁ to fit a linear regression model and get β(D₁);
2: Generate the permuted label of each sample i in D₂;
3: Solve Eq. (26) for D₂ and generate {Wi} by Eq. (12);
4: Initialize q = 0.02 and T = 0;
5: while q < 0.5 and T = 0 do
6:   Compute T by Eq. (13);
7:   q = q + 0.02;
8: end while
9: if T is 0 then
10:   Construct the clean set via the half of the samples with the largest Wi in Eq. (14) with T = ∞;
11: else
12:   Construct the clean set via the samples in Eq. (14);
13: end if
14: return clean set.
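The threshold of Eq. (13) and the filter of Eq. (14) can be computed directly from the statistics {Wj}. A minimal sketch (the function name and the toy W are ours):

```python
import numpy as np

def knockoff_threshold(W, q):
    """Largest t > 0 with (1 + #{0 < W_j <= t}) / (#{-t <= W_j < 0} or 1) <= q
    (Eq. (13)); returns 0.0 if no such t exists among the candidate |W_j|."""
    for t in np.sort(np.unique(np.abs(W[W != 0])))[::-1]:
        n_pos = np.sum((W > 0) & (W <= t))
        n_neg = max(np.sum((W < 0) & (W >= -t)), 1)
        if (1 + n_pos) / n_neg <= q:
            return float(t)
    return 0.0

W = np.array([-3.0, -2.0, -1.5, -1.0, 2.5])
T = knockoff_threshold(W, q=0.5)
clean_idx = np.where((W < 0) & (W >= -T))[0].tolist()  # Eq. (14)
```

With q = 0.5 the largest candidate t = 3 already satisfies the ratio test, so all four negative-W samples are selected as clean; with a stricter q = 0.2 no t qualifies and the threshold falls back to 0, which motivates the iterative increase of q in Algorithm 1.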
Empirically, T may equal 0 if the threshold q is sufficiently small. In that case no clean data are selected, which is useless. Therefore, we start with a small q and iteratively increase q and recompute T, until we reach a T > 0, so as to bound the FSR as tightly as possible. In practice, when the FSR cannot be bounded even by q = 50%, we end the selection and simply select the half of the examples most likely to be clean according to {Wj}. The whole procedure of Knockoffs-SPR is shown in Algorithm 1.
3.2 Statistical Analysis of Knockoffs-SPR
In this part, we present the motivations and intuitions behind each step of Knockoffs-SPR.
Data Partition. Knockoffs-SPR partitions the dataset D into two subsets D₁ and D₂. This step decouples the estimates of β and γ, in that we use D₁ to estimate β and D₂ to estimate γ. Then β̂(D₁) is independent of γ̂(D₂), since D₁ and D₂ are disjoint. The independent estimation of β and γ makes FSR control on D₂ provable.
Permutation. As discussed in step ii, when the original label is clean, its permuted label will be noisy. On the other hand, if the original label is noisy, its permuted label becomes clean with probability 1/(c−1) and noisy with probability (c−2)/(c−1), where c denotes the number of classes. Note that the γ of noisy data is often selected earlier than that of clean data in the solution path. This implies larger Z values for noisy data than for clean data. As a result, according to the definition of W, a clean sample will ideally have a small negative W := Z · sign(Z − Z̃), where Z and Z̃ correspond to the clean label and the noisy label, respectively. In contrast, for a noisy sample, W tends to have a large magnitude and is approximately equally likely to be positive or negative. This different behavior of W between clean and noisy data helps us to identify clean samples from noisy ones.
Asymmetric comparison statistics W. The classical way to define comparison statistics is symmetric, i.e., Wi := (Zi ∨ Z̃i) · sign(Zi − Z̃i). In this way, a clean sample with a noisy permuted label tends to have a large |Wi|, as we expect the noisy label to have a large Z̃i. However, this works against our target, as we only require clean samples to have a small magnitude. For this purpose, we design asymmetric comparison statistics that only consider the magnitude associated with the original labels.
To see the asymmetric behavior of W for noisy and clean data, we consider the Karush–Kuhn–Tucker (KKT) conditions of Eq. (26) with respect to (γ2,i, γ̃2,i):

γ2,i + ∂P(γ2,i; λ)/∂γ2,i = x2,i⊤(β* − β̂₁) + γ*2,i + ε(2),i,   (15a)
γ̃2,i + ∂P(γ̃2,i; λ)/∂γ̃2,i = x2,i⊤(β* − β̂₁) + γ̃*2,i + ε̃(2),i,   (15b)

where ε(2),i ∼i.i.d. ε̃(2),i, |γ*2,i| = |γ̃*2,i| if both y2,i and ỹ2,i are noisy, and P(γ2,i; λ) := λ|γ2,i| as an example. Conditioning on β̂₁ and denoting ai := x2,i⊤(β* − β̂₁), we have

P(Wi > 0) = P(|ai + γ*2,i + ε(2),i| > |ai + γ̃*2,i + ε̃(2),i|).   (16)
It can then be seen that if i is clean, we have γ*2,i = 0. Then Zi tends to be small, and moreover it is probable that Zi < Z̃i if β̂₁ estimates β* well. As a result, Wi tends to be a small negative value. On the other hand, if i is noisy, then Zi tends to be large, since γi accounts for the noisy pattern; moreover, Zi < Z̃i and Zi ≥ Z̃i are equally likely when ỹ2,i is switched to another noisy label, which happens with probability (c−2)/(c−1). So Wi tends to have a large value, and moreover

P(Wi > 0) = P(Wi > 0 | ỹ2,i is noisy) P(ỹ2,i is noisy) + P(Wi > 0 | ỹ2,i is clean) P(ỹ2,i is clean)
          = (1/2) · (c−2)/(c−1) + P(Wi > 0 | ỹ2,i is clean) · 1/(c−1),   (17)

which falls in the interval [ (c−2)/(c−1) · 1/2, c/(c−1) · 1/2 ]. That is to say, P(Wi > 0) ≈ 1/2. In this regard, the clean data correspond to small negative values of W in the ideal case, which helps to discriminate them from the noisy data, whose W is large and almost equally likely to be positive or negative.
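Plugging the two extreme values 0 and 1 in for the unknown conditional probability P(Wi > 0 | ỹ2,i is clean) gives the interval stated above; for example, with c = 10 classes the interval is [4/9, 5/9], which brackets 1/2. A quick check (the helper function is ours):

```python
def p_positive_bounds(c):
    """Bounds on P(W_i > 0) for a noisy sample from Eq. (17), obtained by
    substituting 0 and 1 for P(W_i > 0 | permuted label is clean)."""
    lo = 0.5 * (c - 2) / (c - 1)   # = (c-2)/(c-1) * 1/2
    return lo, lo + 1.0 / (c - 1)  # upper end = c/(c-1) * 1/2

lo, hi = p_positive_bounds(10)  # e.g. c = 10 classes as in CIFAR-10
```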
Remark. For noisy y2,i, we have P(Wi > 0 | ỹ2,i is noisy) = 1/2 by assuming |γ*2,i| = |γ̃*2,i|. However, this may not hold in practice when y2,i corresponds to a noisy pattern that has already been learned by the model. In that case, we may have |γ*2,i| < |γ̃*2,i| for a randomly permuted label ỹ2,i. To resolve this problem, we instead set the permuted label to the model's most confident candidate; please refer to Sec. 4.1 for details. Besides, if β̂₁ estimates β* accurately, then according to the KKT conditions in Eq. (15) we have P(Wi > 0) < 1/2. That is, Wi tends to be negative for clean data, which is beneficial for clean sample selection.
Data-adaptive threshold. The proposed data-adaptive threshold T is designed directly to control the FSR. Specifically, the FSR defined in Eq. (9) is equivalent to

FSR(t) = E[ #{j : γj ≠ 0 and −t ≤ Wj < 0} / (#{j : −t ≤ Wj < 0} ∨ 1) ],   (18)

where the denominator is the number of selected clean data according to Eq. (14) and the numerator is the number of falsely selected noisy data. Eq. (18) can be further decomposed as

E[ (#{γj≠0, −t ≤ Wj < 0} / (1 + #{γj≠0, 0 < Wj ≤ t})) · ((1 + #{γj≠0, 0 < Wj ≤ t}) / (#{−t ≤ Wj < 0} ∨ 1)) ]
≤ E[ (#{γj≠0, −t ≤ Wj < 0} / (1 + #{γj≠0, 0 < Wj ≤ t})) · ((1 + #{0 < Wj ≤ t}) / (#{−t ≤ Wj < 0} ∨ 1)) ]
≤ E[ #{γj≠0, −t ≤ Wj < 0} / (1 + #{γj≠0, 0 < Wj ≤ t}) ] · q,   (19)
where the last inequality comes from the definition of T in Eq. (13). To control the FSR, it suffices to bound E[ #{γj≠0, −t ≤ Wj < 0} / (1 + #{γj≠0, 0 < Wj ≤ t}) ]. Roughly speaking, this term is the ratio of the number of negative W to the number of positive W among the noisy data. Since W for noisy data is approximately equally likely to be positive or negative, as mentioned earlier, this term is intuitively ≈ 1/2. Formally, we construct a martingale process of 1(Wi > 0) among the noisy data, which is independent of the magnitude |W| thanks to the data partition. We leave these details to the appendix.
3.3 FSR Control of Knockoffs-SPR
Our target is to show that FSR ≤ q with our data-adaptive threshold T in Eq. (13). Our main result is as follows:

Theorem 2 (FSR control). For a c-class classification task and any 0 < q ≤ 1, the solution of Knockoffs-SPR satisfies

FSR(T) ≤ q   (20)

with the threshold T for the two subsets defined respectively as

Ti = max { t ∈ W : (1 + #{j : 0 < Wj ≤ t}) / (#{j : −t ≤ Wj < 0} ∨ 1) ≤ ((c−2)/(2c)) · q }.
We present the proof in the appendix. The coefficient 1/2 comes from the subset-partition strategy, i.e., running Knockoffs-SPR on the two subsets D₁ and D₂, and the term (c−2)/c comes from the upper bound of the first part of Eq. (19). This theorem tells us that the FSR can be controlled under the given level q by the Knockoffs-SPR procedure. Compared to SPR, this procedure is more practical and useful in real-world experiments; we demonstrate its utility in Sec. 6.3.
Algorithm 2 Knockoffs-SPR on the full training set
Input: Noisy feature-label pairs {(xi, yi)}ⁿᵢ₌₁, group class size N, sample size m, (optional) clean set.
Output: clean set.
1: if number of classes > N then
2:   Compute class prototypes using Eq. (22);
3:   Divide classes into groups using Eq. (21);
4: else
5:   Use all classes as a single group;
6: end if
7: Construct pieces with m uniformly sampled examples for each class (total = N × m);
8: for each piece do
9:   Randomly partition the piece into two sub-pieces A and B (each containing Nm/2 examples);
10:  Run Algorithm 1(B, A) on A to get clean-set-A;
11:  Run Algorithm 1(A, B) on B to get clean-set-B;
12:  Concatenate clean-set-A and clean-set-B to get clean-set-piece;
13: end for
14: Concatenate the clean-set-pieces to get the clean set;
15: return clean set.
4 LEARNING WITH KNOCKOFFS-SPR
In this section, we describe how to incorporate Knockoffs-SPR into the training of neural networks. We first introduce several implementation details of Knockoffs-SPR, then a splitting algorithm that makes Knockoffs-SPR scalable to large-scale datasets. Finally, we discuss training strategies that better utilize the selected clean data.
4.1 Knockoffs-SPR in Practice
We introduce several strategies to improve FSR control and the power of selecting clean samples, inspired by the different behaviors of W between noisy and clean samples. Ideally, for a clean sample i, Wi is expected to be a small negative value; if i is noisy, Wi tends to be large and is approximately 50% likely to be positive or negative, as shown in Eq. (17). To achieve these properties for better clean sample selection, we propose the following strategies, concerning the feature extractor, data pre-processing, the label-permutation strategy, estimating β on D₁, and clean data identification in Eq. (13), (14).
Feature Extractor. A good feature extractor is essential for clean sample selection algorithms. In our experiments, we adopt the self-supervised method SimSiam [42] to pre-train the feature extractor, so that X encodes the information of the training data well in the early stages.
Data Preprocessing. We apply PCA to the features extracted by the neural network for dimension reduction. This makes X full rank, which ensures the identifiability of β̂ in SPR. Besides, such low dimensionality lets the model estimate β more accurately. According to the KKT conditions in Eq. (15), the Wi of clean data i then tends to be negative with small magnitude. In this regard, the model has better power for clean sample selection, i.e., it selects more clean samples while controlling the FSR.
Label Permutation Strategy. Instead of the random permutation strategy, our Knockoffs-SPR permutes the label to the most confident candidate provided by the model at each training stage, for FSR considerations, especially when the noise rate is high or some noisy pattern is dominant in the data. Specifically, if the pattern of some noisy label y2,i has been learned by the model, then γ*2,i may have a smaller magnitude than the γ̃*2,i of a randomly permuted label ỹ2,i that the model has not learned, violating P(Wi > 0 | ỹ2,i) = 1/2 and hence P(Wi > 0) ≈ 1/2 in practice. In contrast, the most-confident permutation alleviates this problem, because the most confident label ỹ2,i naturally yields a small magnitude of γ̃*2,i.
Estimating β on D₁. We run SPR as the first step to learn β on D₁. Compared to vanilla OLS, SPR removes some noisy patterns from the data and hence achieves a more accurate estimate of β. As in the data pre-processing step, such an accurate estimate improves the power of selecting clean samples.
Clean data identification in Eq. (13), (14). We compute T over the W of each class and identify the clean subset per class, to improve the selection power for every class. In practice, since some classes may be easier to learn than others, the Wi for samples i in these classes have smaller magnitudes; data from these classes would dominate if we computed T and identified C₂ over all classes jointly. With this per-class design, the selected clean data are more balanced, which facilitates training in the subsequent epochs.
4.2 Scalability to Large Datasets
The computational cost of the sample selection algorithm grows with the number of training samples, making it unscalable to large datasets. To resolve this, we split the full training set into many pieces, each containing a small portion of the training categories with a small number of training data. With this splitting strategy, we can run Knockoffs-SPR on several pieces in parallel and significantly reduce the running time. For the splitting strategy, we note that the key to identifying clean data is the different behavior in the magnitude and sign of W. This difference is weakened when the patterns of clean classes are similar to the noisy ones, which may lead to unsatisfactory recall/power in identifying the clean set. This motivates us to group similar categories together, to facilitate discriminating clean data from noisy ones.
Formally speaking, we define the similarity between class i and class j as

s(i, j) = pi⊤ pj,   (21)

where p denotes a class prototype. To obtain pi for class i, we take the clean features xi of each class extracted by the network along the training iterations, and average them to obtain the class prototype p_c after the current training epoch ends:

p_c = Σⁿᵢ₌₁ xi 1(yi = c, i ∈ C) / Σⁿᵢ₌₁ 1(yi = c, i ∈ C).   (22)
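Eqs. (21) and (22) can be sketched directly in numpy. The function names, the fallback for classes that have no clean member yet, and the toy features are ours:

```python
import numpy as np

def class_prototypes(X, y, clean_mask, n_classes):
    """Average the clean features of each class (Eq. (22)); fall back to
    all samples of a class if none of them is currently marked clean."""
    protos = np.zeros((n_classes, X.shape[1]))
    for c in range(n_classes):
        sel = (y == c) & clean_mask
        if not sel.any():
            sel = (y == c)
        protos[c] = X[sel].mean(axis=0)
    return protos

def most_similar_class(protos, c):
    """argmax over j != c of the inner-product similarity of Eq. (21)."""
    s = protos @ protos[c]
    s[c] = -np.inf
    return int(np.argmax(s))

# three classes, one sample each; classes 0 and 1 have similar features
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
y = np.array([0, 1, 2])
protos = class_prototypes(X, y, clean_mask=np.array([True] * 3), n_classes=3)
partner = most_similar_class(protos, 0)
```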
Then the most similar classes are grouped together. In the initialization step, when the clean set has not been estimated yet, we simply use all the data to calculate the class prototypes. In our experiments, each group is designed to have 10 classes.

For the instances in each group, we split the training data of each class in a balanced way, such that each piece contains the same number of instances per class. This number is chosen to ensure that the clean pattern remains the majority within a piece, so that the optimization can be done easily. In practice, we select 75 training data from each class to construct a piece. When the class proportions are imbalanced in the original dataset, we adopt an over-sampling strategy, sampling the instances of classes with less training data multiple times, to ensure that each training instance is selected in some piece. The pipeline of our splitting algorithm is described in Algorithm 2.

Algorithm 3 Training with Knockoffs-SPR
Input: Noisy dataset {(imgi, xi, yi)}ⁿᵢ₌₁, p.
Output: Trained network.
Initialization:
1: Model: a self-supervised pre-trained backbone with a randomly initialized fully-connected layer, and an EMA model;
2: Initial clean set: run Algorithm 2 with the self-supervised pre-trained features and the noisy labels;
Training Process:
3: for ep = 0 to max epochs do
4:   for each mini-batch do
5:     Sample r from U(0, 1);
6:     if r > p then
7:       Train the network using Eq. (25);
8:     else
9:       Train the network using Eq. (24);
10:    end if
11:    Update the features x visited in the current mini-batch;
12:    Update the EMA model;
13:  end for
14:  Run Algorithm 2 on {(xi, yi)}ⁿᵢ₌₁ to get the clean set;
15: end for
16: return Trained network.
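The balanced piece construction with oversampling can be sketched as follows. The helper name, the piece-count rule, and the toy class sizes are ours, and the real algorithm additionally groups similar classes first:

```python
import numpy as np

def make_pieces(indices_by_class, m, rng):
    """Split each class's sample indices into pieces of m per class,
    oversampling short classes so every index lands in some piece."""
    n_pieces = max(int(np.ceil(len(idx) / m)) for idx in indices_by_class)
    pieces = [[] for _ in range(n_pieces)]
    for idx in indices_by_class:
        idx = np.asarray(idx)
        need = n_pieces * m
        reps = int(np.ceil(need / len(idx)))
        # repeat shuffled copies until every piece can take m samples
        pool = np.concatenate([rng.permutation(idx) for _ in range(reps)])[:need]
        for p in range(n_pieces):
            pieces[p].extend(pool[p * m:(p + 1) * m].tolist())
    return pieces

rng = np.random.default_rng(0)
# class 0 has 4 samples, class 1 only 2 -> class 1 is oversampled
pieces = make_pieces([[0, 1, 2, 3], [4, 5]], m=2, rng=rng)
```

Each of the two resulting pieces holds 2 samples per class, and every original index appears in at least one piece.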
4.3 Network Learning with Knockoffs-SPR
When training with Knockoffs-SPR, we can further exploit the support of the noisy data by combining Knockoffs-SPR with semi-supervised algorithms. In this paper, we interpolate image regions between clean data and noisy data as in CutMix [38]:
img̃ = M ⊙ img_clean + (1 − M) ⊙ img_noisy,   (23a)
ỹ = λ y_clean + (1 − λ) y_noisy,   (23b)
where M ∈ {0, 1}^{W×H} is a binary mask, ⊙ is element-wise multiplication, λ ∼ Beta(0.5, 0.5) is the interpolation coefficient, and the clean and noisy data are identified by Knockoffs-SPR. We then train the network on the interpolated data using

L(img̃, ỹ) = L_CE(img̃, ỹ).   (24)
Empirically, we can switch between this semi-supervised training and standard supervised training on the estimated clean data:

L(imgi, yi) = 1_{i∉O} · L_CE(imgi, yi),   (25)

where 1_{i∉O} is the indicator function, meaning that only the cross-entropy loss of the estimated clean data contributes to the loss. We further maintain a model with EMA-updated weights. Our full algorithm is illustrated in Algorithm 3. Neural networks trained with this pipeline enjoy powerful recognition capacity on several synthetic and real-world noisy datasets.
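The CutMix-style interpolation of Eq. (23) can be sketched as follows. This is our illustrative version, with the rectangle area tied to 1 − λ as in the original CutMix; Knockoffs-SPR only requires that the two images come from the estimated clean and noisy sets:

```python
import numpy as np

def cutmix(img_clean, img_noisy, y_clean, y_noisy, lam, rng):
    """Paste a random rectangle of the noisy image into the clean one
    (Eq. (23a)) and mix the labels with coefficient lam (Eq. (23b)).
    The rectangle covers a fraction 1 - lam of the image area."""
    H, W = img_clean.shape[:2]
    cut_h, cut_w = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
    cy = rng.integers(0, H - cut_h + 1)
    cx = rng.integers(0, W - cut_w + 1)
    mixed = img_clean.copy()
    mixed[cy:cy + cut_h, cx:cx + cut_w] = img_noisy[cy:cy + cut_h, cx:cx + cut_w]
    y_mixed = lam * y_clean + (1 - lam) * y_noisy
    return mixed, y_mixed

rng = np.random.default_rng(0)
a, b = np.zeros((8, 8)), np.ones((8, 8))               # toy "images"
img, y = cutmix(a, b, np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                lam=0.75, rng=rng)
```

With λ = 0.75 the pasted box is 4×4 (a quarter of the 8×8 area), and the mixed one-hot labels become (0.75, 0.25).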
5 RELATED WORK
Here we draw connections between our Knockoffs-SPR and previous research efforts.
5.1 Learning with Noisy Labels
The goal of Learning with Noisy Labels (LNL) is to train a more robust model from a noisy dataset. LNL algorithms can be roughly categorized into two groups: robust algorithms and noise detection. A robust algorithm does not focus on specific noisy data but designs dedicated modules to ensure that networks can be trained well even on noisy datasets. Methods in this direction include constructing robust networks [2]–[5], robust loss functions [6]–[9], and robust regularization [43]–[46] against noisy labels.
Noise detection methods aim to identify the noisy data and design specific strategies to deal with them, such as down-weighting their importance in the loss function for network training [47], re-labeling them to obtain correct labels [48], or treating them as unlabeled data in a semi-supervised manner [49], etc.

In noise detection algorithms, noisy data are identified by irregular patterns, including large errors [14], gradient directions [50], disagreement between multiple networks [15], inconsistency along the training path [17], and spatial properties of the training data [18], [51]–[53]. Some algorithms [50], [54] rely on the existence of an extra clean set to detect noisy data.
After detecting the clean data, the simplest strategy is to train the network on the clean data only, or to re-weight the data [55] to eliminate the noise. Some algorithms [49], [56] regard the detected noisy data as unlabeled data to fully exploit the distributional support of the training set in a semi-supervised manner. There are also studies that design label-correction modules [2], [48], [54], [57]–[59] to pseudo-label the noisy data for training the network. Few of these approaches are designed from a statistical perspective with non-asymptotic guarantees on clean sample selection. In contrast, our Knockoffs-SPR can theoretically control the false-selection rate when selecting clean samples under general scenarios.
5.2 Mean-Shift Parameters
Mean-shift parameters, or incidental parameters [21], were originally introduced to solve the robust estimation problem via penalized estimation [60]. With a different focus on specific parameters, this formulation has attracted wide attention across research topics, including economics [21]–[24], robust regression [20], [25], statistical ranking [26], face recognition [27], semi-supervised few-shot learning [28], [29], and Bayesian preference learning [30], to name a few. Previous work usually uses this formulation to build robust linear models, while in this paper we adopt it to select clean data and aid the training of neural networks. Furthermore, we design an FSR control module and a scalable sample selection algorithm based on mean-shift parameters, with theoretical guarantees.
5.3 Knockoffs
Knockoffs were first proposed in [34] as a data-adaptive method to control the FDR of variable selection in the sparse regression problem. The method was later extended to high-dimensional regression [1], [61], multi-task regression [35], outlier detection [41], and structural sparsity [2]. The core of knockoffs is to construct a fake copy of X as a negative control for the original features, in order to select truly positive features with FDR control. Our Knockoffs-SPR is inspired by, but different from, classical knockoffs in the following respects: i) knockoffs control the FDR, i.e., the expected rate of type-I error, while our goal is to control the expected rate of type-II error, a.k.a. the FSR, in the noisy-data scenario; ii) instead of constructing a copy of X, we construct the copy Ỹ via permutation. Equipped with a calibrated data-partitioning strategy, our method can control the FSR at any desired level.
6 EXPERIMENTS
Datasets. We validate the effectiveness of Knockoffs-SPR on the synthetic noisy datasets CIFAR-10 and CIFAR-100 [62], and on the real-world noisy datasets WebVision [63] and Clothing1M [2]. We consider two types of label noise for CIFAR: (i) symmetric noise, where every class is corrupted uniformly with all other labels; and (ii) asymmetric noise, where labels are corrupted by similar (in pattern) classes. WebVision contains 2.4 million images collected from the internet with the same category list as ImageNet ILSVRC12. Clothing1M contains 1 million images collected from the internet and labeled by the surrounding texts. The WebVision and Clothing1M datasets can thus be regarded as real-world challenges.
Backbones. For CIFAR, we use ResNet-18 [64] as the backbone. For WebVision, we use Inception-ResNet [65] to extract features, following previous works. For Clothing1M, we use ResNet-50 as the backbone. For CIFAR and WebVision, we self-supervised pre-train for 100 and 350 epochs, respectively, using SimSiam [42]. For Clothing1M, we use ImageNet pre-trained weights, following previous works.
Hyperparameter setting. We use SGD to train all the networks, with a momentum of 0.9 and a cosine learning rate decay strategy. The initial learning rate is set to 0.01. The weight decay is set to 1e-4 for Clothing1M and 5e-4 for the other datasets. We use a batch size of 128 for all experiments. We use random crop and random horizontal flip as augmentation strategies. The network is trained for 180 epochs for CIFAR, 300 epochs for WebVision, and 5 epochs for Clothing1M. The network training strategy is selected with p = 0.5 (line 6 in Alg. 3) for Clothing1M, while for the other datasets we only use CutMix training. For the features used in Knockoffs-SPR, we reduce the dimension of X to the number of classes: 14 for Clothing1M, and 10 for the other datasets (each piece of CIFAR-100 and WebVision contains 10 classes). We also run SPR with our new network training algorithm (Alg. 3).
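The cosine learning-rate decay can be written as a one-line schedule; the exact half-period form below is a standard convention and an assumption here:

```python
import math

def cosine_lr(epoch, total_epochs, base_lr=0.01, min_lr=0.0):
    """Cosine-annealed learning rate: starts at base_lr, decays smoothly to min_lr."""
    t = epoch / total_epochs
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

# e.g. the CIFAR setting in the text: initial lr 0.01 over 180 epochs
schedule = [cosine_lr(e, 180) for e in range(180)]
```

The schedule starts at the initial learning rate, reaches half of it mid-training, and approaches zero at the end.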
6.1 Evaluation on Synthetic Label Noise

Competitors. We use the cross-entropy loss (Standard) as the baseline algorithm for the two datasets. We compare Knockoffs-SPR with Forgetting [66], which trains the network with a dropout strategy; Bootstrap [67], which trains with bootstrapping; Forward Correction [55], which corrects the loss function to get a robust model; Decoupling [68], which uses a meta-update strategy to decouple the update time and update method; MentorNet [12], which uses a teacher network to help train the network; Co-teaching [11], which uses two networks to teach each other; Co-teaching+ [15], which further uses an update-by-disagreement strategy to improve Co-teaching; IterNLD [51], which uses an iterative update strategy; RoG [52], which uses generated classifiers; PENCIL [59], which uses a probabilistic noise correction strategy; GCE [7] and SL [8], which are extensions of the standard cross-entropy loss function; and TopoFilter [18], which uses the feature representation to detect noisy data. For each dataset, all the experiments are run with the same backbone to make a fair comparison. We randomly run all the experiments five times and report the mean and standard deviation of the last-epoch accuracy. The results of the competitors are reported in [18].
As shown in Table 1, Knockoffs-SPR outperforms the other competitors on CIFAR, validating its effectiveness under different noise scenarios. SPR performs better at the higher symmetric noise rates of CIFAR-100. This may be attributed to its manual selection threshold of 50% of the data: SPR then selects more data than Knockoffs-SPR; for example, in the Sym. 80% noise scenario SPR selects 24,816 clean samples while Knockoffs-SPR selects 18,185. This leads to a better recovery of clean data (a recall of 94.22%, versus 81.20% for Knockoffs-SPR) and thus a better recognition capacity.
6.2 Evaluation on Real-World Noisy Datasets

In this part, we compare Knockoffs-SPR with other methods on the real-world noisy datasets WebVision and Clothing1M. We follow previous work to train and test on the first 50 classes of WebVision. We also evaluate models trained on WebVision on ILSVRC12 to test the cross-dataset accuracy.
Competitors. For WebVision, we compare with cross-entropy (CE) training, as well as Decoupling [68], D2L [69], MentorNet [12], Co-teaching [11], Iterative-CV [13], and DivideMix [49]. For Clothing1M, we compare with F-correction [55], M-correction [56], Joint-Optim [48], Meta-Cleaner [70], Meta-Learning [71], P-correction [59], TopoFilter [18], and DivideMix [49].

JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015

TABLE 1
Test accuracies (%) on several benchmark datasets with different settings.

Dataset    Method          Sym. Noise Rate                                 Asy. Noise Rate
                           0.2       0.4       0.6       0.8               0.2       0.3       0.4
CIFAR-10   Standard        85.7±0.5  81.8±0.6  73.7±1.1  42.0±2.8          88.0±0.3  86.4±0.4  84.9±0.7
           Forgetting      86.0±0.8  82.1±0.7  75.5±0.7  41.3±3.3          89.5±0.2  88.2±0.1  85.0±1.0
           Bootstrap       86.4±0.6  82.5±0.1  75.2±0.8  42.1±3.3          88.8±0.5  87.5±0.5  85.1±0.3
           Forward         85.7±0.4  81.0±0.4  73.3±1.1  31.6±4.0          88.5±0.4  87.3±0.2  85.3±0.6
           Decoupling      87.4±0.3  83.3±0.4  73.8±1.0  36.0±3.2          89.3±0.3  88.1±0.4  85.1±1.0
           MentorNet       88.1±0.3  81.4±0.5  70.4±1.1  31.3±2.9          86.3±0.4  84.8±0.3  78.7±0.4
           Co-teaching     89.2±0.3  86.4±0.4  79.0±0.2  22.9±3.5          90.0±0.2  88.2±0.1  78.4±0.7
           Co-teaching+    89.8±0.2  86.1±0.2  74.0±0.2  17.9±1.1          89.4±0.2  87.1±0.5  71.3±0.8
           IterNLD         87.9±0.4  83.7±0.4  74.1±0.5  38.0±1.9          89.3±0.3  88.8±0.5  85.0±0.4
           RoG             89.2±0.3  83.5±0.4  77.9±0.6  29.1±1.8          89.6±0.4  88.4±0.5  86.2±0.6
           PENCIL          88.2±0.2  86.6±0.3  74.3±0.6  45.3±1.4          90.2±0.2  88.3±0.2  84.5±0.5
           GCE             88.7±0.3  84.7±0.4  76.1±0.3  41.7±1.0          88.1±0.3  86.0±0.4  81.4±0.6
           SL              89.2±0.5  85.3±0.7  78.0±0.3  44.4±1.1          88.7±0.3  86.3±0.1  81.4±0.7
           TopoFilter      90.2±0.2  87.2±0.4  80.5±0.4  45.7±1.0          90.5±0.2  89.7±0.3  87.9±0.2
           SPR             92.0±0.1  94.6±0.2  91.6±0.2  80.5±0.6          89.0±0.8  90.3±0.8  91.0±0.6
           Knockoffs-SPR   95.4±0.1  94.5±0.1  93.3±0.1  84.6±0.8          95.1±0.1  94.5±0.2  93.6±0.2
CIFAR-100  Standard        56.5±0.7  50.4±0.8  38.7±1.0  18.4±0.5          57.3±0.7  52.2±0.4  42.3±0.7
           Forgetting      56.5±0.7  50.6±0.9  38.7±1.0  18.4±0.4          57.5±1.1  52.4±0.8  42.4±0.8
           Bootstrap       56.2±0.5  50.8±0.6  37.7±0.8  19.0±0.6          57.1±0.9  53.0±0.9  43.0±1.0
           Forward         56.4±0.4  49.7±1.3  38.0±1.5  12.8±1.3          56.8±1.0  52.7±0.5  42.0±1.0
           Decoupling      57.8±0.4  49.9±1.0  37.8±0.7  17.0±0.7          60.2±0.9  54.9±0.1  47.2±0.9
           MentorNet       62.9±1.2  52.8±0.7  36.0±1.5  15.1±0.9          62.3±1.3  55.3±0.5  44.4±1.6
           Co-teaching     64.8±0.2  60.3±0.4  46.8±0.7  13.3±2.8          63.6±0.4  58.3±1.1  48.9±0.8
           Co-teaching+    64.2±0.4  53.1±0.2  25.3±0.5  10.1±1.2          60.9±0.3  56.8±0.5  48.6±0.4
           IterNLD         57.9±0.4  51.2±0.4  38.1±0.9  15.5±0.8          58.1±0.4  53.0±0.3  43.5±0.8
           RoG             63.1±0.3  58.2±0.5  47.4±0.8  20.0±0.9          67.1±0.6  65.6±0.4  58.8±0.1
           PENCIL          64.9±0.3  61.3±0.4  46.6±0.7  17.3±0.8          67.5±0.5  66.0±0.4  61.9±0.4
           GCE             63.6±0.6  59.8±0.5  46.5±1.3  17.0±1.1          64.8±0.9  61.4±1.1  50.4±0.9
           SL              62.1±0.4  55.6±0.6  42.7±0.8  19.5±0.7          59.2±0.6  55.1±0.7  44.8±0.1
           TopoFilter      65.6±0.3  62.0±0.6  47.7±0.5  20.7±1.2          68.0±0.3  66.7±0.6  62.4±0.2
           SPR             72.5±0.2  75.0±0.1  70.9±0.3  38.1±0.8          71.9±0.2  72.4±0.3  70.9±0.5
           Knockoffs-SPR   77.5±0.2  74.3±0.2  67.8±0.4  30.5±1.0          77.3±0.4  76.3±0.3  73.9±0.6

TABLE 2
Test accuracies (%) on WebVision and ILSVRC12 (trained on WebVision).

Method          WebVision         ILSVRC12
                top1     top5     top1     top5
F-correction    61.12    82.68    57.36    82.36
Decoupling      62.54    84.74    58.26    82.26
D2L             62.68    84.00    57.80    81.36
MentorNet       63.00    81.40    57.80    79.92
Co-teaching     63.58    85.20    61.48    84.70
Iterative-CV    65.24    85.34    61.60    84.98
DivideMix       77.32    91.64    75.20    90.84
SPR             77.08    91.40    72.32    90.92
Knockoffs-SPR   78.20    92.36    74.72    92.88

The results on the real-world datasets are shown in Table 3 and Table 2, where the results of the competitors are reported in [49]. Knockoffs-SPR outperforms almost all the competitors, showing its ability to handle real-world challenges. Compared with SPR, Knockoffs-SPR also achieves better performance, indicating the benefit of FSR control in real-world problems of learning with noisy labels.
TABLE 3
Test accuracies (%) on Clothing1M.

Method          Accuracy
Cross-Entropy   69.21
F-correction    69.84
M-correction    71.00
Joint-Optim     72.16
Meta-Cleaner    72.50
Meta-Learning   73.47
P-correction    73.49
TopoFilter      74.10
DivideMix       74.76
SPR             71.16
Knockoffs-SPR   75.25
6.3 Evaluation of Sample Selection Quality

To test whether Knockoffs-SPR leads to better sample selection quality, we compute the following statistics on CIFAR-10 under different noise scenarios, including Sym. 40%, Sym. 80%, and Asy. 40%: (1) FSR: the ratio of falsely selected noisy data in the estimated clean data, which is the target that Knockoffs-SPR aims to control; (2) Recall: the ratio of selected ground-truth clean data to the full ground-truth clean data, which indicates the power of sample selection algorithms; (3) F1-score: the harmonic mean of precision (1 − FSR) and recall, which measures the balanced performance of FSR control and power. We plot the corresponding statistics of each algorithm along the training epochs in Fig. 3. We further visualize the estimated FSR, q, of Knockoffs-SPR to compare with the ground-truth FSR. As we use the splitting algorithm, where each piece contains 10 classes with each class containing a subset of the data, we estimate the FSR for each piece and report the average and standard deviation.

[Figure 3: nine panels plotting FSR, Recall, and F-score (%) against training epochs under Symmetric-40%, Symmetric-80%, and Asymmetric-40% noise, comparing Knockoffs-SPR, the estimated q, SPR, and TopoFilter.]

Fig. 3. Performance (%) comparison on sample selection along the training path on CIFAR-10 under different noise scenarios. In the FSR panels, we also visualize the estimated FSR (q) of Knockoffs-SPR, which is the threshold we use to select clean data.
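These three statistics can be computed directly from index sets; a minimal sketch (the toy index sets below are purely illustrative):

```python
def selection_quality(selected, clean):
    """selected: indices chosen as clean; clean: ground-truth clean indices.
    Returns (FSR, recall, F1) as defined in the text."""
    selected, clean = set(selected), set(clean)
    true_clean = len(selected & clean)
    fsr = 1.0 - true_clean / len(selected)   # falsely selected noisy fraction
    recall = true_clean / len(clean)         # recovered fraction of clean data
    precision = 1.0 - fsr
    f1 = 2 * precision * recall / (precision + recall)
    return fsr, recall, f1

fsr, recall, f1 = selection_quality(selected=[0, 1, 2, 3, 9], clean=range(8))
```

Here sample 9 is a false selection, so FSR = 0.2, and four of the eight clean samples are recovered, so recall = 0.5.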
FSR control in practice. (1) When the noise rate is not high, e.g., in the Sym. 40% and Asy. 40% scenarios, the ground-truth FSR is well upper-bounded by the estimated FSR (within a single standard deviation). When the noise rate is high, e.g., in the Sym. 80% noise scenario, the FSR cannot be controlled in the early stage. However, as training goes on, the FSR is well-bounded by Knockoffs-SPR.
(2) When the training set is not very noisy, e.g., in the Sym. 40% scenario, the true FSR is far below the estimated q. This gap can be explained by a good estimation of β due to the small noise rate. When ˆβ1 can accurately estimate β∗, the ˜γ∗_{2,i} dominate in Eq. (15). Therefore, P(Wi > 0 | ˜y2,i is clean) > 1/2, making P(Wi > 0) > 1/2 > (c − 2)/(2(c − 1)). Since the true FSR bound is inversely proportional to P(Wi > 0) (FSR ∝ max_{i∈Cc} 1/P(Wi > 0) − 1), it is smaller than the theoretical bound q.
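As a quick numeric check of the inequality above (a sketch on our part, with c the number of classes per piece as in the text):

```python
def knockoff_prob_bound(c):
    """The threshold (c - 2) / (2(c - 1)) that P(Wi > 0) must exceed."""
    return (c - 2) / (2 * (c - 1))

def fsr_factor(p_w_pos):
    """The factor 1/P(Wi > 0) - 1: the FSR bound shrinks as P(Wi > 0) grows."""
    return 1.0 / p_w_pos - 1.0

bound = knockoff_prob_bound(10)   # c = 10 classes per piece
```

For c = 10 the threshold is 8/18 ≈ 0.444 < 1/2, so whenever P(Wi > 0) exceeds 1/2 the realized FSR sits strictly below the theoretical bound q.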
Sample selection quality comparison. We compare the sample selection quality of Knockoffs-SPR with SPR and TopoFilter [18]. (1) Knockoffs-SPR enjoys the (almost) best FSR control capacity in all noise scenarios, especially in the high-noise-rate setting. The other algorithms can fail to control the FSR (e.g., in the Sym. 80% scenario). (2) The power of Knockoffs-SPR is comparable to the best algorithms in the Sym. 40% and Asy. 40% scenarios. In the Sym. 80% case, Knockoffs-SPR sacrifices some power for FSR control. (3) Taken together, Knockoffs-SPR achieves the best F1-score on sample selection quality, which establishes its superiority in selecting clean data with FSR control.
TABLE 4
Ablation (%) of Knockoffs-SPR on CIFAR-10.

                 SPR      ∗-random     ∗-multi      ∗-noPCA        Knockoffs-SPR
Sym. 40%  Acc.   94.0     92.0         94.4         81.7           94.7
          FSR    0.82     23.04        1.31         11.51          1.27
          q      -        4.31±0.73    2.00±0.00    14.18±7.62     5.59±1.11
Sym. 80%  Acc.   78.0     84.6         83.0         10.0           84.3
          FSR    60.47    49.76        25.77        78.06          26.72
          q      -        9.47±4.39    2.22±0.62    25.95±11.88    19.52±12.77
Asy. 40%  Acc.   89.5     84.4         93.4         93.7           93.5
          FSR    2.19     16.94        2.97         7.62           2.84
          q      -        4.15±2.59    2.00±0.00    5.22±2.85      4.45±2.68
6.4 Further Analysis

Influence of Knockoffs-SPR strategies. We compare Knockoffs-SPR with several variants: SPR (the original SPR algorithm), ∗-random (Knockoffs-SPR with randomly permuted labels), ∗-multi (Knockoffs-SPR without class-specific selection), and ∗-noPCA (Knockoffs-SPR without using PCA to pre-process the features). Experiments are conducted on CIFAR-10 under different noise scenarios, as in Table 4. We observe the following:
(1) As also shown in Fig. 3, SPR can control the FSR in Sym. 40% and Asy. 40% but fails in Sym. 80%. This may be because, when the noisy pattern is not significant, the collinearity between noisy samples and clean ones is weak, as shown by the distribution of the irrepresentable value {∥(X⊤_S X_S)^{−1} X⊤_S X_j∥_1}_{j∈S^c} in Fig. 1 in the appendix. In this regime, most of the earlier (resp. later) selected samples in the solution path tend to be noisy (resp. clean) samples. When there is strong multi-collinearity and the irrepresentable condition is violated seriously, our proposed knockoff procedure helps to control the FSR. The higher accuracy of Knockoffs-SPR over SPR can be explained by its consistent improvements in the F1-score of sample selection capacity, as shown in Fig. 3.
(2) Compared with the random permutation strategy, Knockoffs-SPR with the most-confident permutation enjoys much better FSR control and works much better in the Sym. 40% and Asy. 40% noise scenarios. In the Sym. 80% noise scenario, the accuracy is comparable, but the most-confident permutation still enjoys much better FSR control. This result empirically demonstrates the advantage of the most-confident permutation over the random permutation.
(3) Running Knockoffs-SPR on each class separately is beneficial for both the FSR control capacity and the recognition capacity. When the noise rate is high, running Knockoffs-SPR on multiple classes cannot control the FSR properly via the estimated q.
(4) Using PCA on the features as pre-processing is beneficial for FSR control in all cases and increases the recognition capacity in some cases, especially when the noise rate is high.
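The irrepresentable value referenced above can be computed per left-out column; a minimal pure-Python sketch for a two-column support (the Gaussian design is illustrative, not the paper's features):

```python
import random

def inv2(a, b, c, d):
    """Inverse of the 2x2 matrix [[a, b], [c, d]]."""
    det = a * d - b * c
    return (d / det, -b / det, -c / det, a / det)

def irrepresentable_value(X, S, j):
    """||(X_S^T X_S)^{-1} X_S^T X_j||_1 for a 2-column support S = (s1, s2).
    Values well below 1 indicate the irrepresentable condition holds."""
    s1, s2 = S
    col = lambda k: [row[k] for row in X]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x1, x2, xj = col(s1), col(s2), col(j)
    ia, ib, ic, id_ = inv2(dot(x1, x1), dot(x1, x2), dot(x1, x2), dot(x2, x2))
    g1, g2 = dot(x1, xj), dot(x2, xj)
    return abs(ia * g1 + ib * g2) + abs(ic * g1 + id_ * g2)

rng = random.Random(0)
X = [[rng.gauss(0.0, 1.0) for _ in range(5)] for _ in range(300)]  # near-orthogonal design
vals = [irrepresentable_value(X, (0, 1), j) for j in (2, 3, 4)]
```

With an i.i.d. Gaussian design the columns are nearly orthogonal, so all values stay far below 1; strongly collinear columns would push them toward or past 1.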
TABLE 5
Ablation of the splitting algorithm in computation efficiency (for one epoch) on CIFAR-10.

Model                                Training Time
Knockoffs-SPR w/o split algorithm    about 6 h
Knockoffs-SPR w/ split algorithm     66 s
TABLE 6
Ablation (%) of training strategies on CIFAR-10.

Method                 Sym. 40%   Sym. 80%   Asy. 40%
Knockoffs-SPR - Self   92.5       24.3       92.2
Knockoffs-SPR - Semi   91.3       54.0       88.5
Knockoffs-SPR - EMA    94.5       83.8       93.2
Knockoffs-SPR          94.7       84.3       93.5
Influence of the splitting algorithm. In our framework, we propose a splitting algorithm that divides the whole training set into small pieces so that Knockoffs-SPR can run on them in parallel. In this part, we compare the running time with and without the splitting algorithm. Results are shown in Table 5. The splitting algorithm significantly reduces the computation time, which is important in large-scale applications.
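The class-aware splitting described above can be sketched as follows; the piece size of 10 classes matches the text, while the per-class chunk size is an assumption on our part:

```python
from collections import defaultdict

def split_into_pieces(labels, classes_per_piece=10, samples_per_class=100):
    """Partition sample indices into pieces; each piece covers `classes_per_piece`
    classes with at most `samples_per_class` indices per class, so the selection
    step can run on every piece independently (and thus in parallel)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    pieces = []
    # chunk the class list, then chunk each class's samples
    for c0 in range(0, len(classes), classes_per_piece):
        group = classes[c0:c0 + classes_per_piece]
        depth = max(len(by_class[c]) for c in group)
        for start in range(0, depth, samples_per_class):
            piece = []
            for c in group:
                piece.extend(by_class[c][start:start + samples_per_class])
            if piece:
                pieces.append(piece)
    return pieces

labels = [i % 20 for i in range(4000)]   # 20 classes, 200 samples each
pieces = split_into_pieces(labels)       # 4 pieces of 10 classes x 100 samples
```

Since each piece is bounded in size, the per-piece regression cost stays constant as the dataset grows, and pieces can be dispatched to separate workers.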
Influence of network training strategies. To better train the network, we adopt a self-supervised pre-trained backbone and a semi-supervised learning framework with an EMA update model. In this part, we test the influence of these strategies on CIFAR-10 under different noise scenarios. Concretely, we compare the full framework with Knockoffs-SPR - Self, which uses a randomly initialized backbone; Knockoffs-SPR - Semi, which uses supervised training; and Knockoffs-SPR - EMA, which does not use the EMA update model. Results are summarized in Table 6. We find that: (1) self-supervised pre-training is important in high-noise-rate scenarios, while for the other settings it is not as essential; (2) semi-supervised training consistently improves the recognition capacity, indicating the utility of leveraging the support of noisy data; (3) the EMA model slightly improves the recognition capacity.
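The EMA update model referenced above maintains exponentially averaged copies of the network weights; a minimal sketch (the momentum values are assumptions for illustration):

```python
def ema_update(ema_weights, model_weights, momentum=0.999):
    """One EMA step: ema <- momentum * ema + (1 - momentum) * model."""
    return [momentum * e + (1.0 - momentum) * w
            for e, w in zip(ema_weights, model_weights)]

# toy trace: the EMA copy lags behind a rapidly changing "weight"
ema = [0.0]
for step in range(1, 5):
    ema = ema_update(ema, [float(step)], momentum=0.5)
```

The averaged copy changes more smoothly than the raw weights, which is why evaluating with it tends to be slightly more stable.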
[Figure 4: grid of falsely selected CIFAR-10 images, each annotated with its labeled class and its real class (airplane/frog, automobile/truck, bird/deer, cat/dog, deer/cat, dog/cat, frog/deer, horse/dog, ship/airplane, truck/ship).]

Fig. 4. Qualitative results of falsely selected examples by Knockoffs-SPR. The black words are the labeled classes, while the real classes are denoted by red words.
Qualitative visualization. We randomly visualize some falsely selected examples from CIFAR-10 in Fig. 4. Most of these cases exhibit patterns that confuse the noisy label with the true label, causing Knockoffs-SPR to falsely identify them as clean samples.
7 CONCLUSION

This paper proposes a statistical sample selection framework – Scalable Penalized Regression with Knockoff Filters (Knockoffs-SPR) – to identify noisy data with a controlled false selection rate. Specifically, we formulate an equivalent leave-one-out t-test approach as a penalized linear model, in which non-zero mean-shift parameters serve as indicators of noisy data. We propose a delicate Knockoffs-SPR algorithm to identify clean samples such that the false selection rate is controlled by a user-provided upper bound. This upper bound is proved theoretically and works well in empirical results. Experiments on several synthetic and real-world datasets demonstrate the effectiveness of Knockoffs-SPR.
REFERENCES

[1] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, "Understanding deep learning requires rethinking generalization," in ICLR, 2017.
[2] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, "Learning from massive noisy labeled data for image classification," in CVPR, 2015.
[3] J. Goldberger and E. Ben-Reuven, "Training deep neural-networks using a noise adaptation layer," in ICLR, 2017.
[4] X. Chen and A. Gupta, "Webly supervised learning of convolutional networks," in ICCV, 2015.
[5] B. Han, J. Yao, G. Niu, M. Zhou, I. W. Tsang, Y. Zhang, and M. Sugiyama, "Masking: a new perspective of noisy supervision," in NeurIPS, 2018.
[6] A. Ghosh, H. Kumar, and P. Sastry, "Robust loss functions under label noise for deep neural networks," in AAAI, 2017.
[7] Z. Zhang and M. R. Sabuncu, "Generalized cross entropy loss for training deep neural networks with noisy labels," in NeurIPS, 2018.
[8] Y. Wang, X. Ma, Z. Chen, Y. Luo, J. Yi, and J. Bailey, "Symmetric cross entropy for robust learning with noisy labels," in ICCV, 2019.
[9] Y. Lyu and I. W. Tsang, "Curriculum loss: Robust learning and generalization against label corruption," in ICLR, 2020.
[10] H. Song, M. Kim, and J.-G. Lee, "Selfie: Refurbishing unclean samples for robust deep learning," in ICML, 2019.
[11] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. W. Tsang, and M. Sugiyama, "Co-teaching: Robust training of deep neural networks with extremely noisy labels," in NeurIPS, 2018.
[12] L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei, "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels," in ICML, 2018.
[13] P. Chen, B. B. Liao, G. Chen, and S. Zhang, "Understanding and utilizing deep neural networks trained with noisy labels," in ICML, 2019.
[14] Y. Shen and S. Sanghavi, "Learning with bad training data via iterative trimmed loss minimization," in ICML, 2019.
[15] X. Yu, B. Han, J. Yao, G. Niu, I. Tsang, and M. Sugiyama, "How does disagreement help generalization against label corruption?" in ICML, 2019.
[16] D. T. Nguyen, C. K. Mummadi, T. P. N. Ngo, T. H. P. Nguyen, L. Beggel, and T. Brox, "Self: Learning to filter noisy labels with self-ensembling," in ICLR, 2020.
[17] T. Zhou, S. Wang, and J. Bilmes, "Robust curriculum learning: From clean label detection to noisy label self-correction," in ICLR, 2021.
[18] P. Wu, S. Zheng, M. Goswami, D. N. Metaxas, and C. Chen, "A topological filter for learning with label noise," in NeurIPS, 2020.
[19] W. Sanford, "Applied linear regression," John Wiley & Sons, 1985.
[20] Y. She and A. B. Owen, "Outlier detection using nonconvex penalized regression," Journal of the American Statistical Association, 2011.
[21] J. Neyman and E. L. Scott, "Consistent estimates based on partially consistent observations," Econometrica: Journal of the Econometric Society, 1948.
[22] J. Kiefer and J. Wolfowitz, "Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters," The Annals of Mathematical Statistics, 1956.
[23] D. Basu, "On the elimination of nuisance parameters," in Selected Works of Debabrata Basu, 2011.
[24] M. Moreira, "A maximum likelihood method for the incidental parameter problem," National Bureau of Economic Research, Tech. Rep., 2008.
[25] J. Fan, R. Tang, and X. Shi, "Partial consistency with sparse incidental parameters," Statistica Sinica, 2018.
[26] Y. Fu, T. M. Hospedales, T. Xiang, J. Xiong, S. Gong, Y. Wang, and Y. Yao, "Robust subjective visual property prediction from crowdsourced pairwise labels," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
[27] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
[28] Y. Wang, C. Xu, C. Liu, L. Zhang, and Y. Fu, "Instance credibility inference for few-shot learning," in CVPR, 2020.
[29] Y. Wang, L. Zhang, Y. Yao, and Y. Fu, "How to trust unlabeled data? instance credibility inference for few-shot learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[30] E. Simpson and I. Gurevych, "Scalable bayesian preference learning for crowds," Machine Learning, 2020.
[31] Y. Wang, X. Sun, and Y. Fu, "Scalable penalized regression for noise detection in learning with noisy labels," in CVPR, 2022.
[32] M. J. Wainwright, "Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso)," IEEE Transactions on Information Theory, 2009.
[33] P. Zhao and B. Yu, "On model selection consistency of lasso," Journal of Machine Learning Research, 2006.
[34] R. F. Barber and E. J. Candès, "Controlling the false discovery rate via knockoffs," The Annals of Statistics, vol. 43, no. 5, pp. 2055–2085, 2015.
[35] R. Dai and R. Barber, "The knockoff filter for fdr control in group-sparse and multitask regression," in ICML. PMLR, 2016, pp. 1851–1859.
[36] R. F. Barber and E. J. Candès, "A knockoff filter for high-dimensional selective inference," The Annals of Statistics, vol. 47, no. 5, pp. 2504–2537, 2019. [Online]. Available: https://doi.org/10.1214/18-AOS1755
[37] Y. Cao, X. Sun, and Y. Yao, "Controlling the false discovery rate in transformational sparsity: Split knockoffs," arXiv preprint arXiv:2103.16159, 2021.
[38] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, "Cutmix: Regularization strategy to train strong classifiers with localizable features," in ICCV, 2019.
[39] N. Simon, J. Friedman, and T. Hastie, "A blockwise descent algorithm for group-penalized multiresponse and multinomial regression," arXiv preprint arXiv:1311.6529, 2013.
[40] Q. Xu, J. Xiong, X. Cao, Q. Huang, and Y. Yao, "Evaluating visual properties via robust hodgerank," International Journal of Computer Vision, pp. 1–22, 2021.
[41] Q. Xu, J. Xiong, X. Cao, and Y. Yao, "False discovery rate control and statistical quality assessment of annotators in crowdsourced ranking," in ICML. PMLR, 2016, pp. 1282–1291.
[42] X. Chen and K. He, "Exploring simple siamese representation learning," in CVPR, 2021, pp. 15750–15758.
[43] R. Tanno, A. Saeedi, S. Sankaranarayanan, D. C. Alexander, and N. Silberman, "Learning from noisy labels by regularized estimation of annotator confusion," in CVPR, 2019.
[44] A. K. Menon, A. S. Rawat, S. J. Reddi, and S. Kumar, "Can gradient clipping mitigate label noise?" in ICLR, 2020.
[45] X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang, "Robust early-learning: Hindering the memorization of noisy labels," in ICLR, 2021.
[46] X. Zhou, X. Liu, C. Wang, D. Zhai, J. Jiang, and X. Ji, "Learning with noisy labels via sparse regularization," in ICCV, 2021.
[47] S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof, "Combating label noise in deep learning using abstention," in ICML, 2019.
[48] D. Tanaka, D. Ikami, T. Yamasaki, and K. Aizawa, "Joint optimization framework for learning with noisy labels," in CVPR, 2018.
[49] J. Li, R. Socher, and S. C. Hoi, "Dividemix: Learning with noisy labels as semi-supervised learning," in ICLR, 2020.
[50] M. Ren, W. Zeng, B. Yang, and R. Urtasun, "Learning to reweight examples for robust deep learning," in ICML, 2018.
[51] Y. Wang, W. Liu, X. Ma, J. Bailey, H. Zha, L. Song, and S.-T. Xia, "Iterative learning with open-set noisy labels," in CVPR, 2018.
[52] K. Lee, S. Yun, K. Lee, H. Lee, B. Li, and J. Shin, "Robust inference via generative classifiers for handling noisy labels," in ICML, 2019.
[53] H. Dong, Z. Sun, Y. Fu, S. Zhong, Z. Zhang, and Y.-G. Jiang, "Extreme vocabulary learning," Frontiers of Computer Science, vol. 14, no. 6, pp. 1–12, 2020.
[54] A. Veit, N. Alldrin, G. Chechik, I. Krasin, A. Gupta, and S. Belongie, "Learning from noisy large-scale datasets with minimal supervision," in CVPR, 2017.
[55] G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, "Making deep neural networks robust to label noise: A loss correction approach," in CVPR, 2017.
[56] E. Arazo, D. Ortego, P. Albert, N. O'Connor, and K. McGuinness, "Unsupervised label noise modeling and loss correction," in ICML, 2019.
[57] A. Vahdat, "Toward robustness against label noise in training deep discriminative neural networks," in NeurIPS, 2017.
+ [58] Y. Li, J. Yang, Y. Song, L. Cao, J. Luo, and L.-J. Li, “Learning from
2375
+ noisy labels with distillation,” in ICCV, 2017. 5.1
2376
+ [59] K. Yi and J. Wu, “Probabilistic end-to-end noise correction for
2377
+ learning with noisy labels,” in CVPR, 2019. 5.1, 6.1, 6.2
2378
+ [60] J. Fan and J. Lv, “A selective overview of variable selection in high
2379
+ dimensional feature space,” Statistica Sinica, 2010. 5.2
2380
+ [61] E.
2381
+ Candes,
2382
+ Y.
2383
+ Fan,
2384
+ L.
2385
+ Janson,
2386
+ and
2387
+ J.
2388
+ Lv,
2389
+ “Panning
2390
+ for
2391
+ gold:‘model-x’knockoffs for high dimensional controlled variable
2392
+ selection,” Journal of the Royal Statistical Society: Series B (Statistical
2393
+ Methodology), vol. 80, no. 3, pp. 551–577, 2018. 5.3
2394
+ [62] A. Krizhevsky, G. Hinton et al., “Learning multiple layers of
2395
+ features from tiny images,” Master’s thesis, University of Tront, 2009.
2396
+ 6
2397
+ [63] W. Li, L. Wang, W. Li, E. Agustsson, and L. Van Gool, “Webvision
2398
+ database: Visual learning and understanding from web data,”
2399
+ arXiv preprint arXiv:1708.02862, 2017. 6
2400
+ [64] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for
2401
+ image recognition,” in CVPR, 2016. 6
2402
+ [65] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-
2403
+ v4, inception-resnet and the impact of residual connections on
2404
+ learning,” in AAAI, 2017. 6
2405
+ [66] D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S.
2406
+ Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio et al., “A
2407
+ closer look at memorization in deep networks,” in ICML, 2017. 6.1
2408
+ [67] S. E. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and
2409
+ A. Rabinovich, “Training deep neural networks on noisy labels
2410
+ with bootstrapping,” in ICLR (Workshop), 2015. 6.1
2411
+ [68] E. Malach and S. Shalev-Shwartz, “Decoupling” when to update”
2412
+ from” how to update”,” in NeurIPS, 2017. 6.1, 6.2
2413
+ [69] X. Ma, Y. Wang, M. E. Houle, S. Zhou, S. Erfani, S. Xia,
2414
+ S. Wijewickrema, and J. Bailey, “Dimensionality-driven learning
2415
+ with noisy labels,” in ICML, 2018. 6.2
2416
+ [70] W. Zhang, Y. Wang, and Y. Qiao, “Metacleaner: Learning
2417
+ to hallucinate clean representations for noisy-labeled visual
2418
+ recognition,” in Proceedings of the IEEE/CVF Conference on Computer
2419
+ Vision and Pattern Recognition, 2019, pp. 7373–7382. 6.2
2420
+ [71] J. Li, Y. Wong, Q. Zhao, and M. S. Kankanhalli, “Learning to learn
2421
+ from noisy labeled data,” in Proceedings of the IEEE/CVF Conference
2422
+ on Computer Vision and Pattern Recognition, 2019, pp. 5051–5059. 6.2
2423
+ Yikai Wang is a PhD candidate at the School of Data Science, Fudan University, under the supervision of Prof. Yanwei Fu. He received a Bachelor’s degree in mathematics from the School of Mathematical Sciences, Fudan University, in 2019. He has published 1 IEEE TPAMI paper and 2 CVPR papers. His current research interests include theoretically guaranteed machine learning algorithms and applications to computer vision.
+ Yanwei Fu received his PhD degree from the Queen Mary University of London in 2014. He worked as a post-doctoral researcher at Disney Research, Pittsburgh, PA, from 2015 to 2016. He is currently a tenure-track professor at Fudan University. He was appointed as the Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning. He has published more than 80 journal/conference papers, including in IEEE TPAMI, TMM, ECCV, and CVPR. His research interests are one-shot/meta-learning, learning-based 3D reconstruction, and image and video understanding in general.
+ Xinwei Sun is currently an assistant professor at the School of Data Science, Fudan University. He received his Ph.D. from the School of Mathematical Sciences at Peking University in 2018. His research interests mainly focus on high-dimensional statistics and causal inference, with their applications in machine learning and medical imaging.
+ 
+ Supplementary Material
+ Yikai Wang, Yanwei Fu, and Xinwei Sun.
+ In this supplementary material, we formally present the proof of the FSR control theorem of knockoff-SPR in Sec. 8. For consistency, we also provide the proof of the noisy set recovery theorem of SPR in Sec. 9. Some additional experimental results are provided in Sec. 10.
+ 8 FSR CONTROL THEOREM OF KNOCKOFF-SPR
+ Recall that we are solving the problem
+ \begin{cases} \frac{1}{2}\big\|Y_2 - X_2\tilde{\beta}_1 - \gamma_2\big\|_F^2 + \sum_j P(\gamma_{2,j}; \lambda), \\ \frac{1}{2}\big\|\tilde{Y}_2 - X_2\tilde{\beta}_1 - \tilde{\gamma}_2\big\|_F^2 + \sum_j P(\tilde{\gamma}_{2,j}; \lambda). \end{cases} \quad (26)
+ We introduce the following lemma from [1] and [2].
+ Lemma 3. Suppose that B_1, \ldots, B_n are independent variables, with B_i \sim \mathrm{Bernoulli}(\rho_i) for each i, where \min_i \rho_i \ge \rho > 0. Let J be a stopping time in reverse time with respect to the filtration \{\mathcal{F}_j\}, where
+ \mathcal{F}_j = \sigma\big(\{B_1 + \cdots + B_j, B_{j+1}, \ldots, B_n\}\big).
+ Then
+ \mathbb{E}\left[\frac{1 + J}{1 + B_1 + \cdots + B_J}\right] \le \rho^{-1}.
+ Proof. We first follow [1] to prove the case when \{B_i\} are i.i.d. variables with B_i \sim \mathrm{Bernoulli}(\rho), where \rho > 0. Then we follow [2] to generalize the conclusion to the non-identical case.
+ Define the stochastic process
+ M_j := \frac{1 + j}{1 + S_j} \quad \text{with} \quad S_j := B_1 + \cdots + B_j. \quad (27)
+ We show that \{M_j\} is a super-martingale with respect to the reverse filtration \{\mathcal{F}_j\}. It is trivial that \{M_j\} is \{\mathcal{F}_j\}-adapted and that \{\mathcal{F}_j\} is a reverse filtration, i.e., a decreasing sequence
+ \mathcal{F}_j \subset \mathcal{F}_{j-1} \subset \cdots \quad (28)
+ with each \mathcal{F}_j a sub-\sigma-algebra of \sigma(\{B_i\}_{i=1}^n). Further, we have \mathbb{E}[|M_j|] \le 1 + j \le 1 + n < \infty with fixed n. Now we bound the conditional expectation \mathbb{E}[M_j \mid \mathcal{F}_{j+1}]. Since \{B_k\}_{k=1}^{j+1} are i.i.d. variables and thus exchangeable when conditioned on \mathcal{F}_{j+1}, we have
+ \mathbb{P}(B_{j+1} = 1 \mid \mathcal{F}_{j+1}) = \frac{S_{j+1}}{j + 1}. \quad (29)
+ When S_{j+1} = 0, it is natural that S_j = 0 and thus M_j = 1 + j < 1 + (j + 1) = M_{j+1}. When S_{j+1} > 0, we have
+ \mathbb{E}[M_j \mid \mathcal{F}_{j+1}] = \frac{1 + j}{1 + S_{j+1} - 1}\,\mathbb{P}(B_{j+1} = 1 \mid \mathcal{F}_{j+1}) + \frac{1 + j}{1 + S_{j+1}}\,\mathbb{P}(B_{j+1} = 0 \mid \mathcal{F}_{j+1}) = \frac{1 + j}{S_{j+1}} \cdot \frac{S_{j+1}}{j + 1} + \frac{1 + j}{1 + S_{j+1}} \cdot \frac{j + 1 - S_{j+1}}{j + 1} = \frac{1 + (j + 1)}{1 + S_{j+1}} = M_{j+1}. \quad (30)
+ Hence \mathbb{E}[M_j \mid \mathcal{F}_{j+1}] \le M_{j+1}, which finishes the proof that \{M_j\} is a super-martingale. Then, by Doob's optional sampling theorem [3], we have
+ \mathbb{E}[M_J] \le \mathbb{E}[M_n]. \quad (31)
+ Finally, we have
+ \mathbb{E}[M_n] = \mathbb{E}\left[\frac{1 + n}{1 + S_n}\right] = (1 + n) \sum_{m=0}^{n} \frac{1}{1 + m} \cdot \frac{n!}{m!(n - m)!}\,\rho^m (1 - \rho)^{n - m} = \rho^{-1}\big(1 - (1 - \rho)^{n + 1}\big) \le \rho^{-1}. \quad (32)
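The closed form in Eq. (32) can be checked numerically. The snippet below is an illustrative check (not part of the original derivation): it evaluates the binomial sum directly and compares it with ρ⁻¹(1 − (1 − ρ)ⁿ⁺¹).

```python
import math

def expected_Mn(n, rho):
    """E[(1+n)/(1+S_n)] for S_n ~ Binomial(n, rho), by direct summation."""
    return (1 + n) * sum(
        1.0 / (1 + m) * math.comb(n, m) * rho**m * (1 - rho) ** (n - m)
        for m in range(n + 1)
    )

for n in (5, 20, 100):
    for rho in (0.1, 0.5, 0.9):
        closed_form = (1 - (1 - rho) ** (n + 1)) / rho
        # the sum matches the closed form of Eq. (32)...
        assert abs(expected_Mn(n, rho) - closed_form) < 1e-9
        # ...and is bounded by 1/rho
        assert expected_Mn(n, rho) <= 1 / rho + 1e-12
```

The closed form follows from the identity C(n, m)/(1 + m) = C(n+1, m+1)/(n+1), which telescopes the sum into a Binomial(n+1, ρ) tail.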
+ Now it suffices to show that the conclusion also holds for non-identical Bernoulli variables. Following [2], for each B_i \sim \mathrm{Bernoulli}(\rho_i) we construct disjoint Borel sets \{A_j^i\}_{j=1}^4 such that \mathbb{R} = \cup_{j=1}^4 A_j^i, with (the probabilities below are with respect to an underlying random variable \xi_i)
+ \mathbb{P}(A_1^i) = 1 - \rho_i; \quad \mathbb{P}(A_2^i) = \frac{\rho(1 - \rho_i)}{1 - \rho}; \quad \mathbb{P}(A_3^i) = \frac{\rho(\rho_i - \rho)}{1 - \rho}; \quad \mathbb{P}(A_4^i) = \rho_i - \rho. \quad (33)
+ Define U_i = A_1^i \cup A_2^i, V_i = A_2^i \cup A_3^i, and G_i = A_2^i \cup A_3^i \cup A_4^i. Based on this specific construction we can set B_i = 1\{\xi_i \in G_i\}. Further define Q_i = 1\{\xi_i \in V_i\} and the random set A = \{i : \xi_i \in U_i\}. Then we have
+ Q_i \cdot 1\{i \in A\} + 1\{i \notin A\} = 1\big\{\{\xi_i \in V_i \cap U_i\} \cup \{\xi_i \in U_i^C\}\big\} = 1\big\{\{\xi_i \in A_2^i\} \cup \{\xi_i \in A_3^i \cup A_4^i\}\big\} = 1\big\{\xi_i \in A_2^i \cup A_3^i \cup A_4^i\big\} = B_i. \quad (34)
+ Hence
+ \frac{1 + j}{1 + S_j} = \frac{1 + |\{i \le j : i \in A\}| + |\{i \le j : i \notin A\}|}{1 + \sum_{i \le j, i \in A} Q_i + |\{i \le j : i \notin A\}|} \le \frac{1 + |\{i \le j : i \in A\}|}{1 + \sum_{i \le j, i \in A} Q_i}. \quad (35)
+ The inequality holds because \frac{a + c}{b + c} \le \frac{a}{b} for 0 < b \le a and c \ge 0. Note that by definition
+ \mathbb{P}(Q_i = 1 \mid i \in A) = \mathbb{P}(\xi_i \in V_i \mid \xi_i \in U_i) = \frac{\mathbb{P}(A_2^i)}{\mathbb{P}(A_1^i \cup A_2^i)} = \rho = \mathbb{P}(Q_i = 1),
+ \mathbb{P}(Q_i = 1 \mid i \notin A) = \mathbb{P}(\xi_i \in V_i \mid \xi_i \notin U_i) = \frac{\mathbb{P}(A_3^i)}{\mathbb{P}(A_3^i \cup A_4^i)} = \rho = \mathbb{P}(Q_i = 1), \quad (36)
+ indicating that the Q_i and A are independent.
+ For any fixed A, define \tilde{Q}_i = Q_i \cdot 1\{i \in A\} and the reverse filtration \tilde{\mathcal{F}}_j = \sigma(\{\sum_{k=1}^{j} \tilde{Q}_k, \tilde{Q}_{j+1}, \ldots, \tilde{Q}_n, A\}). Then, conditioned on A, the established result suggests that
+ \mathbb{E}\left[\frac{1 + |\{i \le j : i \in A\}|}{1 + \sum_{i \le j, i \in A} Q_i} \,\Big|\, A\right] \le \rho^{-1}. \quad (37)
+ Taking expectation over A finishes the proof.
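As a quick sanity check of Lemma 3 with non-identically distributed Bernoulli variables, one can estimate the expectation by simulation. This is illustrative only; J = n is used as the simplest valid stopping time, and the lemma then bounds the expectation by 1/min_i ρ_i.

```python
import random

def estimate_bound(rhos, trials=100_000, seed=0):
    """Monte-Carlo estimate of E[(1+n)/(1+S_n)] for independent
    B_i ~ Bernoulli(rho_i); Lemma 3 bounds it by 1/min_i rho_i."""
    rng = random.Random(seed)
    n = len(rhos)
    total = 0.0
    for _ in range(trials):
        s = sum(1 for r in rhos if rng.random() < r)
        total += (1 + n) / (1 + s)
    return total / trials

rhos = [0.4, 0.5, 0.6, 0.7, 0.8]   # min_i rho_i = 0.4, so the bound is 2.5
assert estimate_bound(rhos) <= 1 / min(rhos)
```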
+ 8.1 Proof of Theorem 2
+ Proof. We first control the FSR of the second subset. Specifically, we have
+ \mathrm{FSR}(T) \le \mathbb{E}\left[\frac{\#\{j : \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0\}}{1 + \#\{j : \gamma_j \ne 0 \text{ and } 0 < W_j \le T\}} \cdot \frac{1 + \#\{j : 0 < W_j \le T\}}{\#\{j : {-T} \le W_j < 0\} \vee 1}\right] \le q \cdot \mathbb{E}\left[\frac{\#\{j : \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0\}}{1 + \#\{j : \gamma_j \ne 0 \text{ and } 0 < W_j \le T\}}\right]. \quad (38)
+ The second inequality holds by the definition of T. Now it suffices to show that
+ \mathbb{E}\left[\frac{\#\{j : \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0\}}{1 + \#\{j : \gamma_j \ne 0 \text{ and } 0 < W_j \le T\}}\right] \le \frac{c}{c - 2}. \quad (39)
+ For \gamma_j \ne 0, we have a probability of \frac{1}{c-1} of getting a clean \tilde{\gamma}_j, leading to Z_j > Z_{j+n} with non-zero probability, and a probability of \frac{c-2}{c-1} of getting a noisy \tilde{\gamma}_j, in which case we have no information and hence assume equal probabilities of Z_j > Z_{j+n} and Z_j < Z_{j+n}. Then we have
+ \mathbb{P}(W_j > 0) = \mathbb{P}(W_j > 0 \mid \tilde{\gamma}_j^* \ne 0)\,\mathbb{P}(\tilde{\gamma}_j^* \ne 0) + \mathbb{P}(W_j > 0 \mid \tilde{\gamma}_j^* = 0)\,\mathbb{P}(\tilde{\gamma}_j^* = 0) \ge \frac{1}{2} \times \frac{c - 2}{c - 1} + 0 = \frac{c - 2}{2(c - 1)}. \quad (40)
+ Hence the random variable B_j := 1\{W_j > 0\} \sim \mathrm{Bernoulli}(\rho_j) for \gamma_j \ne 0, with \rho_j \ge (c - 2)/(2(c - 1)).
+ Now we consider all the W_j of non-null variables, and assume |W_1| \le \cdots \le |W_n| with abuse of subscripts. We have
+ \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0 \iff j \le J \text{ and } B_j = 0,
+ and
+ \gamma_j \ne 0 \text{ and } 0 < W_j \le T \iff j \le J \text{ and } B_j = 1.
+ Hence
+ \frac{\#\{j : \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0\}}{1 + \#\{j : \gamma_j \ne 0 \text{ and } 0 < W_j \le T\}} = \frac{(1 - B_1) + \cdots + (1 - B_J)}{1 + B_1 + \cdots + B_J} = \frac{1 + J}{1 + B_1 + \cdots + B_J} - 1. \quad (41)
+ If we can use Lemma 3, then
+ \mathbb{E}\left[\frac{\#\{j : \gamma_j \ne 0 \text{ and } {-T} \le W_j < 0\}}{1 + \#\{j : \gamma_j \ne 0 \text{ and } 0 < W_j \le T\}}\right] \le \rho^{-1} - 1 \le \frac{c}{c - 2}, \quad (42)
+ and we finally get
+ \mathrm{FSR}(T) \le q\,\frac{c}{c - 2} \quad (43)
+ as long as c > 2. Now it suffices to show that our random variables \{B_j\} are mutually independent. This is straightforward, as we set P(\gamma_{2,j}; \lambda) as a sparse penalty for each row \gamma_{2,j} in Eq. (26), respectively. The problem of Eq. (26) is then a combination of independent sub-problems for each row, and the solution only depends on (x_{2,j}, y_{2,j}, \beta(\lambda; D_1)). Then, with fixed \beta(\lambda; D_1), the mutual independence naturally holds.
+ Finally, after we control the FSR of the second subset, we can obtain the estimate \beta(\lambda; D_2) based on the identified clean data in the second subset, and return to run knockoff-SPR on the first subset in a similar pipeline. Then, for the whole dataset, we have
+ \mathrm{FSR} = \mathbb{E}\left[\frac{|S_1 \cap C_1| + |S_2 \cap C_2|}{|C_1| + |C_2|}\right] \le \mathbb{E}\left[\frac{|S_1 \cap C_1|}{|C_1|}\right] + \mathbb{E}\left[\frac{|S_2 \cap C_2|}{|C_2|}\right] \le \frac{2c}{c - 2}\,q. \quad (44)
+ To control the FSR at level q, the threshold T should therefore be defined with q replaced by \frac{c - 2}{2c}\,q, which leads to Theorem 2.
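The two pieces of arithmetic used above can be verified exactly: that ρ⁻¹ − 1 = c/(c − 2) for ρ = (c − 2)/(2(c − 1)), and that running the procedure at level (c − 2)/(2c)·q turns the bound in Eq. (44) into exactly q. An illustrative exact check with rational arithmetic:

```python
from fractions import Fraction

for c in range(3, 12):  # the argument requires c > 2 classes
    rho = Fraction(c - 2, 2 * (c - 1))        # lower bound on P(W_j > 0), Eq. (40)
    assert 1 / rho - 1 == Fraction(c, c - 2)  # the bound used in Eq. (42)
    q = Fraction(1, 10)
    adjusted = Fraction(c - 2, 2 * c) * q     # the adjusted level of Theorem 2
    assert Fraction(2 * c, c - 2) * adjusted == q  # Eq. (44) becomes exactly q
```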
+ 9 NOISY SET RECOVERY THEOREM OF SPR
+ Recall that we are solving the problem
+ \min_{\vec{\gamma}} \big\|\vec{y} - \mathring{X}\vec{\gamma}\big\|_2^2 + \lambda \|\vec{\gamma}\|_1. \quad (45)
+ Proposition 4. Assume that \mathring{X}^\top \mathring{X} is invertible. If
+ \left\|\lambda \mathring{X}_{S^c}^\top \mathring{X}_S \big(\mathring{X}_S^\top \mathring{X}_S\big)^{-1} \hat{v}_S + \mathring{X}_{S^c}^\top (I - I_S)\big(\mathring{X}\vec{\varepsilon}\big)\right\|_\infty < \lambda \quad (46)
+ holds for all \hat{v}_S \in [-1, 1]^S, where I_S = \mathring{X}_S \big(\mathring{X}_S^\top \mathring{X}_S\big)^{-1} \mathring{X}_S^\top, then the estimator \hat{\vec{\gamma}} of Eq. (45) satisfies
+ \hat{S} = \mathrm{supp}\big(\hat{\vec{\gamma}}\big) \subseteq \mathrm{supp}\big(\vec{\gamma}^*\big) = S.
+ Moreover, if the sign consistency
+ \mathrm{sign}\big(\hat{\vec{\gamma}}_S\big) = \mathrm{sign}\big(\vec{\gamma}_S^*\big) \quad (47)
+ holds, then \hat{\vec{\gamma}} is the unique solution of (45) with the same sign as \vec{\gamma}^*.
+ 
+ Proof. Note that Eq. (45) is convex and has a global minimum. Denoting the objective of Eq. (45) by L, the solution of \partial L / \partial\vec{\gamma} = 0 is the unique minimizer. Hence we have
+ \frac{\partial L}{\partial \vec{\gamma}} = -\mathring{X}^\top\big(\vec{y} - \mathring{X}\vec{\gamma}\big) + \lambda v = 0, \quad (48)
+ where v = \partial\|\vec{\gamma}\|_1 / \partial\vec{\gamma}. Note that \|\vec{\gamma}\|_1 is non-differentiable, so we instead compute its sub-gradient. Further note that v_i = \partial\|\vec{\gamma}\|_1 / \partial\vec{\gamma}_i = \partial|\vec{\gamma}_i| / \partial\vec{\gamma}_i. Hence v_i = \mathrm{sign}(\vec{\gamma}_i) if \vec{\gamma}_i \ne 0 and v_i \in [-1, 1] if \vec{\gamma}_i = 0. To distinguish between the two cases, we assume v_i \in (-1, 1) if \vec{\gamma}_i = 0. Hence there exists \hat{v} \in \mathbb{R}^{n \times 1} such that
+ -\mathring{X}^\top\big(\vec{y} - \mathring{X}\hat{\vec{\gamma}}\big) + \lambda\hat{v} = 0, \quad (49)
+ with \hat{v}_i = \mathrm{sign}(\hat{\vec{\gamma}}_i) if i \in \hat{S} and \hat{v}_i \in (-1, 1) if i \in \hat{S}^c.
+ To obtain \hat{S} \subseteq S, we should have \hat{\vec{\gamma}}_i = 0 for i \in S^c, that is, |\hat{v}_i| < 1 for all i \in S^c, i.e.,
+ \left\|\mathring{X}_{S^c}^\top\big(\vec{y} - \mathring{X}_S\hat{\vec{\gamma}}_S\big)\right\|_\infty < \lambda. \quad (50)
+ For i \in S, we have
+ -\mathring{X}_S^\top\big(\vec{y} - \mathring{X}_S\hat{\vec{\gamma}}_S\big) + \lambda\hat{v}_S = 0. \quad (51)
+ If \mathring{X}^\top\mathring{X} is invertible, then
+ \hat{\vec{\gamma}}_S = \big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\big(\mathring{X}_S^\top\vec{y} - \lambda\hat{v}_S\big). \quad (52)
+ Recall that we have
+ \vec{y} = \mathring{X}_S\vec{\gamma}_S^* + \mathring{X}\vec{\varepsilon}. \quad (53)
+ Hence
+ \hat{\vec{\gamma}}_S = \vec{\gamma}_S^* + \delta_S, \quad \delta_S := \big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\big(\mathring{X}_S^\top\mathring{X}\vec{\varepsilon} - \lambda\hat{v}_S\big). \quad (54)
+ Plugging (54) and (53) into (50), we have
+ \left\|\mathring{X}_{S^c}^\top\mathring{X}\vec{\varepsilon} - \mathring{X}_{S^c}^\top\mathring{X}_S\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\big(\mathring{X}_S^\top\mathring{X}\vec{\varepsilon} - \lambda\hat{v}_S\big)\right\|_\infty < \lambda, \quad (55)
+ or equivalently
+ \left\|\lambda\mathring{X}_{S^c}^\top\mathring{X}_S\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\hat{v}_S + \mathring{X}_{S^c}^\top(I - I_S)\mathring{X}\vec{\varepsilon}\right\|_\infty < \lambda, \quad (56)
+ where I_S = \mathring{X}_S\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top. To ensure the sign consistency, replacing \hat{v}_S = \mathrm{sign}(\vec{\gamma}_S^*) in the inequality above leads to the final result.
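The KKT reasoning above can be illustrated on the simplest possible design. The sketch below is illustrative only and is not the setting of the theorem: it takes X̊ = I (orthonormal columns), in which case the lasso solution is exact coordinate-wise soft-thresholding and the support and sign conditions can be read off directly.

```python
import random

def soft_threshold(z, t):
    """Closed-form lasso solution per coordinate for the identity design:
    argmin_g (y - g)^2 + lam*|g| is soft(y, lam/2)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

rng = random.Random(0)
n = 50
gamma_star = [0.0] * n
for i in (3, 17, 41):                     # true support S
    gamma_star[i] = rng.choice([-2.0, 2.0])
y = [g + rng.gauss(0, 0.05) for g in gamma_star]   # small noise

lam = 1.0                                 # large relative to the noise level
gamma_hat = [soft_threshold(yi, lam / 2) for yi in y]

S_hat = {i for i, g in enumerate(gamma_hat) if g != 0.0}
S = {3, 17, 41}
assert S_hat <= S                                            # no false selection
assert all(gamma_hat[i] * gamma_star[i] > 0 for i in S_hat)  # sign consistency
```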
+ Lemma 5. Assume that \vec{\varepsilon} is independent sub-Gaussian with zero mean and bounded variance \mathrm{Var}(\vec{\varepsilon}_i) \le \sigma^2. Then with probability at least
+ 1 - 2cn\exp\left(-\frac{\lambda^2\eta^2}{2\sigma^2 \max_{i \in S^c}\|\mathring{X}_i\|_2^2}\right) \quad (57)
+ there holds
+ \left\|\mathring{X}_{S^c}^\top(I - I_S)\big(\mathring{X}\vec{\varepsilon}\big)\right\|_\infty \le \lambda\eta \quad (58)
+ and
+ \left\|\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top\mathring{X}\vec{\varepsilon}\right\|_\infty \le \frac{\lambda\eta}{\sqrt{C_{\min}}\,\max_{i \in S^c}\|\mathring{X}_i\|_2}. \quad (59)
+ Proof. Let z^c = \mathring{X}_{S^c}^\top(I - I_S)\big(\mathring{X}\vec{\varepsilon}\big); for each i \in S^c the variance can be bounded by
+ \mathrm{Var}(z_i^c) \le \sigma^2\,\mathring{X}_i^\top(I - I_S)^2\mathring{X}_i \le \sigma^2 \max_{i \in S^c}\|\mathring{X}_i\|_2^2.
+ Hoeffding's inequality implies that
+ \mathbb{P}\left(\left\|\mathring{X}_{S^c}^\top(I - I_S)\big(\mathring{X}\vec{\varepsilon}\big)\right\|_\infty \ge t\right) \le 2|S^c|\exp\left(-\frac{t^2}{2\sigma^2 \max_{i \in S^c}\|\mathring{X}_i\|_2^2}\right).
+ Setting t = \lambda\eta leads to the result.
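The Hoeffding-type tail used here can be sanity-checked empirically. The sketch below is illustrative (it uses Rademacher noise as a simple bounded sub-Gaussian example, σ = 1) and compares an empirical tail probability of a single linear functional xᵀε with the bound 2·exp(−t²/(2‖x‖₂²)).

```python
import math
import random

def empirical_tail(x, t, trials=100_000, seed=0):
    """P(|x^T eps| >= t) for i.i.d. Rademacher eps, estimated by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = sum(xi * (1 if rng.random() < 0.5 else -1) for xi in x)
        if abs(z) >= t:
            hits += 1
    return hits / trials

x = [0.3, -0.5, 0.2, 0.4, -0.1, 0.25]
norm_sq = sum(xi * xi for xi in x)
t = 1.0
hoeffding_bound = 2 * math.exp(-t * t / (2 * norm_sq))
assert empirical_tail(x, t) <= hoeffding_bound
```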
+ Now let z = \big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top\mathring{X}\vec{\varepsilon}; we have
+ \mathrm{Var}(z) = \big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top\mathring{X}\,\mathrm{Var}(\vec{\varepsilon})\,\mathring{X}^\top\mathring{X}_S\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1} \le \sigma^2\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1} \le \frac{\sigma^2}{C_{\min}}\,I.
+ Then
+ \mathbb{P}\left(\left\|\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top\mathring{X}\vec{\varepsilon}\right\|_\infty \ge t\right) \le 2|S|\exp\left(-\frac{t^2 C_{\min}}{2\sigma^2}\right).
+ Choosing
+ t = \frac{\lambda\eta}{\sqrt{C_{\min}}\,\max_{i \in S^c}\|\mathring{X}_i\|_2}, \quad (60)
+ there holds
+ \mathbb{P}\left(\left\|\big(\mathring{X}_S^\top\mathring{X}_S\big)^{-1}\mathring{X}_S^\top\mathring{X}\vec{\varepsilon}\right\|_\infty \ge \frac{\lambda\eta}{\sqrt{C_{\min}}\,\max_{i \in S^c}\|\mathring{X}_i\|_2}\right) \le 2|S|\exp\left(-\frac{\lambda^2\eta^2}{2\sigma^2 \max_{i \in S^c}\|\mathring{X}_i\|_2^2}\right).
+ 9.1 Proof of Theorem 1
+ Proof. The proof essentially follows the treatment in [4]. The results follow by applying Lemma 5 to Proposition 4. Inequality (46) holds if condition C2 and the first bound (58) hold, which proves the first part of the theorem. The sign consistency (47) holds if condition C3 and the second bound (59) hold, which gives the second part of the theorem.
+ It suffices to show that \hat{S} \subseteq S implies \hat{C}^c \subseteq C^c. Consider one instance i; there are three possible cases for \gamma_i^* \in \mathbb{R}^{1 \times c}: (1) \gamma_{i,j}^* \ne 0, \forall j \in [c]; (2) \gamma_{i,j}^* = 0, \forall j \in [c]; (3) \exists j, k \in [c] \text{ s.t. } \gamma_{i,j}^* = 0, \gamma_{i,k}^* \ne 0. If instance i follows case (1) or case (3), then i \in C^c. If it follows case (2), then i \in C, and the indices of all elements of \gamma_i are in S^c. Since \hat{S} \subseteq S, we have S^c \subseteq \hat{S}^c, so all elements of \hat{\gamma}_i are zero, hence i \in \hat{C}. Then we have \hat{C}^c \subseteq C^c.
+ 
+ 10 MORE EXPERIMENTAL RESULTS
+ Histogram of the median value of the IRR condition of SPR. We visualize the median of the irrepresentable (IRR) values, i.e., \{\|(X_S^\top X_S)^{-1} X_S^\top X_j\|_1\}_j, of the SPR final epoch on CIFAR10 under various noise scenarios in Fig. 5. As SPR is run on each piece split from the training set, we calculate the matrix \mathring{X}_{S^c}^\top\mathring{X}_S(\mathring{X}_S^\top\mathring{X}_S)^{-1} in the irrepresentable condition (C2 in Theorem 1) for each piece at the final epoch. The L1 norm of each row of this matrix is then the IRR value of the corresponding clean datum. The median of the IRR values within a single piece is used to construct the histogram. For the noise scenarios of Asy. 40% and Sym. 40%, the median IRR value is small, indicating weak collinearity between clean data and noisy data. In these cases, SPR has more chance to distinguish noisy data from clean data, and thus achieves good FSR control. For the noise scenario of Sym. 80%, the median IRR values are much larger, indicating strong multi-collinearity. Thus SPR can hardly distinguish between clean data and noisy data, leading to a high FSR.
+ [Figure 5 shows three histogram panels: Symmetric-40%, Symmetric-80%, and Asymmetric-40%.]
+ Fig. 5. Histogram of the median value of the IRR value of SPR on CIFAR10 with various noisy scenarios.
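The per-column IRR value ‖(X_Sᵀ X_S)⁻¹ X_Sᵀ x_j‖₁ can be computed directly. The sketch below is illustrative only: it uses a hand-rolled 2×2 Gram-matrix inverse to stay dependency-free, and evaluates the IRR value of a candidate column against a toy two-column clean design.

```python
def irr_value(X_S, x_j):
    """||(X_S^T X_S)^{-1} X_S^T x_j||_1 for a two-column design X_S.

    X_S is a list of rows [a, b]; x_j is one candidate column."""
    # Gram matrix G = X_S^T X_S (2x2)
    g11 = sum(r[0] * r[0] for r in X_S)
    g12 = sum(r[0] * r[1] for r in X_S)
    g22 = sum(r[1] * r[1] for r in X_S)
    det = g11 * g22 - g12 * g12
    # b = X_S^T x_j
    b1 = sum(r[0] * v for r, v in zip(X_S, x_j))
    b2 = sum(r[1] * v for r, v in zip(X_S, x_j))
    # Solve G w = b via the explicit 2x2 inverse
    w1 = (g22 * b1 - g12 * b2) / det
    w2 = (-g12 * b1 + g11 * b2) / det
    return abs(w1) + abs(w2)

X_S = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]   # two orthonormal clean columns
# Nearly orthogonal candidate column: weak collinearity, IRR value < 1
assert irr_value(X_S, [0.1, 0.2, 1.0]) < 1.0
# Candidate close to the span of X_S: strong collinearity, IRR value > 1
assert irr_value(X_S, [0.7, 0.7, 0.1]) > 1.0
```

Small IRR values (below 1) are exactly the regime in which the irrepresentable condition C2 can hold, matching the discussion of Fig. 5.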
+ REFERENCES
+ [1] Rina Foygel Barber and Emmanuel J. Candès. A knockoff filter for high-dimensional selective inference. The Annals of Statistics, 47(5):2504–2537, 2019. (document), 3.1, 5.3, 8, 8
+ [2] Yang Cao, Xinwei Sun, and Yuan Yao. Controlling the false discovery rate in transformational sparsity: Split knockoffs. arXiv preprint arXiv:2103.16159, 2021. (document), 3.1, 5.3, 8, 8, 8
+ [3] Joseph L. Doob. Stochastic processes. Wiley, New York, 1953. 8
+ [4] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (lasso). IEEE Transactions on Information Theory, 2009. (document), 2.3, 9.1
-tAyT4oBgHgl3EQfqfjJ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -6294,3 +6294,61 @@ _tE1T4oBgHgl3EQfDAJg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
6294
  q9E1T4oBgHgl3EQf2wUi/content/2301.03481v1.pdf filter=lfs diff=lfs merge=lfs -text
6295
  TNE2T4oBgHgl3EQfWwfK/content/2301.03838v1.pdf filter=lfs diff=lfs merge=lfs -text
6296
  9NAyT4oBgHgl3EQfQ_Zv/content/2301.00057v1.pdf filter=lfs diff=lfs merge=lfs -text
6297
+ 2tE2T4oBgHgl3EQfjAfh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6298
+ RdAzT4oBgHgl3EQfI_vE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6299
+ a9E2T4oBgHgl3EQfwAgi/content/2301.04096v1.pdf filter=lfs diff=lfs merge=lfs -text
6300
+ k9E2T4oBgHgl3EQfIwYe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6301
+ hNAyT4oBgHgl3EQfxfmy/content/2301.00668v1.pdf filter=lfs diff=lfs merge=lfs -text
6302
+ m9FKT4oBgHgl3EQfEy02/content/2301.11717v1.pdf filter=lfs diff=lfs merge=lfs -text
6303
+ GdE1T4oBgHgl3EQf_Aah/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6304
+ ztAzT4oBgHgl3EQfQ_vk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6305
+ 6dE3T4oBgHgl3EQfRAnz/content/2301.04418v1.pdf filter=lfs diff=lfs merge=lfs -text
6306
+ LNAyT4oBgHgl3EQfsfn-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6307
+ qNE2T4oBgHgl3EQf0giO/content/2301.04142v1.pdf filter=lfs diff=lfs merge=lfs -text
6308
+ 8dFAT4oBgHgl3EQfpB03/content/2301.08637v1.pdf filter=lfs diff=lfs merge=lfs -text
6309
+ S9E4T4oBgHgl3EQfLgwd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6310
+ TNE1T4oBgHgl3EQfIAO7/content/2301.02934v1.pdf filter=lfs diff=lfs merge=lfs -text
6311
+ q9FIT4oBgHgl3EQfxSv4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6312
+ TNE1T4oBgHgl3EQfIAO7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6313
+ HtA0T4oBgHgl3EQfB_9Y/content/2301.01983v1.pdf filter=lfs diff=lfs merge=lfs -text
6314
+ t9E3T4oBgHgl3EQfNgks/content/2301.04383v1.pdf filter=lfs diff=lfs merge=lfs -text
6315
+ ydAyT4oBgHgl3EQfa_fL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6316
+ 49E1T4oBgHgl3EQf6QXo/content/2301.03522v1.pdf filter=lfs diff=lfs merge=lfs -text
6317
+ i9E3T4oBgHgl3EQfJAlP/content/2301.04339v1.pdf filter=lfs diff=lfs merge=lfs -text
6318
+ QdE0T4oBgHgl3EQfUACA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6319
+ LtE3T4oBgHgl3EQfAwm8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6320
+ qNE2T4oBgHgl3EQf0giO/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6321
+ t9E3T4oBgHgl3EQfNgks/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6322
+ 49E1T4oBgHgl3EQf6QXo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6323
+ s9E1T4oBgHgl3EQfjgRh/content/2301.03263v1.pdf filter=lfs diff=lfs merge=lfs -text
6324
+ a9E2T4oBgHgl3EQfwAgi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6325
+ C9E1T4oBgHgl3EQfEANP/content/2301.02884v1.pdf filter=lfs diff=lfs merge=lfs -text
6326
+ K9E1T4oBgHgl3EQfYwRv/content/2301.03142v1.pdf filter=lfs diff=lfs merge=lfs -text
6327
+ n9FAT4oBgHgl3EQfdR2i/content/2301.08569v1.pdf filter=lfs diff=lfs merge=lfs -text
6328
+ PNE4T4oBgHgl3EQf-A73/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6329
+ atFST4oBgHgl3EQfBjj8/content/2301.13704v1.pdf filter=lfs diff=lfs merge=lfs -text
6330
+ ltE1T4oBgHgl3EQf0wVQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6331
+ kdAyT4oBgHgl3EQfkvi4/content/2301.00440v1.pdf filter=lfs diff=lfs merge=lfs -text
6332
+ ydE3T4oBgHgl3EQfmApc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6333
+ C9E1T4oBgHgl3EQfEANP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6334
+ vdFLT4oBgHgl3EQfjy_2/content/2301.12113v1.pdf filter=lfs diff=lfs merge=lfs -text
6335
+ q9E1T4oBgHgl3EQf2wUi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6336
+ EdE2T4oBgHgl3EQfSgfT/content/2301.03794v1.pdf filter=lfs diff=lfs merge=lfs -text
6337
+ EdFRT4oBgHgl3EQfBDfd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6338
+ CdE5T4oBgHgl3EQfTw8s/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6339
+ 8dFAT4oBgHgl3EQfpB03/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6340
+ StFJT4oBgHgl3EQfLywb/content/2301.11470v1.pdf filter=lfs diff=lfs merge=lfs -text
6341
+ 89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf filter=lfs diff=lfs merge=lfs -text
6342
+ ZtAzT4oBgHgl3EQfm_2G/content/2301.01573v1.pdf filter=lfs diff=lfs merge=lfs -text
6343
+ atFST4oBgHgl3EQfBjj8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6344
+ A9FJT4oBgHgl3EQfry2f/content/2301.11610v1.pdf filter=lfs diff=lfs merge=lfs -text
6345
+ StFJT4oBgHgl3EQfLywb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6346
+ 6dE3T4oBgHgl3EQfRAnz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6347
+ V9E1T4oBgHgl3EQfbQST/content/2301.03171v1.pdf filter=lfs diff=lfs merge=lfs -text
6348
+ y9E4T4oBgHgl3EQfYwwf/content/2301.05051v1.pdf filter=lfs diff=lfs merge=lfs -text
6349
+ ydE3T4oBgHgl3EQfmApc/content/2301.04612v1.pdf filter=lfs diff=lfs merge=lfs -text
6350
+ YdA0T4oBgHgl3EQfFv8F/content/2301.02035v1.pdf filter=lfs diff=lfs merge=lfs -text
6351
+ l9E2T4oBgHgl3EQfJAaa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6352
+ ZtFJT4oBgHgl3EQf7i35/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6353
+ EdE2T4oBgHgl3EQfSgfT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6354
+ CdE1T4oBgHgl3EQfpwUV/content/2301.03334v1.pdf filter=lfs diff=lfs merge=lfs -text
19E0T4oBgHgl3EQfugE5/content/tmp_files/2301.02605v1.pdf.txt ADDED
@@ -0,0 +1,1611 @@
 
 
+ Generic transversality of radially symmetric stationary solutions stable at infinity for parabolic gradient systems
+ Emmanuel Risler
+ January 9, 2023
+ This paper is devoted to the generic transversality of radially symmetric stationary solutions of nonlinear parabolic systems of the form
+ ∂_t w(x,t) = −∇V(w(x,t)) + ∆_x w(x,t),
+ where the space variable x is multidimensional and unbounded. It is proved that, generically with respect to the potential V, radially symmetric stationary solutions that are stable at infinity (in other words, that approach a minimum point of V at infinity in space) are transverse; as a consequence, the set of such solutions is discrete. This result can be viewed as the extension to higher space dimensions of the generic elementarity of symmetric standing pulses, proved in a companion paper. It justifies the generic character of the discreteness hypothesis concerning this set of stationary solutions, made in another companion paper devoted to the global behaviour of (time dependent) radially symmetric solutions stable at infinity for such systems.
+ 2020 Mathematics Subject Classification: 35K57, 37C20, 37C29.
+ Key words and phrases: parabolic gradient systems, radially symmetric stationary solutions, generic transversality, Morse–Smale theorem.
+ arXiv:2301.02605v1 [math.AP] 6 Jan 2023
+ Contents
+ 1 Introduction 3
+ 1.1 An insight into the main result 3
+ 1.2 Radially symmetric stationary solutions stable at infinity 3
+ 1.3 Differential systems governing radially symmetric stationary solutions 4
+ 1.4 Transversality of radially symmetric stationary solutions stable at infinity 7
+ 1.5 The space of potentials 8
+ 1.6 Main result 8
+ 1.7 Key differences with the generic transversality of standing pulses in space dimension one 9
+ 2 Preliminary properties 10
+ 2.1 Proof of Lemma 1.4 10
+ 2.2 Transversality of homogeneous radially symmetric stationary solutions stable at infinity 11
+ 2.3 Additional properties close to the origin 13
+ 2.4 Additional properties close to infinity 14
+ 3 Tools for genericity 15
+ 4 Generic transversality among potentials that are quadratic past a given radius 17
+ 4.1 Notation and statement 17
+ 4.2 Reduction to a local statement 17
+ 4.3 Proof of the local statement (Proposition 4.2) 18
+ 4.3.1 Setting 18
+ 4.3.2 Equivalent characterizations of transversality 19
+ 4.3.3 Checking hypothesis 1 of Theorem 4.2 of [1] 20
+ 4.3.4 Checking hypothesis 2 of Theorem 4.2 of [1] 21
+ 4.3.5 Conclusion 23
+ 5 Proof of the main results 24
+ 1 Introduction
+ 1.1 An insight into the main result
+ The purpose of this paper is to prove the generic transversality of radially symmetric stationary solutions stable at infinity for gradient systems of the form
+ (1.1) ∂_t w(x,t) = −∇V(w(x,t)) + ∆_x w(x,t),
+ where the time variable t is real, the space variable x lies in the spatial domain R^{d_sp} with d_sp an integer not smaller than 2, the state function (x,t) ↦ w(x,t) takes its values in R^{d_st} with d_st a positive integer, and the nonlinearity is the gradient of a scalar potential function V : R^{d_st} → R, which is assumed to be regular (of class at least C²). An insight into the main result of this paper (Theorem 1 on page 8) is provided by the following corollary.
+ Corollary 1.1. For a generic potential V, the following conclusions hold:
+ 1. every radially symmetric stationary solution stable at infinity of system (1.1) is robust with respect to small perturbations of V;
+ 2. the set of all such solutions is discrete.
+ The discreteness stated in conclusion 2 of this corollary is a required assumption for the main result of [4], which describes the global behaviour of radially symmetric (time dependent) solutions stable at infinity for the parabolic system (1.1). Corollary 1.1 provides a rigorous proof that this assumption holds generically with respect to V.
+ This paper can be viewed as a supplement to the article [1], which is devoted to the generic transversality of bistable travelling fronts and standing pulses stable at infinity for parabolic systems of the form (1.1) in (unbounded) space dimension one, and which provides a rigorous proof of the genericity of similar assumptions made in [2, 3, 5]. The ideas, the nature of the results, and the scheme of the proof are the same.
+ 1.2 Radially symmetric stationary solutions stable at infinity
+ A function u : [0,+∞) → R^{d_st}, r ↦ u(r), defines a radially symmetric stationary solution of the parabolic system (1.1) if and only if it satisfies, on (0,+∞), the (non-autonomous) differential system
+ (1.2) ü(r) = −((d_sp − 1)/r) u̇(r) + ∇V(u(r)),
+ where u̇ and ü stand for the first and second derivatives of r ↦ u(r), together with the limit
+ (1.3) u̇(r) → 0 as r → 0⁺.
+ Observe that, in this case, u(·) is actually the restriction to [0,+∞) of an even function in C³(R, R^{d_st}) which is a solution (on R) of the differential system (1.2) (the limit (1.3) ensures that equality (1.2) still makes sense and holds at r = 0). In other words, provided that condition (1.3) holds, it is equivalent to assume that system (1.2) holds on (0,+∞) or on [0,+∞). By abuse of language, the terminology radially symmetric stationary solution of system (1.1) will refer, throughout the paper, to functions u : [0,+∞) → R^{d_st} satisfying conditions (1.2) and (1.3) (even if, formally, it is rather the function R^{d_sp} → R^{d_st}, x ↦ u(|x|), that fits this terminology).
+ Let us denote by Σ_min(V) the set of nondegenerate (local or global) minimum points of V; in symbols,
+ Σ_min(V) = {u ∈ R^{d_st} : ∇V(u) = 0 and D²V(u) > 0}.
+ Throughout the paper, the words minimum point will denote a local or global minimum point of a (potential) function.
+ Definition 1.2. A (global) solution (0,+∞) → R^{d_st}, r ↦ u(r), of the differential system (1.2) (in particular a radially symmetric stationary solution of system (1.1)) is said to be stable at infinity if u(r) approaches a point of Σ_min(V) as r goes to +∞. If this point of Σ_min(V) is denoted by u∞, then the solution is said to be stable close to u∞ at infinity.
+ Notation. For every u∞ in Σ_min(V), let S_{V,u∞} denote the set of radially symmetric stationary solutions of system (1.1) that are stable close to u∞ at infinity. In symbols,
+ S_{V,u∞} = {u : [0,+∞) → R^{d_st} : u satisfies (1.2) and (1.3) and u(r) → u∞ as r → +∞}.
+ Let
+ S⁰_{V,u∞} = {u(0) : u ∈ S_{V,u∞}},
+ and let
+ (1.4) S_V = ⋃_{u∞ ∈ Σ_min(V)} S_{V,u∞} and S⁰_V = ⋃_{u∞ ∈ Σ_min(V)} S⁰_{V,u∞}.
+ The following statement is an equivalent (simpler) formulation of conclusion 2 of Corollary 1.1.
+ Corollary 1.3. For a generic potential V, the subset S⁰_V of R^{d_st} is discrete.
+ 1.3 Differential systems governing radially symmetric stationary solutions
+ The second-order differential system (1.2) is equivalent to the (non-autonomous) 2d_st-dimensional first-order differential system
+ (1.5) u̇ = v, v̇ = −((d_sp − 1)/r) v + ∇V(u).
+ Introducing the auxiliary variables τ and c defined as
+ (1.6) τ = log(r) and c = 1/r,
+ the previous 2d_st-dimensional differential system (1.5) is equivalent to each of the following two (2d_st + 1)-dimensional autonomous differential systems:
+ (1.7) u_τ = rv, v_τ = −(d_sp − 1)v + r∇V(u), r_τ = r,
+ and
+ (1.8) u_r = v, v_r = −(d_sp − 1)cv + ∇V(u), c_r = −c².
+ Remark. Integrating the third equations of systems (1.7) and (1.8) yields
+ r = r₀ e^{τ−τ₀} and 1/c − 1/c₀ = r − r₀,
+ and the parameters τ₀ and c₀ (which determine in each case the origin of “time”) do not matter in principle, since those systems are autonomous. However, if the “initial conditions” r₀ and c₀ are positive (which is true for the solutions that describe radially symmetric stationary solutions of system (1.1)), it is natural to choose, in each case, the origins of time according to equalities (1.6), that is:
+ τ₀ = log(r₀) and c₀ = 1/r₀.
+ Properties close to the origin. System (1.7) provides an insight into the limit of system (1.5) as r goes to 0. The subspace R^{2d_st} × {0} (r equal to 0) is invariant under the flow of this system, and the system reduces on this invariant subspace to
+ (1.9) u_τ = 0, v_τ = −(d_sp − 1)v,
+ see Figure 1.1. For every u₀ in R^{d_st}, the point (u₀, 0_{R^{d_st}}, 0) is an equilibrium of system (1.7); let us denote by W^{u,0}_V(u₀) the (one-dimensional) unstable manifold of this equilibrium for this system, let
+ (1.10) W^{u,0,+}_V(u₀) = W^{u,0}_V(u₀) ∩ (R^{2d_st} × (0,+∞)),
+ and let
+ W^{u,0,+}_V = ⋃_{u₀ ∈ R^{d_st}} W^{u,0,+}_V(u₀).
+ The subspace
+ (1.11) S_sym = R^{d_st} × {0_{R^{d_st}}} × {0}
+ of R^{2d_st+1} can be seen as the higher-space-dimension analogue of the symmetry (reversibility) subspace R^{d_st} × {0_{R^{d_st}}} of R^{2d_st} (which is relevant for symmetric standing pulses in space dimension 1, see [1] and subsection 1.7 below); the set W^{u,0,+}_V can be seen as the unstable manifold of this subspace S_sym.
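The radial profile equation (1.2) can be explored numerically by shooting away from the singularity at r = 0. The following minimal sketch (an illustration, not part of the paper) integrates the first-order system (1.5) for a hypothetical scalar example (d_st = 1, d_sp = 3) with the double-well potential V(u) = (u² − 1)²/4, launching along the unstable direction u̇(r₀) ≈ (r₀/d_sp)∇V(u₀) (cf. the expansion (2.6)), and checks that the Hamiltonian (2.1) is non-increasing along the solution, as stated in (2.2).

```python
import numpy as np
from scipy.integrate import solve_ivp

d_sp = 3                                 # space dimension (assumption: d_sp >= 2)
V  = lambda u: 0.25 * (u**2 - 1.0)**2    # hypothetical double-well potential (d_st = 1)
dV = lambda u: u**3 - u                  # its gradient

def rhs(r, y):
    # First-order system (1.5): u' = v, v' = -((d_sp - 1)/r) v + dV(u).
    u, v = y
    return [v, -(d_sp - 1) / r * v + dV(u)]

u0 = 0.5                                 # "departure point" u(0)
r0 = 1e-6                                # start slightly off the singularity at r = 0
v0 = (r0 / d_sp) * dV(u0)                # launch along the unstable direction, cf. (2.6)
sol = solve_ivp(rhs, (r0, 50.0), [u0, v0], rtol=1e-10, atol=1e-12)

u, v = sol.y
h = v**2 / 2 - V(u)                      # Hamiltonian (2.1), non-increasing by (2.2)
print("integration ok:", sol.success,
      "- h non-increasing:", bool(np.all(np.diff(h) <= 1e-8)))
```

With these tolerances the computed Hamiltonian decreases monotonically (up to roundoff), matching the dissipation identity (2.2).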
+ Figure 1.1: Dynamics of the (equivalent) differential systems (1.7) (for r nonnegative finite) and (1.8) (for c = 1/r nonnegative finite) in R^{d_st} × R^{d_st} × [0,+∞] (this domain is three-dimensional if d_st is equal to 1, as on the figure). For the limit differential system (1.9) in the subspace r = 0 (in green), the trajectories are vertical and the solutions converge towards the horizontal u-axis, defined as S_sym in (1.11), which is the higher-space-dimension analogue of the symmetry subspace for symmetric standing pulses in space dimension 1. The point u∞ is a local minimum point of V, so that the point (u∞, 0_{R^{d_st}}) is a hyperbolic equilibrium for the limit differential system (1.12) in the subspace c = 0 ⟺ r = +∞ (in blue). Systems (1.7) and (1.8) are autonomous, but the quantity r (the quantity c) goes monotonically from 0 to +∞ (from +∞ to 0) for all the solutions in the subspace r > 0 ⟺ c > 0, so that those solutions can be parametrized by r (by c) as time. The unstable manifold W^{u,0,+}_V(u₀) is one-dimensional and is a transverse intersection between the unstable set W^{u,0,+}_V of the subspace {r = 0, v = 0_{R^{d_st}}} and the centre-stable manifold W^{cs,∞,+}_V(u∞) of the equilibrium (u∞, 0_{R^{d_st}}, c = 0). Proving the generic transversality of this intersection is the main goal of the paper. The dotted red curve is the projection onto the (u,r)-subspace of this intersection. The part of W^{cs,∞,+}_V(u∞) displayed on the figure can also be seen as the local centre-stable manifold W^{cs,∞,+}_{loc,V,ε₁,c₁}(u∞) defined in (2.10) (with u∞ equal to the point u∞,1 introduced there).
+ Properties close to infinity.
+ System (1.8) provides an insight into the limit of system (1.5) as r goes to +∞. The subspace R^{2d_st} × {0} of R^{2d_st+1} (c equal to 0, or in other words r equal to +∞) is invariant under the flow of this system, and the system reduces on this invariant subspace to
+ (1.12) u_r = v, v_r = ∇V(u).
+ For every u∞ in Σ_min(V), the point (u∞, 0_{R^{d_st}}, 0) is an equilibrium of system (1.8); let us consider its global centre-stable manifold in R^{2d_st} × (0,+∞), defined as
+ (1.13) W^{cs,∞,+}_V(u∞) = {(u₀, v₀, c₀) ∈ R^{2d_st} × (0,+∞) : the solution of system (1.8) with initial condition (u₀, v₀, c₀) at “time” r₀ = 1/c₀ is defined up to +∞ and goes to (u∞, 0, 0) as r goes to +∞}.
+ This set W^{cs,∞,+}_V(u∞) is a (d_st + 1)-dimensional submanifold of R^{2d_st} × (0,+∞) (see subsection 2.4).
+ Radially symmetric stationary solutions. Let us consider the involution
+ ι : R^{2d_st} × (0,+∞) → R^{2d_st} × (0,+∞), (u, v, r) ↦ (u, v, 1/r).
+ The following lemma, proved in subsection 2.1, formalizes the correspondence between the radially symmetric stationary solutions stable at infinity for system (1.1) and the manifolds defined above.
+ Lemma 1.4. Let u∞ be a point of Σ_min(V). A (global) solution [0,+∞) → R^{d_st}, r ↦ u(r), of system (1.2) belongs to S_{V,u∞} if and only if its trajectory (in R^{2d_st} × (0,+∞))
+ (1.14) {(u(r), u̇(r), r) : r ∈ (0,+∞)}
+ belongs to the intersection
+ (1.15) W^{u,0,+}_V ∩ ι⁻¹(W^{cs,∞,+}_V(u∞)).
+ 1.4 Transversality of radially symmetric stationary solutions stable at infinity
+ Definition 1.5. Let u∞ be a point of Σ_min(V). A radially symmetric stationary solution stable close to u∞ at infinity for system (1.1) (in other words, a function u of S_{V,u∞}) is said to be transverse if the intersection (1.15) is transverse, in R^{2d_st} × (0,+∞), along the trajectory (1.14).
+ Remark. The natural analogues of radially symmetric stationary solutions stable at infinity when the space dimension d_sp is equal to 1 are symmetric standing pulses stable at infinity (see Definition 1.5 of [1]), and the natural analogue for such pulses of Definition 1.5 above is their elementarity, not their transversality (see Definition 1.4 and Definition 1.6 of [1]). However, the transversality of a symmetric standing pulse (when the space dimension d_sp equals 1) makes little sense in higher space dimension, because of the singularity at r = 0 of the differential systems (1.2) and (1.5), or because of the related fact that the subspace {r = 0} is invariant for the differential system (1.7). For that reason, the adjective transverse (not elementary) is chosen to qualify the property considered in Definition 1.5 above.
+ 1.5 The space of potentials
+ For the remainder of the paper, let us take and fix an integer k not smaller than 1. Let us consider the space C^{k+1}_b(R^{d_st}, R) of functions R^{d_st} → R of class C^{k+1} which are bounded, as well as their derivatives of order not larger than k + 1, equipped with the norm
+ ∥W∥_{C^{k+1}_b} = max_{α multi-index, |α| ≤ k+1} ∥∂^α_u W∥_{L^∞(R^{d_st}, R)},
+ and let us endow the larger space C^{k+1}(R^{d_st}, R) with the following topology: for V in this space, a basis of neighbourhoods of V is given by the sets V + O, where O is an open subset of C^{k+1}_b(R^{d_st}, R) endowed with the topology defined by ∥·∥_{C^{k+1}_b} (which can be viewed as an extended metric). For comments concerning the choice of this topology, see subsection 1.4 of [1].
+ 1.6 Main result
+ The following generic transversality statement is the main result of this paper.
+ Theorem 1 (generic transversality of radially symmetric stationary solutions stable at infinity). There exists a generic subset G of (C^{k+1}(R^{d_st}, R), ∥·∥_{C^{k+1}_b}) such that, for every potential function V in G, every radially symmetric stationary solution stable at infinity of the parabolic system (1.1) is transverse.
+ Theorem 1 can be viewed as the extension to higher space dimensions (for radially symmetric solutions) of conclusion 2 of Theorem 1.7 of [1] (which is concerned with elementary standing pulses stable at infinity in space dimension 1). A short comparison between these two results and their proofs is provided in the next subsection. For more comments and a short historical review on transversality results in similar contexts, see subsection 1.6 of the same reference.
+ The core of the paper (section 4) is devoted to proving the conclusions of Theorem 1 among potentials which are quadratic past a certain radius (defined in (3.2)), as stated in Proposition 4.1. The extension to general potentials of C^{k+1}_b(R^{d_st}, R) is carried out in section 5.
+ Remark. As in [1] (see Theorem 1.8 of that reference), the same arguments could be called upon to prove that the following additional conclusions hold, generically with respect to the potential V:
+ 1. for every minimum point of V, the smallest eigenvalue of D²V at this minimum point is simple;
+ 2. every radially symmetric stationary solution stable at infinity of the parabolic system (1.1) approaches its limit at infinity tangentially to the eigenspace corresponding to the smallest eigenvalue of D²V at this point.
+ 1.7 Key differences with the generic transversality of standing pulses in space dimension one
+ Table 1.1 lists the key differences between the proof of the generic elementarity of symmetric standing pulses carried out in [1] and the proof of the generic transversality of radially symmetric stationary solutions carried out in the present paper (implicitly, the other steps/features of the proofs are similar or identical). The state dimension, which is simply denoted by d in [1], is here denoted by d_st in both cases. Some of the notation/rigour is lightened.
+ | Symmetric standing pulse | Radially symmetric stationary solution
+ Critical point at infinity | critical point e, E = (e, 0_{R^{d_st}}) | minimum point u∞
+ Symmetry subspace S_sym | {(u,v) ∈ R^{2d_st} : v = 0}, dimension d_st | {(u,v,r) ∈ R^{2d_st+1} : (v,r) = (0,0)}, dimension d_st
+ Differential system governing the profiles | autonomous, conservative, regular at S_sym | non-autonomous, dissipative, singular at the reversibility subspace
+ Direction of the flow | E → S_sym | S_sym → u∞
+ Invariant manifold at infinity | W^u(E), dimension d_st − m(e) | W^{cs,∞,+}(u∞), dimension d_st + 1
+ Invariant manifold at the symmetry subspace | none | W^{u,0,+}, dimension d_st + 1
+ Transversality | W^u(E) ⋔ S_sym | W^{cs,∞,+}(u∞) ⋔ W^{u,0,+}
+ Transversality of spatially homogeneous solutions | irrelevant | Proposition 2.2
+ Interval I_once (values reached only once) | “anywhere” | close to S_sym
+ M (departure set of Φ) | parametrization of ∂W^u_{loc,V}(E) and time, dimension d_st − m(e) | S_sym and W^{cs,∞,+}_{loc}(u∞) at r = N, dimension 2d_st
+ N (arrival set of Φ) | R^{2d_st} | R^{2d_st} × R^{2d_st}
+ W (target manifold) | S_sym | diagonal of N
+ dim(M) − codim(W) | −m(e) | 0
+ Condition to be fulfilled by the perturbation W | (DΦ(W))·(0, ψ) ≠ 0 | (DΦ_u(W))·(φ, ψ) ≠ 0
+ Perturbation W, case 3 | precluded | W(u₀) ≠ 0
+ Table 1.1: Formal comparison between the generic elementarity of symmetric standing pulses (space dimension 1) proved in [1] and the generic transversality of radially symmetric stationary solutions (higher space dimension d_sp) proved in the present paper.
+ Here are a few additional comments about these differences.
+ Concerning the critical point at infinity, u∞ is assumed (here) to be a minimum point, whereas (in [1]) the Morse index of e is arbitrary. Indeed, if the Morse index m(u∞) of u∞ were positive, then the dimension of the centre-stable manifold W^{cs,∞,+}_V(u∞) would be equal to d_st + m(u∞) + 1; as a consequence, proving the transversality of the intersection (1.15) in that case would require more stringent regularity assumptions on V (see hypothesis 1 of Theorem 4.2 of [1]), while nothing particularly useful could be derived from this transversality. On the other hand, assuming that u∞ is a minimum point allows one to view its local centre-stable manifold as a graph (u, c) ↦ v (see Proposition 2.4), which is slightly simpler.
+ Concerning the interval I_once providing values u reached “only once” by the profile (Lemma 2.3), the proof of the present paper takes advantage of the dissipation to find a convenient interval close to the “departure point” u₀, as was done in [1] for travelling fronts (whereas, for standing pulses, the interval is to be found “anywhere”, thanks to the conservative nature of the differential system governing the profiles, see conclusion 1 of Proposition 3.3 of [1]).
+ Concerning the function Φ to which the Sard–Smale theorem is applied in the present paper, both manifolds W^{u,0,+} and W^{cs,∞,+}(u∞) depend on the potential V. However, the transversality of an intersection between these two manifolds can be seen as the transversality of the image of Φ with the (fixed) diagonal of R^{2d_st} × R^{2d_st}, for a function Φ combining the parametrizations of these two manifolds. This trick, which is the same as in [1] for travelling fronts, allows one to apply the Sard–Smale theorem to a function Φ with a fixed arrival space N containing a fixed target manifold W (in this case the diagonal of N). By contrast, for symmetric standing pulses in [1], since the subspace S_sym involved in the transverse intersection is fixed, the previous trick is unnecessary and the setting is simpler.
+ Finally, a technical difference occurs in “case 3” of the proof that the degrees of freedom provided by perturbing the potential allow enough directions to be reached in the arrival space of Φ (Lemma 4.6, which is the core of the proof). In [1], case 3 is shown to lead to a contradiction, not only for symmetric standing pulses, but also for asymmetric ones and for travelling fronts. Here, such a contradiction does not seem to occur (or at least is more difficult to prove), but this has no harmful consequence: a suitable perturbation of the potential can still be found in this case.
+ 2 Preliminary properties
+ 2.1 Proof of Lemma 1.4
+ Let V denote a potential function in C^{k+1}(R^{d_st}, R). Let (0,+∞) → R^{d_st}, r ↦ u(r), denote a (global) solution of system (1.2), assumed to be stable close to some point u∞ of Σ_min(V) at infinity (Definition 1.2). Lemma 1.4 follows from the next lemma.
+ Lemma 2.1. The derivative u̇(r) goes to 0 as r goes to +∞.
+ Proof. Let us consider the Hamiltonian function
+ (2.1) H_V : R^{2d_st} → R, (u, v) ↦ v²/2 − V(u),
+ and, for every r in (0,+∞), let
+ h(r) = H_V(u(r), u̇(r)).
+ It follows from system (1.2) that, for every r in (0,+∞),
+ (2.2) ḣ(r) = −((d_sp − 1)/r) u̇(r)²,
+ thus the function h(·) decreases, and it follows from the expression (2.1) of the Hamiltonian that this function converges, as r goes to +∞, towards a finite limit h∞ which is not smaller than −V(u∞).
+ Let us proceed by contradiction and assume that h∞ is larger than −V(u∞). Then, it follows again from the expression (2.1) of the Hamiltonian that the quantity u̇(r)² converges towards the positive quantity 2(h∞ + V(u∞)) as r goes to +∞. As a consequence, it follows from equality (2.2) that h(r) goes to −∞ as r goes to +∞, a contradiction. Lemma 2.1 is proved.
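The dissipation identity (2.2) used in the proof above can also be checked numerically. The sketch below (an illustration only, reusing a hypothetical scalar double-well potential) compares a centred finite difference of h(r) with the right-hand side −((d_sp − 1)/r) u̇(r)² at a few radii; since the identity holds along any solution of (1.2), the initial condition chosen here is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

d_sp = 3
V  = lambda u: 0.25 * (u**2 - 1.0)**2    # hypothetical double-well potential
dV = lambda u: u**3 - u

def rhs(r, y):
    # System (1.2) in first-order form.
    u, v = y
    return [v, -(d_sp - 1) / r * v + dV(u)]

# Any solution of (1.2) will do: start at r = 0.1 with an arbitrary initial condition.
sol = solve_ivp(rhs, (0.1, 20.0), [0.5, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)

def h(r):
    # Hamiltonian (2.1) evaluated along the solution.
    u, v = sol.sol(r)
    return v**2 / 2 - V(u)

eps = 1e-3
for r in (0.5, 2.0, 10.0):
    lhs = (h(r + eps) - h(r - eps)) / (2 * eps)   # centred finite difference for h'(r)
    v = sol.sol(r)[1]
    rhs_id = -(d_sp - 1) / r * v**2               # right-hand side of identity (2.2)
    assert abs(lhs - rhs_id) < 1e-5 * (1 + abs(rhs_id))
print("identity (2.2) verified numerically")
```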
+ 2.2 Transversality of homogeneous radially symmetric stationary solutions stable at infinity
+ Proposition 2.2. For every potential function V in C^{k+1}(R^{d_st}, R) and for every nondegenerate minimum point u∞ of V, the constant function
+ [0,+∞) → R^{d_st}, r ↦ u∞,
+ which defines a (homogeneous) radially symmetric stationary solution stable at infinity for system (1.1), is transverse (in the sense of Definition 1.5).
+ Proof. Let V denote a function in C^{k+1}(R^{d_st}, R) and let u∞ denote a nondegenerate minimum point of V. The function [0,+∞) → R^{d_st}, r ↦ u∞, is a (constant) solution of the differential system (1.5), and the linearization of this differential system around this solution reads
+ (2.3) ü = −((d_sp − 1)/r) u̇ + D²V(u∞)·u.
+ Let (0,+∞) → R^{d_st}, r ↦ u(r), denote a nonzero solution of this differential system, and, for every r in (0,+∞), let
+ v(r) = u̇(r) and U(r) = (u(r), v(r)) and q(r) = u(r)²/2.
+ Then (omitting the dependency on r),
+ q̇ = u·u̇ and q̈ = u̇² + u·ü = u̇² − ((d_sp − 1)/r) q̇ + D²V(u∞)·(u, u),
+ so that
+ d/dr (r^{d_sp−1} q̇(r)) = r^{d_sp−1} (q̈ + ((d_sp − 1)/r) q̇) = r^{d_sp−1} (u̇² + D²V(u∞)·(u, u)).
+ Since r ↦ u(r) was assumed to be nonzero, it follows that the quantity r^{d_sp−1} q̇(r) is strictly increasing on (0,+∞). To prove the intended conclusion, let us proceed by contradiction and assume that, for every r in (0,+∞), (u(r), v(r), r) belongs:
+ 1. to the tangent space T_{(u∞, 0_{R^{d_st}}, r)} W^{u,0,+}_V(u∞),
+ 2. and to the tangent space T_{(u∞, 0_{R^{d_st}}, r)} (ι⁻¹(W^{cs,∞,+}_V(u∞))).
+ As in (1.6), let us introduce the auxiliary variables τ (equal to log(r)) and c (equal to 1/r). With this notation, system (2.3) is equivalent to
+ (2.4) u_τ = rv, v_τ = −(d_sp − 1)v + r D²V(u∞)·u, r_τ = r,
+ and to
+ (2.5) u_r = v, v_r = −(d_sp − 1)cv + D²V(u∞)·u, c_r = −c².
+ Assumptions 1 and 2 above yield the following conclusions.
+ 1. In view of the limit of system (2.4) as r goes to 0⁺, it follows from assumption 1 that there exists δu₀ in R^{d_st} such that (u(r), v(r)) goes to (δu₀, 0_{R^{d_st}}) as r goes to 0⁺;
+ 2. and in view of the limit of system (2.5) as c goes to 0⁺, it follows from assumption 2 that (u(r), v(r)) goes to (0_{R^{d_st}}, 0_{R^{d_st}}), at an exponential rate, as r goes to +∞.
+ It follows from these two conclusions that the quantity r^{d_sp−1} q̇(r) goes to 0 as r goes to 0⁺ and as r goes to +∞, a contradiction with the fact (observed above) that this quantity is strictly increasing with r. Proposition 2.2 is proved.
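The monotonicity of r^{d_sp−1} q̇(r), the key observation in the proof of Proposition 2.2, can likewise be illustrated numerically. The sketch below (an illustration, not part of the paper) integrates the linearized system (2.3) in the scalar case, with D²V(u∞) replaced by a hypothetical positive number λ, and checks that the monitored quantity is strictly increasing.

```python
import numpy as np
from scipy.integrate import solve_ivp

d_sp, lam = 3, 2.0            # lam stands in for D²V(u∞) > 0 (scalar case, assumption)

def rhs(r, y):
    # Linearized system (2.3): u'' = -((d_sp - 1)/r) u' + lam * u.
    u, v = y
    return [v, -(d_sp - 1) / r * v + lam * u]

sol = solve_ivp(rhs, (0.01, 5.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.01, 5.0, 400))
r = sol.t
u, v = sol.y
m = r**(d_sp - 1) * u * v     # the quantity r^(d_sp - 1) q'(r), with q = u**2 / 2
print("strictly increasing:", bool(np.all(np.diff(m) > 0)))
```

The positivity of λ makes the derivative r^{d_sp−1}(u̇² + λu²) strictly positive along any nonzero solution, which is exactly what the assertion checks on the sampled grid.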
2.3 Additional properties close to the origin

Let V denote a potential function in C^{k+1}(R^dst, R) and let u0 be a point in R^dst. Let us recall (see subsection 1.3) that the unstable manifold W^{u,0}_V(u0) of the equilibrium (u0, 0_{R^dst}, 0) for the autonomous differential system (1.7) is one-dimensional. As a consequence there exists a unique solution r ↦ u(r) of the differential system (1.2) such that the image of the map r ↦ (u(r), u̇(r), r) lies in the intersection W^{u,0,+}_V(u0) of this unstable manifold with the half-space where r is positive (this intersection was defined in (1.10)); or, in other words, such that (u(r), u̇(r)) goes to (u0, 0) as r goes to 0+. This solution is defined on some (maximal) interval (0, rmax), where rmax is either a finite quantity or +∞. The following lemma provides properties of this solution that will be used in the sequel. To ease its statement, let us assume that rmax is equal to +∞ (only this case will turn out to be relevant), and let us consider the continuous extension of u(·) to the interval [0, +∞) (and let us still denote by u(·) this continuous extension).

Lemma 2.3. If u(·) is not identically equal to u0 (in other words, if u0 is not a critical point of V), then there exists a positive quantity ronce such that, denoting by Ionce the interval [0, ronce), the following conclusions hold:

1. the function u̇(·) does not vanish on Ionce,

2. and, for every r∗ in Ionce and r in [0, +∞),

    u(r) = u(r∗) ⟹ r = r∗.
Proof. The linearized system (1.7) at the equilibrium (u0, 0_{R^dst}, 0) reads:

    [δu̇]   [ 0       0          0      ] [δu]
    [δv̇] = [ 0   −(dsp − 1)   ∇V(u0)   ] [δv] ,
    [δṙ]   [ 0       0          1      ] [δr]

thus the tangent space at (u0, 0_{R^dst}, 0) to W^{u,0}_V(u0) (the unstable eigenspace of the matrix of this system) is spanned by the vector (0, ∇V(u0)/dsp, 1); it follows that

(2.6)    u̇(r) = (r/dsp) ∇V(u0) (1 + o_{r→0+}(r)) .
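The local expansion (2.6) can be checked numerically on a simple example. The sketch below is an illustration added here, not part of the paper: the potential V(u) = u²/2, the dimensions dsp = 3, dst = 1, and the RK4 integrator are all test choices made for this check. It integrates the radial system u̇ = v, v̇ = −((dsp − 1)/r) v + ∇V(u) from a small radius with the first-order data suggested by (2.6), and compares the result with the explicit solution u(r) = u0 sinh(r)/r of this linear case.

```python
import math

def rk4(f, r, y, h):
    # One classical Runge-Kutta step for y' = f(r, y).
    k1 = f(r, y)
    k2 = f(r + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(r + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(r + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def radial_field(dsp, grad_V):
    # Right-hand side of u' = v, v' = -((dsp - 1)/r) v + grad V(u).
    def f(r, y):
        u, v = y
        return [v, -(dsp - 1) / r * v + grad_V(u)]
    return f

def shoot_from_origin(u0, dsp, grad_V, r_end, n_steps=1000):
    # Start just off the singular radius r = 0 with the first-order
    # data of (2.6): u(r0) ~ u0, v(r0) ~ (r0/dsp) grad V(u0).
    r0 = 1e-6
    y = [u0, r0 / dsp * grad_V(u0)]
    f = radial_field(dsp, grad_V)
    h = (r_end - r0) / n_steps
    r = r0
    for _ in range(n_steps):
        y = rk4(f, r, y, h)
        r += h
    return y

# Test data: V(u) = u^2/2 (so grad V(u) = u), dsp = 3, u0 = 1.
# The exact solution of u'' + (2/r) u' = u with u(0+) = 1 is sinh(r)/r.
u, v = shoot_from_origin(1.0, 3, lambda u: u, 0.1)
print(u, v)
```

The computed values can be compared with sinh(0.1)/0.1 and its derivative, and v(r)/r indeed approaches ∇V(u0)/dsp = 1/3 as predicted by (2.6).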
Thus, if r0 is a sufficiently small positive quantity, then u̇(·) does not vanish on (0, r0] (so that conclusion 1 of Lemma 2.3 holds provided that ronce is not larger than r0), and the map

(2.7)    [0, r0] → R^dst ,    r ↦ u(r)

is a C1-diffeomorphism onto its image. For r in [0, +∞), let us denote (u(r), u̇(r)) by U(r). According to the decrease (2.2) of the Hamiltonian, there exists a quantity ronce in (0, r0) such that, for every r∗ in [0, ronce),

(2.8)    H_V(U(r0)) < −V(u(r∗)) .

Take r∗ in [0, ronce) and r in [0, +∞), and let us assume that u(r) equals u(r∗). If r were larger than r0 then it would follow from the expression (2.1) of the Hamiltonian, its decrease (2.2), and inequality (2.8) that

    −V(u(r)) ≤ H_V(U(r)) ≤ H_V(U(r0)) < −V(u(r∗)) ,

a contradiction with the equality of u(r) and u(r∗). Thus r is not larger than r0, and it follows from the one-to-one property of the function (2.7) that r must be equal to r∗; conclusion 2 of Lemma 2.3 thus holds, and Lemma 2.3 is proved.
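Equations (2.1) and (2.2) are not reproduced in this chunk; for the radial system u̇ = v, v̇ = −((dsp − 1)/r) v + ∇V(u), the natural Hamiltonian is H_V(u, v) = |v|²/2 − V(u), whose derivative along a solution is −((dsp − 1)/r)|v|² ≤ 0. The sketch below is a numerical check added here under that assumption, together with the test data V(u) = u²/2 and dsp = 3 (neither taken from the paper): it records H_V along a solution and verifies that the values are nonincreasing.

```python
import math

def step(r, u, v, h, dsp, grad_V):
    # One RK4 step for u' = v, v' = -((dsp - 1)/r) v + grad V(u).
    def f(r, u, v):
        return v, -(dsp - 1) / r * v + grad_V(u)
    k1u, k1v = f(r, u, v)
    k2u, k2v = f(r + h/2, u + h/2*k1u, v + h/2*k1v)
    k3u, k3v = f(r + h/2, u + h/2*k2u, v + h/2*k2v)
    k4u, k4v = f(r + h, u + h*k3u, v + h*k3v)
    return (u + h/6*(k1u + 2*k2u + 2*k3u + k4u),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

def hamiltonian_values(u, v, dsp=3, r=0.5, r_end=3.0, n=500):
    # Record H_V(u, v) = v^2/2 - V(u) along the solution,
    # here with the test potential V(u) = u^2/2 (grad V(u) = u).
    V = lambda u: u * u / 2
    grad_V = lambda u: u
    h = (r_end - r) / n
    values = []
    for _ in range(n):
        values.append(v * v / 2 - V(u))
        u, v = step(r, u, v, h, dsp, grad_V)
        r += h
    values.append(v * v / 2 - V(u))
    return values

vals = hamiltonian_values(1.0, -0.7)
print(vals[0], vals[-1])
```

The recorded sequence decreases, which is the monotonicity (2.2) used twice in the proof above.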
2.4 Additional properties close to infinity

Let V1 denote a potential function in C^{k+1}(R^dst, R) and u1,∞ denote a nondegenerate minimum point of V1. According to the implicit function theorem, there exist a (small) neighbourhood νrobust(V1, u1,∞) of V1 and a Ck-function V ↦ u∞(V) defined on νrobust(V1, u1,∞) and with values in R^dst such that u∞(V1) equals u1,∞ and, for every V in νrobust(V1, u1,∞), u∞(V) is a local minimum point of V. The following proposition is nothing but the local centre-stable manifold theorem applied to the equilibrium (u∞(V), 0_{R^dst}, 0) of the (autonomous) differential system (1.8), for V close to V1. Additional comments and references concerning local stable/centre/unstable manifolds are provided in subsection 2.2 of [1].

Proposition 2.4 (local centre-stable manifold at infinity). There exists a neighbourhood ν of V1 in C^{k+1}(R^dst, R), included in νrobust(V1, u1,∞), such that, if ε1 and c1 denote sufficiently small positive quantities, then, for every V in ν, there exists a Ck-map

(2.9)    w^{cs,∞}_{loc,V} : B_{R^dst}(u1,∞, ε1) × [0, c1] → R^dst ,    (u, c) ↦ w^{cs,∞}_{loc,V}(u, c) ,

such that, for every (u0, v0, c0) in B_{R^dst}(u1,∞, ε1) × R^dst × (0, c1], the following two statements are equivalent:

1. v0 = w^{cs,∞}_{loc,V}(u0, c0);

2. the solution r ↦ (u(r), v(r), c(r)) of the differential system (1.8) with initial condition (u0, v0, c0) at time r0 = 1/c0 is defined up to +∞, remains in B_{R^dst}(u1,∞, ε1) × R^dst × [0, c1] for all r larger than r0, and goes to (u∞(V), 0_{R^dst}, 0) as r goes to +∞.

In particular, w^{cs,∞}_{loc,V}(u∞(V), 0) is equal to 0_{R^dst}. In addition, the map

    B_{R^dst}(u1,∞, ε1) × [0, c1] × ν → R^dst ,    (u, c, V) ↦ w^{cs,∞}_{loc,V}(u, c)

is of class Ck (with respect to u and c and V), and, for every V in ν, the graph of the differential at (u∞(V), 0) of the map (u, c) ↦ w^{cs,∞}_{loc,V}(u, c) is equal to the centre-stable subspace of the linearization at (u∞(V), 0_{R^dst}, 0) of the differential system (1.8).

Let us denote by W^{cs,∞,+}_{loc,V,ε1,c1}(u∞(V)) the graph of the map (2.9) (restricted to positive values of c), see figure 1.1; with symbols,

(2.10)    W^{cs,∞,+}_{loc,V,ε1,c1}(u∞(V)) = {(u, w^{cs,∞}_{loc,V}(u, c), c) : (u, c) ∈ B_{R^dst}(u1,∞, ε1) × (0, c1]} .

This set defines a local centre-stable manifold (restricted to positive values of c) for the equilibrium (u∞(V), 0_{R^dst}, 0) of the differential system (1.8). Its uniqueness (for positive values of c) is ensured by the dynamics of the centre component c, which, according to the third equation of system (1.8), decreases to 0 (see figure 1.1). The global centre-stable manifold W^{cs,∞,+}_V(u∞(V)) already defined in (1.13) can be redefined as the set of points of R^{2dst} × (0, +∞) that eventually reach the local centre-stable manifold W^{cs,∞,+}_{loc,V,ε1,c1}(u∞(V)) when they are transported by the flow of the differential system (1.8).
Remark. If the state dimension dst is equal to 1, then a calculation shows that

    w^{cs,∞}_{loc,V}(u, c) = −(u − u∞(V)) ( √(V″(u∞(V))) + ((dsp − 1)/2) c + . . . ) ,

where ". . ." stands for higher order terms in u − u∞(V) and c. In particular the quantity ∂c∂u w^{cs,∞}_{loc,V}(u∞(V), 0) is equal to the (negative) quantity −(dsp − 1)/2. The display of the local centre-stable manifold at infinity on figure 1.1 fits with the sign of this quantity.
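The remark's expansion can be recovered from the linearized equation at u∞(V). The short derivation below is a sketch added here for the reader (it is not part of the paper), keeping only leading-order terms.

```latex
\text{For } d_{st}=1,\ \text{let } \mu = V''\bigl(u_\infty(V)\bigr)>0
\text{ and } w = u - u_\infty(V).
\text{The radial equation linearized at } u_\infty(V) \text{ reads}
\[
  \ddot w + \frac{d_{sp}-1}{r}\,\dot w - \mu\,w = 0 ,
\]
\text{whose solutions decaying as } r\to+\infty \text{ satisfy, at leading order,}
\[
  w(r) \sim r^{-(d_{sp}-1)/2}\,e^{-\sqrt{\mu}\,r},
  \qquad\text{so that}\qquad
  \frac{\dot w(r)}{w(r)}
  = -\sqrt{\mu} - \frac{d_{sp}-1}{2r} + \dots
  = -\Bigl(\sqrt{\mu} + \frac{d_{sp}-1}{2}\,c\Bigr) + \dots ,
\]
\text{with } c = 1/r,
\text{ which is the expansion of } w^{cs,\infty}_{loc,V} \text{ stated in the remark.}
```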
3 Tools for genericity

Let

(3.1)    Vfull = C^{k+1}(R^dst, R) ,

and, for a positive quantity R, let

(3.2)    Vquad-R = {V ∈ Vfull : for all u in R^dst, |u| ≥ R ⟹ V(u) = |u|²/2} .

Let us recall the notation SV introduced in (1.4).
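As an illustration of definition (3.2) (this construction is mine, not the paper's): adding to the quadratic potential |u|²/2 any smooth perturbation supported in the ball of radius R yields an element of Vquad-R. A minimal sketch with a smooth bump in dimension dst = 2:

```python
import math

def bump(x, radius):
    # Smooth (C-infinity) bump supported in {|x| < radius}.
    s = sum(xi * xi for xi in x) / (radius * radius)
    return math.exp(-1.0 / (1.0 - s)) if s < 1.0 else 0.0

def make_quad_past_R(R, amplitude=0.3):
    # A potential equal to |u|^2/2 outside the ball of radius R,
    # perturbed inside, as allowed by definition (3.2).
    def V(u):
        return sum(ui * ui for ui in u) / 2 + amplitude * bump(u, R)
    return V

V = make_quad_past_R(1.0)
print(V([2.0, 0.0]), V([0.0, 0.0]))
```

For |u| ≥ R the bump vanishes identically, so V coincides with the quadratic potential there, as (3.2) requires.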
Lemma 3.1. For every positive quantity R and for every potential V in Vquad-R, the following conclusions hold.

1. The flow defined by the differential system (1.2) (governing radially symmetric stationary solutions of the parabolic system (1.1)) is global (that is, every solution is defined on (0, +∞)).

2. For every u in SV, the following bound holds:

(3.3)    sup_{r∈(0,+∞)} |u(r)| < R .
Proof. Let V be in Vquad-R. According to the definition (3.2) of Vquad-R, there exists a positive quantity K such that, for every u in R^dst,

    |∇V(u)| ≤ K + |u| .

As a consequence, the following inequalities hold for the right-hand side of the first order differential system (1.5):

    |(v, −((dsp − 1)/r) v + ∇V(u))| ≤ |v| + ((dsp − 1)/r)|v| + K + |u| ≤ K + (2 + (dsp − 1)/r) |(u, v)| ,

and this bound prevents the solution from blowing up in finite time, which proves conclusion 1.

Now, take a function u in SV. Let us still denote by u(·) the continuous extension of this solution to [0, +∞). For every r in [0, +∞), let

    q(r) = |u(r)|²/2    and    Q(r) = r^{dsp−1} q̇(r) .

Then (omitting the dependency on r),

    q̇ = u · u̇    and    q̈ = |u̇|² + u · ü = |u̇|² − ((dsp − 1)/r) q̇ + u · ∇V(u) ,

so that

    Q̇ = r^{dsp−1} (q̈ + ((dsp − 1)/r) q̇) = r^{dsp−1} (|u̇|² + u · ∇V(u)) .

According to the definition (3.2) of Vquad-R, there exists a positive quantity δ (sufficiently small) so that, for every w in R^dst,

(3.4)    |w| ≥ R − δ ⟹ w · ∇V(w) ≥ |w|²/2 .

Let us proceed by contradiction and assume that sup_{r∈(0,+∞)} |u(r)| is not smaller than R. Since u(·) is stable at infinity and since the critical points of V belong to the open ball B_{R^dst}(0, R − δ), it follows that the set

    {r ∈ [0, +∞) : |u(r)| ≥ R}

is nonempty; let rout denote the minimum of this set. For the same reason, the set

    {r ∈ (rout, +∞) : |u(r)| < R − δ}

is also nonempty. Let rback denote the infimum of this last set. It follows from these definitions that rback is larger than rout and that, for every r in (rout, rback), according to inequality (3.4),

(3.5)    Q̇(r) ≥ r^{dsp−1} (|u̇(r)|² + |u(r)|²/2) > 0 .

If on the one hand rout equals 0 then |u(0)| is not smaller than R and, since Q(0) equals 0, it follows from inequality (3.5) that Q(·) is positive on (0, rback), so that the same is true for q̇(·). Thus q(·) is strictly increasing on [0, rback] and |u(rback)| must be larger than |u(rout)|, a contradiction with the definition of rback. If on the other hand rout is positive, then |u(rout)| is equal to R and q̇(rout) is nonnegative so that the same is true for Q(rout), and it again follows from inequality (3.5) that Q(·) is positive on (rout, rback), yielding the same contradiction. Conclusion 2 of Lemma 3.1 is proved.
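The identity Q̇ = r^{dsp−1}(|u̇|² + u·∇V(u)) used in the proof above can be checked independently on an explicit solution. The sketch below is an illustration added here; the choices dsp = 3, V(u) = u²/2 and the explicit solution u(r) = sinh(r)/r are test data, not from the paper. It compares a centred finite difference of Q with the closed-form right-hand side.

```python
import math

def u(r):
    # Explicit radial solution of u'' + (2/r) u' = u (dsp = 3, V(u) = u^2/2).
    return math.sinh(r) / r

def du(r):
    return math.cosh(r) / r - math.sinh(r) / r**2

def Q(r, dsp=3):
    # Q(r) = r^{dsp-1} q'(r) with q = u^2/2, so q' = u u'.
    return r ** (dsp - 1) * u(r) * du(r)

def Q_dot_formula(r, dsp=3):
    # Right-hand side of the identity proved in Lemma 3.1:
    # Q'(r) = r^{dsp-1} (u'(r)^2 + u(r) * grad V(u(r))), grad V(u) = u here.
    return r ** (dsp - 1) * (du(r) ** 2 + u(r) ** 2)

r, h = 1.3, 1e-5
finite_difference = (Q(r + h) - Q(r - h)) / (2 * h)
print(finite_difference, Q_dot_formula(r))
```

The two values agree to roughly ten digits, the discrepancy being the truncation error of the centred difference.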
Notation. For every positive quantity R and every potential V in Vquad-R, let

(3.6)    SV : (0, +∞)² × R^{2dst} → R^{2dst} ,    ((rinit, r), (uinit, vinit)) ↦ SV((rinit, r), (uinit, vinit))

denote the (globally defined) flow of the (non-autonomous) differential system (1.5) for this potential V. In other words, for every rinit in (0, +∞) and (uinit, vinit) in R^{2dst}, the function

    (0, +∞) → R^{2dst} ,    r ↦ SV((rinit, r), (uinit, vinit))

is the solution of the differential system (1.5) for the initial condition (uinit, vinit) at r equals rinit. According to subsection 1.3, the flow SV may be extended to the larger set

    ((0, +∞)² × R^{2dst}) ∪ ([0, +∞)² × R^dst × {0_{R^dst}}) ;

according to this extension, for every u0 in R^dst, the solution taking its values in the (one-dimensional) unstable manifold W^{u,0,+}_V(u0) reads:

(3.7)    [0, +∞) → R^{2dst} ,    r ↦ SV((0, r), (u0, 0_{R^dst})) .
4 Generic transversality among potentials that are quadratic past a given radius

4.1 Notation and statement

Let us recall the notation SV and SV,u∞ introduced in (1.4).

Proposition 4.1. There exists a generic subset of Vquad-R such that, for every potential V in this subset, every radially symmetric stationary solution stable at infinity of the parabolic system (1.1) (in other words, every u in SV) is transverse.

4.2 Reduction to a local statement

Let V1 denote a potential function in Vquad-R and u1,∞ denote a nondegenerate minimum point of V1. According to the implicit function theorem, there exist a (small) neighbourhood νrobust(V1, u1,∞) of V1 in Vquad-R and a Ck-function u∞(·) defined on νrobust(V1, u1,∞) and with values in R^dst such that u∞(V1) equals u1,∞ and, for every V in νrobust(V1, u1,∞), u∞(V) is a local minimum point of V. The following local generic transversality statement yields Proposition 4.1 (as shown below).

Proposition 4.2. There exist a neighbourhood νV1,u1,∞ of V1 in νrobust(V1, u1,∞) and a generic subset νV1,u1,∞,gen of νV1,u1,∞ such that, for every V in νV1,u1,∞,gen, every radially symmetric stationary solution of the parabolic system (1.1) stable close to u∞(V) at infinity (in other words, every u in SV,u∞(V)) is transverse.

Proof that Proposition 4.2 yields Proposition 4.1. Let us denote by Vquad-R-Morse the dense open subset of Vquad-R defined by the Morse property:

(4.1)    Vquad-R-Morse = {V ∈ Vquad-R : all critical points of V are nondegenerate} .

Let V1 denote a potential function in Vquad-R-Morse. According to the Morse property its minimum points are isolated, and since V1 is in Vquad-R they belong to the open ball B_{R^dst}(0, R), so that those minimum points are finite in number. Assume that Proposition 4.2 holds. With the notation of this proposition, let us consider the following two intersections, each over all minimum points u1,∞ of V1:

(4.2)    νV1 = ⋂ νV1,u1,∞    and    νV1,gen = ⋂ νV1,u1,∞,gen .

Since those are finite intersections, νV1 is still a neighbourhood of V1 in Vquad-R and the set νV1,gen is still a generic subset of νV1. This shows that the set

    {V ∈ Vquad-R-Morse : every u in SV,u∞(V) is transverse}

is locally generic. Applying Lemma 4.3 of [1] as in Subsection 5.2 of this reference shows that this local genericity implies the global genericity stated in Proposition 4.1, which is therefore proved.
+ therefore proved.
1092
+ 4.3 Proof of the local statement (Proposition 4.2)
1093
+ 4.3.1 Setting
1094
+ For the remaining part of this section, let us fix a potential function V1 in Vquad-R and a
1095
+ nondegenerate minimum point u1,∞ of V1. Let ν be a neighbourhood of V1 in Vquad-R,
1096
+ included in νrobust(V1, u1,∞), and let ε1 and c1 be positive quantities, with ν and ε1 and
1097
+ c1 small enough so that the conclusions of Proposition 2.4 hold. Let
1098
+ r1 = 1/c1
1099
+ and
1100
+ M = Rdst × BRdst(u1,∞, ε1)
1101
+ and
1102
+ Λ = ν ,
1103
+ and
1104
+ N = (R2dst)2
1105
+ and
1106
+ W = {(A, B) ∈ N : A = B}
1107
+ ,
1108
+ thus W is the diagonal of N. Let N denote an integer not smaller than r1, and let us
1109
+ consider the functions
1110
+ Φu : Rdst × Λ → R2dst ,
1111
+ (u0, V ) �→ SV
1112
+ �(0, N), (u0, 0Rdst)
1113
+ � ,
1114
+ and
1115
+ Φcs : BRdst(u1,∞, ε1) × Λ → R2dst ,
1116
+ (uN, V ) �→
1117
+ �uN, wcs, ∞
1118
+ loc, V (uN, 1/N)
1119
+ � ,
1120
+ and the function
1121
+ (4.3)
1122
+ Φ : M × Λ → N ,
1123
+ (m, V ) = (u0, uN, V ) �→
1124
+ �Φu(u0, V ), Φcs(uN, V )
1125
+ � .
1126
+ 18
1127
+
1128
+ 4.3.2 Equivalent characterizations of transversality
1129
+ Let us consider the set
1130
+ SΛ,u1,∞,N =
1131
+ �(V, u) : V ∈ Λ and u ∈ SV, u∞(V ) and u(N) ∈ BRdst(u1,∞, ε1)
1132
+ � .
1133
+ Proposition 4.3. The map
1134
+ (4.4)
1135
+ Φ−1(W) → SΛ,u1,∞,N ,
1136
+ (u0, u, V ) �→
1137
+
1138
+ V, r �→ SV
1139
+ �(0, r), (u0, 0Rdst
1140
+ ��
1141
+ is well defined and one-to-one.
1142
+ Proof. The image by Φ of a point (u0, uN, V ) of M × Λ belongs to the diagonal W of
1143
+ N if and only if Φu(u0, V ) equals Φcs(uN, V ), and in this case the function u : r �→
1144
+ SV
1145
+ �(0, r), (u0, 0Rdst
1146
+ � belongs to SV, u∞(V ) and u(N) (which is equal to uN) belongs to
1147
+ BRdst(u1,∞, ε1), so that (V, u) belongs to SΛ,u1,∞,N. The map (4.4) above is thus well
1148
+ defined.
1149
+ Now, for every (V, u) in SΛ,u1,∞,N, if we denote by u0 the limit limr→0+ u(r) and by
1150
+ uN the vector u(N), then (u0, uN, V ) is the only possible antecedent of (V, u) by the map
1151
+ (4.4). In addition,
1152
+ SV
1153
+ �(0, N), (u0, 0Rdst)
1154
+ � =
1155
+ �uN, ˙u(N)
1156
+ � ,
1157
+ and since u(r) goes to u∞(V ) as r goes to +∞, the vector
1158
+ �u(N), ˙u(N), 1/N
1159
+ � must
1160
+ belong to the centre-stable manifold W cs, ∞, +
1161
+ V
1162
+ �u∞(V )
1163
+ � of u∞(V ), so that, according to
1164
+ the definition of wcs, ∞
1165
+ loc, V ,
1166
+ ˙u(N) = wcs, ∞
1167
+ loc, V
1168
+ �u(N), 1/N
1169
+ � ,
1170
+ and this yields the equality between Φu(u0, V ) and Φcs(uN, V ). Thus Φ(V, u) belongs to
1171
+ W and (u0, uN, V ) belongs to Φ−1(W). Proposition 4.3 is proved.
1172
+ Proposition 4.4. For every potential function V in Λ, the following two statements are
1173
+ equivalent.
1174
+ 1. The image of the function M → N, m �→ Φ(m, V ) is transverse to W.
1175
+ 2. Every u in SV, u∞(V ) such that u(N) is in BRdst(u1,∞, ε1) is transverse.
1176
+ Remark. According to Proposition 2.2, for every V in Λ, the constant function r �→ u∞(V ),
1177
+ which belongs to SV , is already (a priori) known to be transverse, therefore only
1178
+ nonconstant solutions matter in statement 2 of this proposition.
1179
+ Proof. Let us consider (m2, V2) in M × Λ such that Φ(m2, V2) is in W, let (u2,0, u2,N)
1180
+ denote the components of m2, and let r �→ u2(r) and r �→ U2(r) denote the functions
1181
+ satisfying, for all r in [0, +∞),
1182
+ U2(r) =
1183
+ �u2(r), ˙u2(r)
1184
+ � = SV
1185
+ �(0, r), (u2,0, 0Rdst
1186
+ � .
1187
+ Let us consider the map
1188
+ ∆Φ : M → R2dst ,
1189
+ (u0, uN) �→ Φu(u0, V2) − Φcs(uN, V2) ,
1190
+ 19
1191
+
1192
+ and let us write, only for this proof, DΦ and DΦu and DΦcs and D(∆Φ) for the
1193
+ differentials of Φ and Φu and Φcs and ∆Φ at (m2, V2) and with respect to all variables in
1194
+ M (but not with respect to V ). According to Definition 1.5, the transversality of u2 is
1195
+ defined as the transversality of the intersection W u, 0, +
1196
+ V2
1197
+ ∩ ι−1�
1198
+ W cs, ∞, +
1199
+ V2
1200
+ �u∞(V2)
1201
+ ��
1202
+ along
1203
+ the trajectory of U2. This transversality can be considered at a single point, no matter
1204
+ which, of the trajectory U2
1205
+ �(0, +∞)
1206
+ �, in particular at the point Φu(u2,0, V2) which is
1207
+ equal to Φcs�u2(N), V 2
1208
+ �, and is equivalent to the transversality of the dst-dimensional
1209
+ manifolds
1210
+ W u, 0, +
1211
+ V2
1212
+
1213
+ �R2dst × {N}
1214
+
1215
+ and
1216
+ ι−1�
1217
+ W cs, ∞, +
1218
+ V2
1219
+ �u∞(V2)
1220
+ ��
1221
+
1222
+ �R2dst × {N}
1223
+
1224
+ in R2dst ×{N}. It is therefore equivalent to the surjectivity of the map D(∆Φ) (statement
1225
+ (B) in Lemma 4.5 below). On the other hand, the image of the function M → N,
1226
+ m �→ Φ(m, V2) is transverse at Φ(m, V2) to the diagonal W of N if and only if the image
1227
+ of DΦ contains a complementary space of this diagonal (statement (A) in Lemma 4.5
1228
+ below)). Thus Proposition 4.4 is a consequence of the next lemma.
1229
+ Lemma 4.5. The following two statements are equivalent.
1230
+ (A) The image of DΦ contains a complementary subspace of the diagonal W of N.
1231
+ (B) The map D(∆Φ) is surjective.
1232
+ Proof. If statement (A) holds, then, for every (α, β) in N, there exist γ in R2dst and δm
1233
+ in Tm2M such that
1234
+ (4.5)
1235
+ (γ, γ) + DΦ · δm = (α, β) ,
1236
+ so that
1237
+ (4.6)
1238
+ D(∆Φ) · δm = α − β ,
1239
+ and statement (B) holds. Conversely, if statement (B) holds, then, for every (α, β) in
1240
+ N, there exists δm in Tm2M such that (4.6) holds, and as a consequence, if (δu0, δuN)
1241
+ denote the components of δm, then α − DΦu(δu0) is equal to β − DΦcs(δuN), and if
1242
+ this vector is denoted by γ, then equality (4.5) holds, and this shows that statement (A)
1243
+ holds.
1244
+ As explained above, Proposition 4.4 follows from Lemma 4.5, and is therefore proved.
1245
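Lemma 4.5 is elementary linear algebra and can be sanity-checked in a toy finite-dimensional model (this model is mine, not the paper's): take 2dst = 1, so that N = R² with diagonal W, and let DΦu and DΦcs act by scalar multiplication by a and b. Then im(DΦ) + W fills R² exactly when the difference map (δu0, δuN) ↦ a δu0 − b δuN is surjective, that is, when (a, b) ≠ (0, 0):

```python
def delta_surjective(a, b):
    # D(delta Phi)(du0, duN) = a*du0 - b*duN maps onto R iff (a, b) != (0, 0).
    return a != 0 or b != 0

def image_plus_diagonal_is_everything(a, b):
    # im(D Phi) = {(a x, b y)}; it fills R^2 together with the diagonal
    # iff the map (x, y, g) -> (a x + g, b y + g) is onto R^2.
    # Columns of that linear map:
    cols = [(a, 0.0), (0.0, b), (1.0, 1.0)]
    # The map is onto iff some pair of columns is linearly independent.
    for i in range(3):
        for j in range(i + 1, 3):
            det = cols[i][0] * cols[j][1] - cols[i][1] * cols[j][0]
            if det != 0:
                return True
    return False

samples = [(0.0, 0.0), (1.0, 0.0), (0.0, -2.0), (3.0, 5.0)]
print([(delta_surjective(a, b), image_plus_diagonal_is_everything(a, b))
       for a, b in samples])
```

On every sample the two booleans coincide, matching the equivalence (A) ⟺ (B) of the lemma in this toy case.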
4.3.3 Checking hypothesis 1 of Theorem 4.2 of [1]

The function Φ is as regular as the flow SV, thus of class Ck. It follows from the definitions of M and N and W that

    dim(M) − codim(W) = (dst + dst) − 2dst = 0 ,

so that hypothesis 1 of Theorem 4.2 of [1] is fulfilled.

4.3.4 Checking hypothesis 2 of Theorem 4.2 of [1]

For every V in Vquad-R, let us recall the notation SV introduced in (3.6) and (3.7) for the flow of the differential system (1.5). Take (m2, V2) in the set Φ^{−1}(W). Let (u2,0, u2,N) denote the components of m2, and, for every r in (0, +∞), let us write

    U2(r) = (u2(r), v2(r)) = S_{V2}((0, r), (u2,0, 0_{R^dst})) .

Let us write DΦ and DΦu and DΦcs for the full differentials (with respect to arguments m in M and V in Λ) of the three functions Φ and Φu and Φcs respectively at the points (u2,0, u2,N, V2), (u2,0, V2) and (u2,N, V2). Checking hypothesis 2 of Theorem 4.2 of [1] amounts to proving that

(4.7)    im(DΦ) + W = N .

If u2(·) is constant (that is, identically equal to u∞(V2)), then equality (4.7) follows from Proposition 2.2. Thus, let us assume that u2(·) is nonconstant. In this case, equality (4.7) is a consequence of the following lemma.

Lemma 4.6. For every nonzero vector (φ2, ψ2) in R^{2dst}, there exists a function W in C^{k+1}_b(R^dst, R) such that

(4.8)    supp(W) ⊂ B_{R^dst}(0, R) ,

and

(4.9)    ⟨DΦu · (0, 0, W) | (φ2, ψ2)⟩ ≠ 0 ,

and

(4.10)    DΦcs · (0, 0, W) = 0_{R^{2dst}} .

Proof that Lemma 4.6 yields equality (4.7). Inequality (4.9) shows that the orthogonal complement, in R^{2dst}, of the directions that can be reached by DΦu · (0, 0, W) for potentials W satisfying (4.8) and (4.10) is reduced to 0_{R^{2dst}}; in other words, all directions of R^{2dst} can be reached by that means. This shows that

    im(DΦ) ⊃ R^{2dst} × {0_{R^{2dst}}} ,

and since the subspace at the right-hand side of this inclusion is transverse to W in R^{4dst}, this proves equality (4.7) (and shows that hypothesis 2 of Theorem 4.2 of [1] is fulfilled).

Proof of Lemma 4.6. Let (φ2, ψ2) denote a nonzero vector in R^{2dst}, let W be a function in C^{k+1}_b(R^dst, R) satisfying the inclusion

(4.11)    supp(W) ⊂ B_{R^dst}(0, R) \ B_{R^dst}(u1,∞, ε1) ,

and observe that inclusion (4.8) and equality (4.10) follow from this inclusion (4.11). Let us consider the linearization of the differential system (1.2), for the potential V2, around the solution r ↦ U2(r):

(4.12)    d/dr [δu(r)]   [       0                  id          ] [δu(r)]
               [δv(r)] = [ D²V2(u2(r))    −((dsp − 1)/r) id     ] [δv(r)] ,

and let T(r, r′) denote the family of evolution operators obtained by integrating this linearized differential system between r and r′. It follows from the variation of constants formula that

(4.13)    DΦu · (0, 0, W) = ∫₀^N T(r, N) (0, ∇W(u2(r))) dr .

For every r in (0, +∞), let T*(r, N) denote the adjoint operator of T(r, N), and let

(4.14)    (φ(r), ψ(r)) = T*(r, N) · (φ2, ψ2) .

According to expression (4.13), inequality (4.9) reads

    ∫₀^N ⟨(0, ∇W(u2(r))) | T*(r, N) · (φ2, ψ2)⟩ dr ≠ 0 ,

or equivalently

(4.15)    ∫₀^N ∇W(u2(r)) · ψ(r) dr ≠ 0 .

Due to the expression of the linearized differential system (4.12), (φ, ψ) is a solution of the adjoint linearized system

(4.16)    [φ̇(r)]     [ 0       D²V2(u2(r))       ] [φ(r)]
          [ψ̇(r)] = − [ id    −((dsp − 1)/r) id    ] [ψ(r)] .

According to Lemma 2.3 (and since u2(·) was assumed to be nonconstant), there exists a positive quantity ronce such that, if we denote by Ionce the interval (0, ronce], then u̇2(·) does not vanish on Ionce, and, for all r∗ in Ionce and r in [0, +∞),

(4.17)    u2(r) = u2(r∗) ⟹ r = r∗ .

In addition, up to replacing ronce by a smaller positive quantity, it may be assumed that the following conclusion holds:

    u2(Ionce) ∩ B_{R^dst}(u1,∞, ε1) = ∅ .

To complete the proof, three cases have to be considered.

Case 1. There exists r∗ in Ionce such that ψ(r∗) is not collinear to u̇2(r∗).

In this case, the construction of a potential function W satisfying inclusion (4.11) and inequality (4.9) (and thus the conclusions of Lemma 4.6) is the same as in the proof of Lemma 5.7 of [1].

If case 1 does not occur, then, for every r in Ionce, ψ(r) is collinear to u̇2(r), and since u̇2(·) does not vanish on Ionce, there exists a C1-function α : Ionce → R such that, for every r in Ionce,

(4.18)    ψ(r) = α(r) u̇2(r) .

The next cases 2 and 3 differ according to whether the function α(·) is constant or not.

Case 2. For every r in Ionce, equality (4.18) holds for some nonconstant function α(·).

In this case there exists r∗ in Ionce such that α̇(r∗) is nonzero, and again the construction of a potential function W satisfying inclusion (4.11) and inequality (4.9) (and thus the conclusions of Lemma 4.6) is the same as in the proof of Lemma 5.7 of [1].

Case 3. For every r in Ionce, ψ(r) = α u̇2(r) for some real (constant) quantity α.

In this case the quantity α cannot be 0, or else, due to (4.16) and (4.18), both φ(·) and ψ(·) would identically vanish on Ionce and thus on (0, +∞), a contradiction with the assumptions of Lemma 4.6. Thus, without loss of generality, we may assume that α is equal to 1. If supp(W) is included in a sufficiently small neighbourhood of u2,0, then W(·) vanishes on u2([ronce, N]) and the integral on the left-hand side of inequality (4.15) reads

    ∫₀^{ronce} ∇W(u2(r)) · u̇2(r) dr = W(u2(ronce)) − W(u2,0) = −W(u2,0) ,

so that inequality (4.15) holds as soon as W(u2,0) is nonzero. Lemma 4.6 is proved.
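The reason the adjoint system (4.16) appears is the standard duality identity: if (δu, δv) solves the linearized system (4.12) and (φ, ψ) solves (4.16), then the pairing δu·φ + δv·ψ is constant in r, which is what makes (4.14) compute the action of T*(r, N). The sketch below checks this constancy numerically in a scalar toy case; the data dsp = 3 and D²V2(u2(r)) ≡ 1 are my test choices, not taken from the paper.

```python
def fields(r, y, dsp=3, d2V=1.0):
    du, dv, phi, psi = y
    # Linearized system (4.12) and adjoint system (4.16), side by side:
    # du' = dv,  dv' = d2V*du - ((dsp-1)/r)*dv,
    # phi' = -d2V*psi,  psi' = -phi + ((dsp-1)/r)*psi.
    return [dv,
            d2V * du - (dsp - 1) / r * dv,
            -d2V * psi,
            -phi + (dsp - 1) / r * psi]

def rk4_step(r, y, h):
    k1 = fields(r, y)
    k2 = fields(r + h/2, [a + h/2*b for a, b in zip(y, k1)])
    k3 = fields(r + h/2, [a + h/2*b for a, b in zip(y, k2)])
    k4 = fields(r + h, [a + h*b for a, b in zip(y, k3)])
    return [a + h/6*(p + 2*q + 2*s + t)
            for a, p, q, s, t in zip(y, k1, k2, k3, k4)]

def pairing_drift(y0, r=1.0, r_end=2.0, n=400):
    # Integrate both systems together and monitor du*phi + dv*psi.
    pairing = lambda y: y[0]*y[2] + y[1]*y[3]
    h = (r_end - r) / n
    y, start = y0[:], pairing(y0)
    for _ in range(n):
        y = rk4_step(r, y, h)
        r += h
    return abs(pairing(y) - start)

print(pairing_drift([1.0, -0.5, 0.3, 2.0]))
```

The drift is at the level of the integrator's error, confirming that the pairing is a conserved quantity of the coupled systems.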
Remark. By contrast with the proof of the generic elementarity of standing pulses in [1], case 3 above cannot be easily precluded. Indeed, let us assume that, for every r in Ionce, ψ(r) is equal to α u̇2(r) for some nonzero (constant) quantity α. Without loss of generality, we may assume that α is equal to 1. Then, it follows from the second equation of (4.16) that, still for every r in Ionce (omitting the dependency on r),

    φ = ((dsp − 1)/r) ψ − ψ̇ = ((dsp − 1)/r) u̇2 − ü2 = (2(dsp − 1)/r) u̇2 − ∇V2(u2) ,

and it follows from the first equation of (4.16) that

    −D²V2(u2) u̇2 = φ̇ = −(2(dsp − 1)/r²) u̇2 + (2(dsp − 1)/r) ü2 − D²V2(u2) u̇2 ,

and thus, after simplification,

    ü2 = (1/r) u̇2 ,    or equivalently    u̇2 = (r/dsp) ∇V2(u2) .

As illustrated by equality (2.6), this last equality indeed holds if ∇V2 is constant on the set u2(Ionce). Case 3 can therefore not be a priori precluded, and even if it may be argued that this case is "unlikely" (non generic), the direct argument provided above in this case is simpler. By contrast, in [1] for standing pulses in space dimension one (dsp equal to 1), this case could not occur because ψ was assumed to be nonzero on the symmetry subspace, defined here as {(v, r) = (0_{R^dst}, 0)}, see (1.11).
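The last observation can be verified directly (a check added here, with my own test data): if ∇V2 equals a constant g on the range of u2, then u2(r) = u2,0 + g r²/(2 dsp) solves ü = −((dsp − 1)/r) u̇ + g, and its derivative u̇2(r) = g r/dsp satisfies the identities displayed in the remark.

```python
def residuals(g=0.8, dsp=3, r=0.37):
    # Candidate solution when grad V is the constant g:
    #   u(r) = u0 + g r^2 / (2 dsp),  u'(r) = g r / dsp,  u''(r) = g / dsp.
    du = g * r / dsp
    ddu = g / dsp
    # Residual of the radial equation u'' = -((dsp - 1)/r) u' + g:
    ode = ddu - (-(dsp - 1) / r * du + g)
    # Residual of the identity u' = (r/dsp) g derived in the remark:
    identity = du - r / dsp * g
    # Residual of u'' = (1/r) u':
    collinear = ddu - du / r
    return ode, identity, collinear

print(residuals())
```

All three residuals vanish (up to floating-point rounding), for any choice of g, dsp and r, illustrating why case 3 cannot be ruled out a priori.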
4.3.5 Conclusion

As seen in sub-subsection 4.3.3, hypothesis 1 of Theorem 4.2 of [1] is fulfilled for the function Φ defined in (4.3), and since Lemma 4.6 yields equality (4.7), hypothesis 2 of this theorem is also fulfilled. The conclusion of this theorem ensures that there exists a generic subset Λgen,N of Λ such that, for every V in Λgen,N, the image of the function M → N, m ↦ Φ(m, V) is transverse to the diagonal W of N. According to Proposition 4.4, it follows that every u in SV,u∞(V) such that u(N) is in B_{R^dst}(u1,∞, ε1) is transverse. The set

    Λgen = ⋂_{N∈N, N≥r1} Λgen,N

is still a generic subset of Λ. For every V in Λgen and every u in SV,u∞(V), since u(r) goes to u∞(V) as r goes to +∞, there exists N such that u(N) is in B_{R^dst}(u1,∞, ε1), and according to the previous statements u is transverse. In other words, the conclusions of Proposition 4.2 hold with:

    νV1,u1,∞ = ν = Λ    and    νV1,u1,∞,gen = Λgen .
+ 5 Proof of the main results
1484
+ Proposition 4.1 shows the genericity of the property considered in Theorem 1, but only
1485
+ inside the space Vquad-R of the potentials that are quadratic past some radius R. In this
1486
+ section, the arguments will be adapted to obtain the genericity of the same property
1487
+ in the space Vfull (that is Ck+1(Rdst, R)) of all potentials, endowed with the extended
1488
+ topology (see subsection 1.5). They are identical to those of section 9 of [1]. Let us recall
1489
+ the notation SV introduced in (1.4), and, for every positive quantity R, let us consider
1490
+ the set
1491
+ SV,R =
1492
+
1493
+ u ∈ SV :
1494
+ sup
1495
+ r∈[0,+∞)
1496
+ |u(r)| ≤ R
1497
+
1498
+ .
1499
+ Exactly as shown in subsection 9.1 of [1], Theorem 1 follows from the next proposition.
1500
+ Proposition 5.1. For every positive quantity R, there exists a generic subset Vfull-⋔-S-R
1501
+ of Vfull such that, for every potential V in this subset, every radially symmetric stationary
1502
+ solution stable at infinity in SV,R is transverse.
1503
+ Proof. Let R denote a positive quantity, let V1 denote a potential function in Vquad-(R+1),
1504
+ and let u1,∞ denote a nondegenerate minimum point of V1. Let us consider the neigh-
1505
+ bourhood νV1, u1,∞ of V1 in Vquad-(R+1) provided by Proposition 4.2 for these objects,
1506
+ together with the quantities ε1, c1, and r1 introduced in sub-subsection 4.3.1. Up to
1507
+ replacing νV1, u1,∞ by its interior, we may assume that it is open in Vquad-(R+1). As in
1508
+ sub-subsection 4.3.1, let us consider an integer N not smaller than r1, and the same
1509
+ function Φ : M × Λ → N as in (4.3).
1510
+ Here is the sole difference with the setting of sub-subsection 4.3.1: by contrast with the
1511
+ non-compact set M defining the departure set of Φ, let us consider the compact subset
1512
+ MN defined as:
1513
+ MN = BRdst(0Rdst, N) × BRdst(u1,∞, ε1) .
1514
+ Thus the integer N now serves two purposes: the “time” (radius) at which the intersection
1515
+ between unstable and centre-stable manifolds is considered, and the radius of the ball
1516
+ 24
1517
+
1518
+ containing the departure points of the unstable manifolds that are considered. These
1519
+ purposes are independent (two different integers instead of the single integer N may as
1520
+ well be introduced). Let us consider the set:
1521
+ OV1,u1,∞,N =
1522
+
1523
+ V ∈ νV1, u1,∞ : Φ(MN, V ) is transverse to W in N
1524
+
1525
+ .
1526
+ As shown in Proposition 4.4, this set OV1,u1,∞,N is made of the potential functions V in
1527
+ νV1, u1,∞ such that every u in SV, u∞(V ) such that u(N) is in BRdst(u1,∞, ε1) and u(0) is in
1528
+ BRdst(0Rdst, N), is transverse. This set contains the generic subset νV1, u1,∞, gen = Λgen of
1529
+ νV1, u1,∞ and is therefore generic (thus, in particular, dense) in νV1, u1,∞. By comparison
1530
+ with νV1, u1,∞, gen, the additional feature of this set OV1,u1,∞,N is that it is open: exactly
1531
+ as in the proof of Lemma 9.2 of [1], this openness follows from the intrinsic openness of a
1532
+ transversality property and the compactness of MN.
1533
+ Let us make the additional assumption that the potential V1 is a Morse function. Then,
1534
+ the set of minimum points of V1 is finite and depends smoothly on V in a neighbourhood
1535
+ νrobust(V1) of V1. Intersecting the sets νV1, u1,∞ and OV1,u1,∞,N above over all the minimum
1536
+ points u1,∞ of V1 provides an open neighbourhood νV1 of V1 and an open dense subset
1537
+ OV1,N of νV1 such that, for all V in νV1, every radially symmetric stationary solution
1538
+ stable close to a minimum point of V at infinity, and equal at origin to some point of
1539
+ BRdst(0Rdst, N), is transverse.
Denoting by int(A) the interior of a set A and using the notation of subsection 4.4 of [1], let us introduce the sets

˜νV1 = res−1R,∞ ◦ resR,(R+1)(νV1) ,

and

˜OV1,N = res−1R,∞ ◦ resR,(R+1)(OV1,N) ,

and

˜OextV1,N = ˜OV1,N ⊔ int(Vfull \ ˜νV1) .
It follows from these definitions that ˜OextV1,N is a dense open subset of Vfull (for more details, see Lemma 9.3 of [1]).

Since Vquad-(R+1) is a separable space, it is second-countable, and can be covered by a countable number of sets of the form νV1. In symbols, there exists a countable family (V1,i)i∈N of potentials of Vquad-(R+1)-Morse such that

Vquad-(R+1)-Morse = ⋃i∈N νV1,i .

Let us consider the set

Vfull-⋔-S-R = Vfull-Morse ∩ ( ⋂(i,N)∈N² ˜OextV1,i,N ) ,

where Vfull-Morse is the set of potentials in Vfull which are Morse functions. This set is a countable intersection of dense open subsets of Vfull, and is therefore a generic subset of Vfull. Moreover, for every potential V in this set Vfull-⋔-S-R, every radially symmetric stationary solution stable at infinity in SV,R is transverse (for more details, see Lemma 9.4 of [1]). Proposition 5.1 is proved.
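The genericity conclusion in the last step rests on the classical Baire-category mechanism; as a reminder (a sketch, assuming, as in [1], that Vfull carries a complete metric):

```latex
% Baire category (sketch): in a complete metric space, a countable
% intersection of dense open subsets is dense, hence generic (residual).
\begin{lemma}[Baire]
Let $(X,d)$ be a complete metric space and let $(O_n)_{n\in\mathbb{N}}$
be dense open subsets of $X$. Then
\[
  \bigcap_{n\in\mathbb{N}} O_n
\]
is dense in $X$; a subset of $X$ containing such a countable
intersection is called generic (residual).
\end{lemma}
```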
As already mentioned at the beginning of this section, Theorem 1 follows from Proposition 5.1. Finally, Corollary 1.1 follows from Theorem 1 (for more details, see subsection 9.4 of [1]).
Acknowledgements

This paper owes a lot to numerous fruitful discussions with Romain Joly, about both its content and the content of the companion paper [1] written in collaboration with him.
References

[1] R. Joly and E. Risler. “Generic transversality of travelling fronts, standing fronts, and standing pulses for parabolic gradient systems”. In: arXiv (2023), pp. 1–69. arXiv: 2301.02095 (cit. on pp. 3, 5, 7–10, 14, 18, 20–26).
[2] E. Risler. “Global behaviour of bistable solutions for gradient systems in one unbounded spatial dimension”. In: arXiv (2022), pp. 1–91. arXiv: 1604.02002 (cit. on p. 3).
[3] E. Risler. “Global behaviour of bistable solutions for hyperbolic gradient systems in one unbounded spatial dimension”. In: arXiv (2022), pp. 1–75. arXiv: 1703.01221 (cit. on p. 3).
[4] E. Risler. “Global behaviour of radially symmetric solutions stable at infinity for gradient systems”. In: arXiv (2022), pp. 1–52. arXiv: 1703.02134 (cit. on p. 3).
[5] E. Risler. “Global relaxation of bistable solutions for gradient systems in one unbounded spatial dimension”. In: arXiv (2022), pp. 1–69. arXiv: 1604.00804 (cit. on p. 3).
Emmanuel Risler
Université de Lyon, INSA de Lyon, CNRS UMR 5208, Institut Camille Jordan, F-69621 Villeurbanne, France.
emmanuel.risler@insa-lyon.fr
19E0T4oBgHgl3EQfugE5/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2NFAT4oBgHgl3EQfDRyP/content/tmp_files/2301.08415v1.pdf.txt ADDED
@@ -0,0 +1,1220 @@
Low-energy quasi-circular electron correlations with charge order wavelength in Bi2Sr2CaCu2O8+δ

K. Scott,1, 2 E. Kisiel,3 T. J. Boyle,1, 2, 4 R. Basak,3 G. Jargot,5 S. Das,3 S. Agrestini,6 M. Garcia-Fernandez,6 J. Choi,6 J. Pelliciari,7 J. Li,7 Y. D. Chuang,8 R. D. Zhong,9 J. A. Schneeloch,9 G. D. Gu,9 F. Légaré,5 A. F. Kemper,10 Ke-Jin Zhou,6 V. Bisogni,7 S. Blanco-Canosa,11, 12 A. Frano,3, 13 F. Boschini,5, 14 and E. H. da Silva Neto1, 2, ∗

1Department of Physics, Yale University, New Haven, Connecticut 06520, USA
2Energy Sciences Institute, Yale University, West Haven, Connecticut 06516, USA
3Department of Physics, University of California San Diego, La Jolla, California 92093, USA
4Department of Physics and Astronomy, University of California, Davis, California 95616, USA
5Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec J3X 1S2, Canada
6Diamond Light Source, Harwell Campus, Didcot OX11 0DE, United Kingdom
7National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, NY 11973, USA
8Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
9Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY, USA
10Department of Physics, North Carolina State University, Raleigh, NC 27695, USA
11Donostia International Physics Center, DIPC, 20018 Donostia-San Sebastian, Basque Country, Spain
12IKERBASQUE, Basque Foundation for Science, 48013 Bilbao, Spain
13Canadian Institute for Advanced Research, Toronto, ON, M5G 1M1, Canada
14Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
∗ Corresponding Author: eduardo.dasilvaneto@yale.edu

arXiv:2301.08415v1 [cond-mat.str-el] 20 Jan 2023
ABSTRACT

In the study of dynamic charge order correlations in the cuprates, most high energy-resolution resonant inelastic x-ray scattering (RIXS) measurements have focused on momenta along the high-symmetry directions of the copper oxide plane. However, electron scattering along other in-plane directions should not be neglected, as it may contain information relevant, for example, to the origin of charge order correlations or to our understanding of the isotropic scattering responsible for strange metal behavior in cuprates. We report high-resolution RIXS experiments that reveal the presence of dynamic electron correlations over the qx-qy scattering plane in underdoped Bi2Sr2CaCu2O8+δ with Tc = 54 K. We use the softening of the RIXS-measured bond-stretching phonon line as a marker for the presence of charge-order-related dynamic electron correlations. The experiments show that these dynamic correlations exist at energies below approximately 70 meV and are centered around a quasi-circular manifold in the qx-qy scattering plane with radius equal to the magnitude of the charge order wave vector, qCO. We also demonstrate how this phonon-tracking procedure provides the necessary experimental precision to rule out fluctuations of short-range directional charge order (i.e. centered around [qx = ±qCO, qy = 0] and [qx = 0, qy = ±qCO]) as the origin of the observed correlations.
INTRODUCTION

Dynamic fluctuations from periodic charge order (CO) pervade the phase diagram of cuprate superconductors, perhaps even more than superconductivity itself [1]. The detection of these fluctuations over energy and momentum was enabled by several recent advances in the energy resolution of resonant inelastic x-ray scattering (RIXS) instruments operating in the soft x-ray regime. In the case of YBa2Cu3O6+δ, Cu-L3 RIXS detects dynamic correlations at the charge order wavevector, qCO, with a characteristic energy scale of approximately 20 meV [2]. It has been proposed that these low-energy short-range dynamic charge order correlations are a key ingredient of the strange metal behavior [3, 4] characterized by linear-in-temperature resistivity [5, 6]. On one hand, this temperature behavior is often associated with an isotropic scattering rate that depends only on temperature in units of energy and Planck’s constant (i.e. ∝ kBT/ℏ, sometimes called the Planckian regime) [7–11], as supported by recent angle-dependent magnetoresistance measurements of La1.6−xNd0.4SrxCuO4 [12]. On the other hand, combined transport and RIXS studies have recently shown an unexpected link between linear-in-temperature resistivity and charge order in YBa2Cu3O6+δ [13, 14]. Taken together, these latest results suggest that fluctuations of the charge order should somehow result in an effective isotropic scattering. Still, high-resolution RIXS experiments have largely focused on the fluctuations along the high-symmetry crystallographic directions only, leaving the full structure of electron correlations within the copper oxide plane unknown.

Recently, in Bi2Sr2CaCu2O8+δ (Bi-2212), RIXS measurements found the existence of a quasi-circular pattern in the qx-qy plane at finite energies and with the same wave vector magnitude as that of the observed static charge order peak at q = [qx = ±qCO, qy = 0] and [qx = 0, qy = ±qCO] – i.e. dynamic correlations with charge order wavelength along all directions in the CuO2 plane [15]. Although the medium energy resolution of those measurements (∆E ≈ 0.8 eV) precluded a more precise determination of their energy profile, the results suggested that these quasi-circular dynamic correlations (QCDCs) appear broad over the mid-infrared range (defined approximately as 100 to 900 meV). This scattering manifold, which may result from combined short- and long-range Coulomb interactions [15–17], would provide a large variety of wave vectors for connecting all points of the Fermi surface (i.e. an effective isotropic scattering). However, it is not yet experimentally known if this manifold extends to electron scattering at lower energies, in the quasi-elastic regime. To experimentally investigate this scenario we used high energy-resolution (≈ 37 meV) Cu-L3 RIXS qx-qy mapping of the electronic correlations in Bi-2212. Using the softening of the bond-stretching (BS) phonon in RIXS as a marker of charge order correlations, our measurements reveal the presence of low-energy quasi-circular dynamic electronic correlations with |q| ≈ qCO.
RESULTS

High-resolution RIXS mapping of dynamic correlations in the qx-qy plane

We performed measurements at φ = 0◦, 25◦, 30◦, 35◦, 45◦, where φ is defined as the azimuthal angle from the qx axis. For each φ, we acquired RIXS spectra at different values of in-plane momentum transfer q = |q| by varying the incident angle on the sample. Throughout the paper, values of q are reported in reciprocal lattice units (r.l.u.), where one r.l.u. is defined as 2π/a and a = 3.82 Å (the lattice constant along φ = 0◦). In Fig. 1 (A and B), we show representative spectra obtained at q near qCO for φ = 0◦ and 30◦, and energies below 1.1 eV. In these two cases, the minimal model that fits the data includes five contributions: a quasi-elastic peak, a bond-stretching phonon peak at ≈ 70 meV, a peak at ≈ 135 meV (likely from a two-phonon process), a broad paramagnon and a broad background feature of unknown origin. A similar assessment can be made regarding all other high-resolution spectra acquired in this work. In this type of fitting analysis, the QCDCs are not explicitly accounted for, and it is generally difficult to disentangle overlapping contributions to the RIXS spectra using a fitting model with so many parameters, thus precluding the extraction of the exact spectral profile of the QCDCs with any reasonable confidence. Still, we note that this high-resolution data is consistent with the previously reported medium-resolution data [15], which can be verified by integration of the high-resolution data (see supplementary materials, Fig. S7).

It is likely that the spectral intensity of QCDCs in Bi-2212 is so dilute over energy as to preclude the extraction of their spectral structure amidst stronger paramagnon and phonon signals. Still, here we develop a different method to detect QCDCs at lower energies, by tracking the evolution of the bond-stretching (BS) phonon over the qx-qy plane. This method is based on the phenomenology revealed by several recent RIXS measurements of the cuprates along qx and qy, which indicate an apparent softening of the BS phonon peak at the momentum location of the static CO peak [18–23]. In the case of Bi-2212, it has been proposed that the apparent softening of the BS phonon in RIXS is due to an interplay between low-energy fluctuations of the charge order and BS phonons that results in a Fano-like interference [18, 21, 23, 24]. Another possibility is that the apparent softening is simply the result of the phonon peak and a low-energy charge order peak overlapping, as recently suggested by measurements of both YBa2Cu3O6+δ and Bi-2212 [25]. In either interpretation the location of the phonon softening can be used as a marker for low-energy charge order correlations.

Figure 1 (C and D) shows the spectra acquired as a function of q for φ = 0◦ and 30◦, respectively, focusing on the region of the BS phonon. At φ = 0◦, it is clear that the phonon peak position softens to its lowest energy value at q = qCO ≈ 0.29 r.l.u. (Fig. 1C). Careful observation of the spectra taken along φ = 30◦ shows a similar softening effect, with the lowest phonon energy position occurring for q ≈ qCO (Fig. 1D). Figure 2A shows the mapping of the BS phonon mode at φ = 0◦ and 30◦ obtained after subtraction of the fitted
[Figure 1: four panels (A–D); axes include energy loss (eV, meV), intensity (arb. units), and q (r.l.u.); graphical content not recoverable from the text extraction.]

FIG. 1. RIXS spectra and fitting. (A and B) Examples of spectra at q = 0.27 r.l.u. for φ = 0◦ and 30◦, respectively (open circles). The red lines are fits to the spectra, composed of a quasi-elastic peak (pink), a BS phonon peak at ≈ 70 meV (blue), a peak at ≈ 135 meV (likely from a two-phonon process) (purple), a broad paramagnon (orange) and a broad background feature of unknown origin (brown). (C and D) RIXS measured BS phonon peak for various values of q measured for φ = 0◦ and 30◦, respectively (black circles). The blue lines are the fits to the spectra. The vertical orange dashed lines, indicating the lowest phonon peak position at each φ, are shown to help the reader observe the phonon dispersions in the raw data.
elastic line, once again showing the softening of the RIXS phonon even at φ = 30◦. To precisely determine the locations of the softening in the qx-qy plane, we fit the spectra to extract the dispersion of the BS phonon for each φ (Fig. 2B). We observe a softening of the RIXS-measured phonon line for all φ, except for φ = 45◦. Remarkably, all observed softening occurs at a value of q ≈ qCO, precisely as expected for QCDCs at low energies.

Discriminating QCDCs from short-range directional order

Dynamic correlations emanating from short-range order are bound to be broad in q. It is therefore reasonable to ask whether the measured qx-qy profile of the BS phonon could
[Figure 2: panels A and B; axes include q (r.l.u.), energy loss (meV), and phonon energy (meV); graphical content not recoverable from the text extraction.]

FIG. 2. Location of low-energy dynamic correlations extracted from the phonon dispersion. (A) Energy-momentum structure of the excitations at φ = 0◦ and 30◦ after subtraction of the elastic line. The image is constructed from RIXS spectra deconvoluted from the energy resolution. (B) Location of the phonon peak obtained by fitting the RIXS spectra deconvoluted from energy resolution for different φ (see Materials and Methods and also Supplementary Materials, Fig. S3). The solid lines are obtained by fitting the q-dependence of the phonon peak (circles) with a negative Lorentzian function plus a linear background. The shaded regions around the solid lines are generated from the 95% confidence interval obtained for the various fits to the spectra (see Materials and Methods for details). The solid lines for φ = 0◦ and 30◦ in (B) appear as dashed white lines in (A).
simply be the result of diffuse scattering from short-range directional order. The fundamental difference between QCDCs and short-range directional order is that the former forms a manifold of dynamic correlations centered at q = qCO (similar to Brazovskii-type fluctuations [15, 26]), while the latter results in dynamic correlations around q = [qx = ±qCO, qy = 0] and q = [qx = 0, qy = ±qCO] (more details on M1 and M2 are provided in the Materials and Methods section). To contrast these scenarios we consider two simple toy models. In both cases we start with a flat |q|-independent phonon mode at 72 meV, which is a reasonable approximation given the small dispersion of the BS phonon in the absence of charge order [25, 27]. In the first model (M1) we construct the QCDCs scenario, where the q-cuts for
[Figure 3: panels A–E; axes include qx and qy (r.l.u.), energy (meV), and a polar plot in φ and q (r.l.u.); graphical content not recoverable from the text extraction.]

FIG. 3. Models of phonon softening for QCDCs and directional order. (A and C) Phonon dispersion for M1 and M2, as described in the text. (B and D) Momentum q cuts of the phonon dispersion at different φ for the simulated data in (A) and (C), respectively. The dashed orange and green lines in (A-D) identify the location of the phonon softening in the qx-qy plane. (E) Polar plot contrasting the M1 and M2 models (orange and green solid lines) and the experimental data (red symbols). The error bars in (E) are obtained from the fits to the phonon dispersion in Fig. 2B. See Materials and Methods for more details.
various φ always have a minimum located at q = qCO, Fig. 3B. In the second model (M2) we consider the case where dynamic charge order correlations emerge isotropically from static peaks at [qx = ±qCO, qy = 0] and [qx = 0, qy = ±qCO]. The corresponding phonon profile is shown in Fig. 3 (C and D). To roughly emulate the data we also introduce a φ-dependent phonon minimum in M1, which increases from φ = 0◦ to 45◦, Fig. 3A. However, note that the magnitude of the softening depends on the φ structure of the electron-phonon coupling, which is not known or necessary for discerning the two scenarios. The q-cuts show a qualitatively similar behavior in both models: a clear phonon softening at φ = 0◦ that continues to exist even as φ approaches 45◦. However, in M2 the q location of the phonon minima clearly decreases with increasing φ from 0◦ to 45◦. This comparison explains our selection of φ values for these studies: the experimental ability to differentiate between M1 and M2 is largest in the φ = 25◦ to 45◦ range. The polar plot in Fig. 3E summarizes the analysis, comparing the q-location of the minima for both models to the minima obtained from experiments (red markers). Within the error bars, the RIXS measurements are consistent with M1 and rule out M2, indicating the quasi-circular nature of the low-energy correlations associated with the charge order.
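The geometric distinction between the two scenarios can be made concrete in a few lines: along a radial cut at azimuth φ, a ring of radius qCO (the M1 scenario) always has its minimum at q = qCO, whereas correlations centred on the nearest axial peak (the M2 scenario) come closest at roughly q = qCO·cos(φ). The function names below are our own illustrative sketch, not code from the paper:

```python
import math

Q_CO = 0.29  # r.l.u., magnitude of the charge-order wave vector

def q_min_ring(phi_deg: float) -> float:
    """M1 (QCDC): the minimum sits on the ring |q| = qCO for every azimuth."""
    return Q_CO

def q_min_axial(phi_deg: float) -> float:
    """M2 (directional order): along a radial cut at azimuth phi, the point
    closest to the nearest axial peak at (qCO, 0) lies at q = qCO*cos(phi),
    measuring phi from the nearest crystallographic axis."""
    phi = math.radians(min(phi_deg, 90.0 - phi_deg))
    return Q_CO * math.cos(phi)

# Expected qualitative behaviour: M1 stays at 0.29 r.l.u. for all phi,
# while M2 drifts to smaller q as phi grows from 0 to 45 degrees.
for phi in (0, 25, 30, 35, 45):
    print(phi, round(q_min_ring(phi), 3), round(q_min_axial(phi), 3))
```

This reproduces the trend used to discriminate the models in Fig. 3E: only M2 predicts softening locations that move inward with φ.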
DISCUSSION

The experiments presented here provide evidence for the existence of quasi-circular dynamic correlations at low energies in underdoped Bi-2212, which could be a key ingredient for models that connect charge order to an effective isotropic scattering. Long-range translational symmetry breaking cannot be responsible for this isotropy due to the characteristic length scale and directionality of the ordered state. Although short-range electron correlations from directional order, occupying a much larger region of momentum space, could in principle emulate isotropic scattering [3, 4, 28], the QCDCs revealed by our experiments offer a different scenario. Extending not only around the static charge order wave vectors but also in the azimuthal direction, QCDCs might be a more viable platform for isotropic scattering. To fully understand the impact of QCDCs on the electronic properties of the cuprates, one requires knowledge of the energy structure of these correlations. Although this might still be beyond current experimental capabilities, our experiments provide some constraints on the low-energy structure of the QCDCs. In particular, for the φ values where a softening is detected, QCDCs must exist below ≈ 70 meV (i.e. the approximate energy of the bare phonon). Unfortunately, the amount of energy softening at q = qCO by itself, without knowledge of the φ-dependence of the electron-phonon interaction, does not provide more information about the energy structure of the QCDCs. Therefore, it remains possible that QCDCs at φ = 45◦ exist below 70 meV but do not significantly interact with the BS phonon.

The quasi-circular shape of the low-energy correlations is similar to the shape obtained from the analysis of higher-energy correlations (Ref. [15] and Supplementary Materials, Fig. S6). This similarity raises the possibility that the QCDCs exist up to much higher energies, of the order of 1 eV. As discussed in Ref. [15], the quasi-circular correlations cannot be explained by an instability of the Fermi surface. Instead, it was proposed that the location of the dynamic CO correlations in q-space is determined by the minima of the effective Coulomb interaction, which becomes non-monotonic in q due to the inclusion of a long-range Coulomb interaction. However, this non-monotonic Coulomb interaction by itself failed to capture the intensity anisotropy observed at q ≈ qCO. Likewise, here the same proposed Coulomb interaction could also explain the most salient feature of our data, namely the quasi-circular shape of the low-energy correlations. Recently, a more complete theoretical description based on a t-J model with long-range Coulomb interaction shows the presence of ring-like charge correlations with the correct intensity anisotropy [17]. The results presented here can serve as a guide for future theoretical investigations that also account for the apparent decrease of the phonon softening from φ = 0◦ to 45◦.

Beyond the fact that both the energy-integrated correlations [15] and the low-energy dynamic correlations appear to occupy the same quasi-circular scattering manifold, the current RIXS measurements do not provide further experimental evidence to connect these two phenomena. Such additional evidence may come from polarimetric RIXS experiments that are able to decompose charge and spin excitations in the mid-infrared range, as has been done for electron-doped cuprates [29]. Compared to the energy-integration procedure, the phonon-tracking method provides larger precision for mapping CO correlations in the qx-qy plane, since the large integration ranges required for the former result in very broad features in q-space. Indeed, we have already performed medium-resolution RIXS measurements that detect the presence of similar quasi-circular scattering manifolds in the energy-integrated spectrum of optimally and overdoped samples, but the investigation of their doping dependence is hindered by the large experimental uncertainty associated with the integration method (see Supplementary Materials, Fig. S6). Instead, our new procedure to track QCDCs using measurements of the RIXS BS phonon goes beyond demonstrating the existence of QCDCs in underdoped Bi-2212 at low energies. It is also a new methodology that can be used to detect QCDCs in other cuprates and to understand related phenomena, such as in the electron-doped cuprates, which also show quasi-circular scattering [30]. Finally, the application of this new method to multiple cuprate families at different dopings and/or temperatures will help unveil whether and how QCDCs and the strange metal are related.
MATERIALS AND METHODS

RIXS experiments

High-resolution RIXS experiments were performed at the I21 beamline [31] at Diamond Light Source, United Kingdom, and at the 2-ID beamline at the National Synchrotron Light Source II, Brookhaven National Laboratory, USA. The orientation of the crystal axes of the underdoped Bi2Sr2CaCu2O8+δ samples with Tc = 54 K was obtained by x-ray diffraction prior to the RIXS experiment. The samples were cleaved in air just moments before inserting them into the ultra-high-vacuum chambers. For experiments at I21, the crystal was aligned to the scattering geometry in situ from measurements of the 002 Bragg reflection and the b-axis superstructure peak. The scattering angle was fixed at 154◦ (I21) and 153◦ (2-ID). The incoming light was set to vertical polarization (σ geometry) at the Cu-L3 edge (≈ 931.5 eV). The combined energy resolution (FWHM) was about 37 meV (I21) and 40 meV (2-ID), with small variations (±3 meV) over the course of multiple days. In both cases the energy resolution was relaxed in a trade-off for intensity. The projection of the momentum transfer q onto the qx-qy plane was obtained by varying the incident angle on the sample (θ). All the measurements were performed at T = 54 K, which is the superconducting transition temperature for this sample, except for one measurement performed at T = 25 K, below Tc (Fig. 3E), and one measurement at 300 K (Supplementary Materials, Figs. S2 and S5).
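For orientation, the relation between the incident angle and the in-plane momentum transfer in this fixed-scattering-angle geometry follows from standard elastic x-ray kinematics. The sketch below uses our own grazing-angle convention and numerical constants; it is not code from the paper:

```python
import math

HC_EV_A = 12398.4    # hc in eV*Angstrom
E_PHOTON = 931.5     # Cu-L3 edge photon energy, eV
TWO_THETA = 154.0    # fixed scattering angle (I21 geometry), degrees
A_LATTICE = 3.82     # in-plane lattice constant, Angstrom

def q_parallel_rlu(theta_in_deg: float) -> float:
    """In-plane momentum transfer (r.l.u.) for an incidence angle theta_in
    measured from the sample surface; the exit angle is TWO_THETA - theta_in.
    At specular condition (theta_in = TWO_THETA/2) the projection vanishes."""
    k = 2.0 * math.pi * E_PHOTON / HC_EV_A               # photon momentum, 1/Angstrom
    theta_in = math.radians(theta_in_deg)
    theta_out = math.radians(TWO_THETA - theta_in_deg)
    q_par = k * (math.cos(theta_in) - math.cos(theta_out))  # 1/Angstrom
    return q_par * A_LATTICE / (2.0 * math.pi)           # convert to r.l.u.
```

With these constants, grazing incidence near 20° gives q of roughly 0.47 r.l.u., consistent with the largest q values shown in Fig. 1.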
Analysis of RIXS spectra

To ensure the robustness of the extraction of the phonon dispersion from the RIXS spectra, we analyzed the data using multiple methods. Although the overall RIXS cross-section may depend on φ, we did not perform any normalization or intensity-correction procedure on the spectra, since the energy location of the phonon does not depend on the overall intensity. A comparison between the results for the different methods is available in the supplementary materials, Fig. S4.

Method 1: In an effort to maintain an agnostic approach and to not assume particular functional forms of the different contributions to the RIXS spectra, we extracted the dispersion by simply tracking the energy positions of the phonon peak maximum in the RIXS spectra deconvoluted from the energy resolution. See below for details of the deconvolution procedure.

Method 2: The phonon dispersion shown in Fig. 2B was extracted by fitting the deconvoluted RIXS spectra (see Fig. 2A and supplementary materials, Fig. S3) in the [-30, 130] meV range to a double Gaussian function plus a second-order polynomial background, keeping all parameters free. The shaded regions around the solid lines in Fig. 2B were generated by fitting the 95% confidence intervals (obtained from the fits) to a polynomial function of q.
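The Method-1 peak tracking reduces to an argmax within an energy window around the phonon. A minimal sketch (the 40–100 meV window bounds are our illustrative choice, not values from the paper):

```python
def peak_position(energies, intensities, window=(0.04, 0.10)):
    """Method-1-style tracker: return the energy (same units as `energies`)
    of the maximum intensity inside a fixed window bracketing the BS phonon
    (here assumed to be 40-100 meV, i.e. 0.04-0.10 eV)."""
    best_e, best_i = None, -float("inf")
    for e, i in zip(energies, intensities):
        if window[0] <= e <= window[1] and i > best_i:
            best_e, best_i = e, i
    return best_e
```

Applied to each deconvoluted spectrum in turn, this yields the phonon peak position as a function of q without assuming any lineshape.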
472
+
473
+ 11
474
+ Method 3: Following previous works [23, 32], the raw RIXS spectra were fit to a five
475
+ component model that includes a Gaussian (elastic peak of amplitude Ael, position ωel and
476
+ width wel), two anti-Lorentzians (phonon and double-phonon peaks of different amplitude
477
+ Ai and position ωi, and sharing width wph and Fano parameter width qF – note that i=1,2
478
+ indicates the first and second phonon, respectively), a damped harmonic oscillator lineshape
479
+ (paramagnon of amplitude Apm, position ωpm and damping parameter γpm), and an error
480
+ function (smooth background described by an error function with amplitude amplitude ABG,
481
+ position ωBG and width wBG):
482
+ f(ω) = A_el e^{−(ω−ω_el)²/w_el²} + Σ_{i=1,2} A_i [2(ω − ω_i)/w_ph + q_F] / {[2(ω − ω_i)/w_ph]² + 1}
+        + A_pm γ_pm ω / [(ω² − ω_pm²)² + 4γ_pm² ω²] + A_BG [erf((ω − ω_BG)/w_BG) + 1]    (1)
+ The fitting model is convolved with the RIXS energy resolution (∼37 meV). From this
506
+ analysis we extracted the phonon dispersion for each φ that quantitatively matches the
507
+ phonon dispersion shown in Fig. 2B. All parameters are kept free, except for ωBG, which is
508
+ constrained within a range of [0.2, 0.6] eV.
509
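Equation (1) translates directly into code. The sketch below mirrors the five components term by term (the convolution with the ∼37 meV resolution function, applied before comparing with the raw spectra, is omitted here):

```python
import numpy as np
from scipy.special import erf

def rixs_model(w, A_el, om_el, w_el,
               A1, om1, A2, om2, w_ph, qF,
               A_pm, om_pm, g_pm,
               A_BG, om_BG, w_BG):
    """Eq. (1): elastic Gaussian + two Fano (anti-Lorentzian) phonons
    + damped-harmonic-oscillator paramagnon + error-function background."""
    out = A_el * np.exp(-((w - om_el) / w_el) ** 2)
    for Ai, omi in ((A1, om1), (A2, om2)):
        x = 2.0 * (w - omi) / w_ph
        out = out + Ai * (x + qF) / (x**2 + 1.0)
    out = out + A_pm * g_pm * w / ((w**2 - om_pm**2) ** 2 + 4.0 * g_pm**2 * w**2)
    return out + A_BG * (erf((w - om_BG) / w_BG) + 1.0)
```

The model would then be convolved with a normalized Gaussian resolution kernel (e.g. via np.convolve) before fitting.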
+ Fitting the phonon dispersion
510
+ To obtain a phenomenological form of the dispersions in Fig. 2B, the extracted peak locations
511
+ as a function of q were fit to a linear background plus a negative Lorentzian function. From
512
+ this fit we obtain the q location of the softening (red markers in Fig. 3E). To obtain the
513
+ error bars in Fig. 3E, we follow a conservative approach by taking the average of the two
514
+ q-intercepts of the fitted curve at Emin + 2 meV, where Emin is the lowest energy of the
515
+ dispersion and ±2 meV is the typical amount of scatter observed in the data. For φ = 45◦
516
+ the data is fit to a line.
517
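A sketch of this dispersion fit and of the Emin + 2 meV intercept construction (synthetic dispersion values; the two bracketing roots are found with scipy):

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def disp(q, a, b, A, q0, G):
    """Linear background minus a negative-Lorentzian softening at q0."""
    return a + b * q - A / (((q - q0) / G) ** 2 + 1.0)

q = np.linspace(0.10, 0.45, 40)
E = disp(q, 72.0, 0.0, 15.0, 0.29, 0.065)        # toy dispersion (meV)
popt, _ = curve_fit(disp, q, E, p0=[70, 0, 10, 0.30, 0.05])

q_soft = popt[3]                                  # q location of the softening
E_min = disp(q_soft, *popt)
g = lambda x: disp(x, *popt) - (E_min + 2.0)      # fitted curve at Emin + 2 meV
q_lo, q_hi = brentq(g, q[0], q_soft), brentq(g, q_soft, q[-1])
```

Here q_lo and q_hi are the two q-intercepts of the fitted curve at Emin + 2 meV used to set the error bars.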
+ Deconvolution procedure
518
+ We employed the Lucy-Richardson deconvolution procedure [33] to deconvolve the energy
519
+ resolution (∼37 meV) from the RIXS spectra (deconvoluted curves for all azimuths are
520
+ displayed in the supplementary materials, Fig. S3). The number of iterations and region
521
+ of interest of the deconvolution procedure were optimized by ensuring that the convolution
522
+ of the deconvoluted curves reproduced the raw data.
523
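A minimal 1D Lucy-Richardson iteration illustrating the procedure (a generic sketch, not the exact implementation used); the stopping check mirrors the text: iterate until re-convolving the estimate with the resolution function reproduces the raw data:

```python
import numpy as np

def lucy_richardson(data, psf, n_iter=200):
    """1D Lucy-Richardson deconvolution for non-negative data."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(data, data.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        est = est * np.convolve(data / np.maximum(conv, 1e-12), psf_flip, mode="same")
    return est

# toy check: broaden a sharp peak, deconvolve, then re-convolve
x = np.arange(201.0)
true = np.exp(-((x - 100.0) / 3.0) ** 2)
psf = np.exp(-(np.arange(-20.0, 21.0) / 8.0) ** 2)
data = np.convolve(true, psf / psf.sum(), mode="same")
est = lucy_richardson(data, psf)
recon = np.convolve(est, psf / psf.sum(), mode="same")  # should match `data`
```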
+ Model simulations
524
+ The toy models M1 and M2 are purely phenomenological. For M1, the phonon dispersion
525
+ was modeled as:
528
+ E = E0 − ξ(φ) ∆ / [((q − q0)/Γ)² + 1]    (2)
+ where E0 = 72 meV, ∆ = 30 meV, Γ = 0.065 r.l.u., q0 = 0.29 r.l.u. and ξ(φ) = (| cos(2φ)| +
534
+ 0.08)/(1.08). For M2, the phonon dispersion was modeled as:
535
+ E = E0 − Σ_{i=1}^{4} ∆ / [((q − qi)/Γ)² + 1]    (3)
+ where E0 = 74 meV, ∆ = 30 meV, Γ = 0.065 r.l.u. and qi are the four peaks located at
544
+ [qx = ±qCO, qy = 0] and [qx = 0, qy = ±qCO] (qCO = 0.29 r.l.u.)
545
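Both toy models can be evaluated directly. The sketch below assumes, for M2, that q − qi in Eq. (3) denotes the in-plane distance between the 2D momentum (q cos φ, q sin φ) and each of the four charge-order peaks:

```python
import numpy as np

def M1(q, phi, E0=72.0, D=30.0, G=0.065, q0=0.29):
    """Eq. (2): one dip of depth D*xi(phi) at q = q0 (phi in radians)."""
    xi = (np.abs(np.cos(2.0 * phi)) + 0.08) / 1.08
    return E0 - xi * D / (((q - q0) / G) ** 2 + 1.0)

def M2(q, phi, E0=74.0, D=30.0, G=0.065, qCO=0.29):
    """Eq. (3): four dips at (+-qCO, 0) and (0, +-qCO) in the 2D plane."""
    qx, qy = q * np.cos(phi), q * np.sin(phi)
    E = np.full_like(np.asarray(q, dtype=float), E0)
    for px, py in ((qCO, 0.0), (-qCO, 0.0), (0.0, qCO), (0.0, -qCO)):
        d = np.hypot(qx - px, qy - py)
        E = E - D / ((d / G) ** 2 + 1.0)
    return E
```

At φ = 0 and q = q0, M1 gives E0 − ∆ = 42 meV, i.e. the full 30 meV softening.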
+ Acknowledgments
546
+ We acknowledge the Diamond Light Source for time on beamline I21-RIXS under propos-
547
+ als MM28523 and MM30146. This research used resources of the Advanced Light Source,
548
+ a DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. This re-
549
+ search used beamline 2-ID of the National Synchrotron Light Source II, a U.S. Department
550
+ of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by
551
+ Brookhaven National Laboratory under Contract No. DE-SC0012704. We especially ac-
552
+ knowledge the incredible work done by the beamline staffs at I-21, 2-ID and at 8.0.1 qRIXS,
553
+ to allow many of these experiments to be performed remotely during the COVID pandemic.
554
+ This work was supported by the Alfred P. Sloan Fellowship (E.H.d.S.N.). E.H.d.S.N. ac-
555
+ knowledges support by the National Science Foundation under Grant No. 2034345. A.F.K.
556
+ was supported by the National Science Foundation under grant no. DMR-1752713. F.B.
557
+ acknowledges support from the Fonds de recherche du Qu´ebec – Nature et technologies
558
+ (FRQNT) and the Natural Sciences and Engineering Research Council of Canada (NSERC).
559
+ A.F. was supported by the Research Corporation for Science Advancement via the Cottrell
560
+ Scholar Award (27551) and the CIFAR Azrieli Global Scholars program. This material is
561
+ based upon work supported by the National Science Foundation under Grant No. DMR-
562
+ 2145080. The synthesis work at Brookhaven National Laboratory was supported by the US
563
+ Department of Energy, office of Basic Energy Sciences, contract no. DOE-SC0012704.
564
+ Supplementary Materials
567
+ [Figure S1: two panels of Intensity (arb. units) vs. Energy Loss (eV) at ϕ=0° and ϕ=30°; axis tick labels omitted.]
+ FIG. S1. Fits to the spectra in Fig. 2. The two panels show the fit of the RIXS spectra over a
595
+ wide energy range using Method (3) as detailed in the Materials and Methods Section of the main
596
+ text.
597
+
598
+ [Figure S2: (A) waterfalls of RIXS Intensity (arb. units) vs. Energy Loss (eV) at ϕ=0° and ϕ=30°; (B) Phonon Dispersion (meV) vs. q (r.l.u.) at ϕ=0°, 30°, 45° for 54 K and 300 K; axis tick labels omitted.]
+ FIG. S2. Data obtained at the 2-ID beamline at NSLS-II. (A) RIXS spectra measured as a
643
+ function of q for different φ at 54 K. The dashed line is a guide to the eye highlighting the phonon
644
+ softening, already visible in the raw data at both φ = 0◦ and φ = 30◦ without the need for any
645
+ deconvolution. (B) Phonon dispersion obtained by using Method (2) as detailed in the Materials
646
+ and Methods Section of the main text.
647
+
648
+ [Figure S3: five panels of deconvoluted RIXS Intensity (arb. units) vs. Energy Loss (eV) at ϕ = 0°, 25°, 30°, 35°, 45°; axis tick labels omitted.]
+ FIG. S3.
711
+ Waterfall plot of the RIXS spectra deconvoluted for energy resolution
712
+ (37 meV) for different φ.
713
+ The softening of the BS phonon is clearly visible for any φ (ex-
714
+ cept φ=45o) by simple visual inspection of the deconvoluted curves (red dash lines are guides to
715
+ the eye).
716
+
717
+ [Figure S4: (A, B) Phonon Dispersion (meV) vs. q (r.l.u.) at ϕ = 0°, 25°, 30°, 35°, 45°, comparing fit methods (I), (II), (III); axis tick labels omitted.]
+ FIG. S4. Comparison between three different methods for the extraction of the phonon
780
+ dispersion. Methods (1), (2) and (3) are detailed in the Materials and Methods section of the
781
+ main text.
782
+
783
+ [Figure S5: polar plot of q (r.l.u.) vs. azimuth φ, comparing models M1 and M2 with I21 (T=Tc, T<Tc) and 2-ID (T=Tc, T>Tc) data; axis tick labels omitted.]
+ FIG. S5. Comparison of models to experiments at 2-ID and I21. Polar plot contrasting
800
+ M1, M2 models (orange and green solid lines) and the experimental data (blue and red symbols).
801
+ The error bars are obtained from the fits to the phonon dispersion as described in the Materials and
802
+ Methods section. On the left side, the model was adjusted for a higher value of qCO for comparison
803
+ with the data obtained at 2-ID. The data at 2-ID is consistent with M1 and not with M2. In the
804
+ main text only the data from I21 is shown because for those experiments the sample crystal axes
805
+ could be aligned in situ from structural diffraction peaks by using the photodiode detector in that
806
+ chamber. See Materials and Methods section for details.
807
+ Medium resolution RIXS
810
+ In Fig. S6 we show the results of medium resolution RIXS done at the qRIXS endstation
811
+ at the Advanced Light Source in the Lawrence Berkeley National Laboratory. The data
812
+ were obtained by integrating the RIXS spectra over the −0.5 to 0.7 eV energy window and
813
+ normalizing them by spectra integrated over all energies, which allows a comparison between
814
+ the three different dopings. The data was also symmetrized about the high-symmetry φ =
815
+ 45◦ direction. In Fig. S6(D-I) the solid lines are fits of a Gaussian function plus a linear
816
+ background to the data. The maps in Fig. S6(A-C) were generated from the fits in Fig. S6(G-
817
+ I), respectively.
818
+ The gray bars in Fig. S6(D-F) are centered at the average radii of the
819
+ correlations, obtained from averaging over φ the peak positions obtained from the fits in
820
+ Fig. S6(G-I). The widths of the grey bars in Fig. S6(D and E) are obtained from the 95%
821
+ confidence intervals from the fits in Fig. S6(G and H), added in quadrature. The same
+ procedure underestimates the uncertainty for the Tc = 54 K sample. Instead, the width of the
+ grey bar in Fig. S6(F) is calculated by taking the smallest and largest peak positions over
+ all φ, taking into account the 95% confidence intervals from the fits in Fig. S6(I). The
+ data used to generate Fig. S6(C, F and I) were used in a previous publication [Boschini et
+ al., Nat. Comm. 12, 1–8 (2021)]. The new data follows the same experimental procedures as the previously published
828
+ data, so we direct the reader to [Boschini et al., Nat. Comm. 12, 1–8 (2021)] for further details
829
+ of the experimental procedure.
830
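The normalization described above — the spectral weight inside the −0.5 to 0.7 eV window divided by the total — can be sketched as a simple ratio of sums on a uniform energy grid (variable names are illustrative):

```python
import numpy as np

def windowed_weight(energy, spectrum, lo=-0.5, hi=0.7):
    """Fraction of total spectral weight inside [lo, hi] eV
    (Riemann sums on a uniform energy grid)."""
    m = (energy >= lo) & (energy <= hi)
    return spectrum[m].sum() / spectrum.sum()

# toy check: a flat spectrum over [-1, 1] eV has 60% of its weight in the window
e = np.linspace(-1.0, 1.0, 2001)
frac = windowed_weight(e, np.ones_like(e))
```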
+
831
+ [Figure S6: nine panels (A–I) of normalized energy-integrated RIXS Intensity (a.u.) vs. q (r.l.u.) for Overdoped (Tc = 60 K), Optimally doped (Tc = 91 K) and Underdoped (Tc = 54 K) samples, for azimuths between −10° and 100°; axis tick labels omitted.]
+ FIG. S6. Doping dependence from medium resolution RIXS. (A-C) Normalized energy-
1026
+ integrated RIXS mapping showing high energy quasi-circular electron correlations in overdoped,
1027
+ optimally doped and underdoped samples, respectively. (D-F) q-cuts integrated over different φ
1028
+ ranges, as specified in the legends. (G-I) The normalized energy-integrated RIXS data used
+ to generate (A and D), (B and E) and (C and F).
1030
+
1031
+ [Figure S7: (A) RIXS Intensity (arb. units) vs. Energy Loss (eV) at ϕ=30°, with the integration window shaded; (B) normalized energy-integrated intensity vs. q (r.l.u.) at ϕ = 0°, 25°, 30°, 35°, 45°; axis tick labels omitted.]
+ FIG. S7. Energy-integrated RIXS maps from high resolution RIXS on Bi2212 under-
1061
+ doped Tc=54 K at I21. (A) Energy-loss RIXS spectrum for (q,φ)=(0.28 r.l.u., 30◦). The orange
+ shadow highlights the energy integration window [0.1,0.7] eV. (B) q-cuts of the RIXS spectra in-
1063
+ tegrated over the energy regions highlighted in A and normalized by the total energy-integrated
1064
+ RIXS signal. The overall q-dependence and position of the maximum are similar to what is observed via
1065
+ medium resolution RIXS (see Fig. S6(F and I)). The pink bar reproduces the grey bar in Fig. S6(F),
1066
+ which is obtained from the analysis of the correlations observed with medium resolution RIXS for
1067
+ underdoped Bi2212 (Tc=54 K).
1068
+ [1] Arpaia, R. & Ghiringhelli, G. Charge Order at High Temperature in Cuprate Superconductors.
1069
+ Journal of the Physical Society of Japan 90, 111005 (2021). URL https://doi.org/10.7566/
1070
+ JPSJ.90.111005.
1071
+ [2] Arpaia, R., Caprara, S., Fumagalli, R., De Vecchi, G., Peng, Y. Y., Andersson, E., Betto,
1072
+ D., De Luca, G. M., Brookes, N. B., Lombardi, F., Salluzzo, M., Braicovich, L., Di Castro,
1073
+ C., Grilli, M. & Ghiringhelli, G. Dynamical charge density fluctuations pervading the phase
1074
+
1075
+ 21
1076
+ diagram of a Cu-based high-Tc superconductor. Science 365, 906–910 (2019). URL https:
1077
+ //science.sciencemag.org/content/365/6456/906.
1078
+ [3] Seibold, G., Arpaia, R., Peng, Y. Y., Fumagalli, R., Braicovich, L., Di Castro, C., Grilli, M.,
1079
+ Ghiringhelli, G. C. & Caprara, S. Strange metal behaviour from charge density fluctuations in
1080
+ cuprates. Communications Physics 4, 1–6 (2021). URL https://www.nature.com/articles/
1081
+ s42005-020-00505-z.
1082
+ [4] Caprara, S., Castro, C. D., Mirarchi, G., Seibold, G. & Grilli, M. Dissipation-driven strange
1083
+ metal behavior. Communications Physics 5, 1–7 (2022). URL https://www.nature.com/
1084
+ articles/s42005-021-00786-y.
1085
+ [5] Gurvitch, M. & Fiory, A. T. Resistivity of La1.825Sr0.175CuO4 and YBa2Cu3O7 to 1100 K:
1086
+ Absence of saturation and its implications. Phys. Rev. Lett. 59, 1337–1340 (1987). URL
1087
+ https://link.aps.org/doi/10.1103/PhysRevLett.59.1337.
1088
+ [6] Martin, S., Fiory, A. T., Fleming, R. M., Schneemeyer, L. F. & Waszczak, J. V. Normal-state
1089
+ transport properties of Bi2+xSr2−yCuO6+δ crystals. Phys. Rev. B 41, 846–849 (1990). URL
1090
+ https://link.aps.org/doi/10.1103/PhysRevB.41.846.
1091
+ [7] Varma, C. M., Littlewood, P. B., Schmitt-Rink, S., Abrahams, E. & Ruckenstein, A. E.
1092
+ Phenomenology of the normal state of Cu-O high-temperature superconductors. Phys. Rev.
1093
+ Lett. 63, 1996–1999 (1989). URL https://link.aps.org/doi/10.1103/PhysRevLett.63.
1094
+ 1996.
1095
+ [8] Aji, V. & Varma, C. M. Theory of the Quantum Critical Fluctuations in Cuprate Supercon-
1096
+ ductors. Phys. Rev. Lett. 99, 067003 (2007). URL https://link.aps.org/doi/10.1103/
1097
+ PhysRevLett.99.067003.
1098
+ [9] Patel, A. A., McGreevy, J., Arovas, D. P. & Sachdev, S. Magnetotransport in a Model of a
1099
+ Disordered Strange Metal. Phys. Rev. X 8, 021049 (2018). URL https://link.aps.org/
1100
+ doi/10.1103/PhysRevX.8.021049.
1101
+ [10] Patel, A. A. & Sachdev, S. Theory of a Planckian Metal. Phys. Rev. Lett. 123, 066601 (2019).
1102
+ URL https://link.aps.org/doi/10.1103/PhysRevLett.123.066601.
1103
+ [11] Phillips, P. W., Hussey, N. E. & Abbamonte, P. Stranger than metals. Science 377, eabh4273
1104
+ (2022). URL https://www.science.org/doi/abs/10.1126/science.abh4273.
1105
+ [12] Grissonnanche, G., Fang, Y., Legros, A., Verret, S., Lalibert´e, F., Collignon, C., Zhou, J.,
1106
+ Graf, D., Goddard, P. A., Taillefer, L. & Ramshaw, B. J. Linear-in temperature resistivity
1107
+
1108
+ 22
1109
+ from an isotropic Planckian scattering rate.
1110
+ Nature 595, 667–672 (2021).
1111
+ URL https:
1112
+ //www.nature.com/articles/s41586-021-03697-8.
1113
+ [13] Wahlberg, E., Arpaia, R., Seibold, G., Rossi, M., Fumagalli, R., Trabaldo, E., Brookes,
1114
+ N. B., Braicovich, L., Caprara, S., Gran, U., Ghiringhelli, G., Bauch, T. & Lombardi, F.
1115
+ Restored strange metal phase through suppression of charge density waves in underdoped
1116
+ YBa2Cu3O7−δ. Science 373, 1506–1510 (2021). URL https://www.science.org/doi/abs/
1117
+ 10.1126/science.abc8372.
1118
+ [14] Le Tacon, M. Strange bedfellows inside a superconductor. Science 373, 1438–1439 (2021).
1119
+ URL https://www.science.org/doi/abs/10.1126/science.abi9685.
1120
+ [15] Boschini, F., Minola, M., Sutarto, R., Schierle, E., Bluschke, M., Das, S., Yang, Y., Michiardi,
1121
+ M., Shao, Y. C., Feng, X., Ono, S., Zhong, R. D., Schneeloch, J. A., Gu, G. D., Weschke, E.,
1122
+ He, F., Chuang, Y. D., Keimer, B., Damascelli, A., Frano, A. & da Silva Neto, E. H. Dynamic
1123
+ electron correlations with charge order wavelength along all directions in the copper oxide
1124
+ plane. Nature communications 12, 1–8 (2021). URL https://www.nature.com/articles/
1125
+ s41467-020-20824-7.
1126
+ [16] Yamase, H., Bejas, M. & Greco, A. Electron self-energy from quantum charge fluctuations
1127
+ in the layered t − J model with long-range Coulomb interaction. Phys. Rev. B 104, 045141
1128
+ (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.045141.
1129
+ [17] Bejas, M., Zeyher, R. & Greco, A. Ring-like shaped charge modulations in the t−J model
1130
+ with long-range Coulomb interaction. Phys. Rev. B 106, 224512 (2022). URL https://link.
1131
+ aps.org/doi/10.1103/PhysRevB.106.224512.
1132
+ [18] Chaix, L., Ghiringhelli, G., Peng, Y. Y., Hashimoto, M., Moritz, B., Kummer, K., Brookes,
1133
+ N. B., He, Y., Chen, S., Ishida, S., Yoshida, Y., Eisaki, H., Salluzzo, M., Braicovich, L.,
1134
+ Shen, Z. X., Devereaux, T. P. & Lee, W. S. Dispersive charge density wave excitations in
1135
+ Bi2Sr2CaCu2O8+δ. Nature Physics 13, 952–956 (2017). URL https://doi.org/10.1038/
1136
+ nphys4157.
1137
+ [19] Lin, J. Q., Miao, H., Mazzone, D. G., Gu, G. D., Nag, A., Walters, A. C., Garc´ıa-Fern´andez,
1138
+ M., Barbour, A., Pelliciari, J., Jarrige, I., Oda, M., Kurosawa, K., Momono, N., Zhou, K.-
1139
+ J., Bisogni, V., Liu, X. & Dean, M. P. M.
1140
+ Strongly Correlated Charge Density Wave in
1141
+ La2−xSrxCuO4 Evidenced by Doping-Dependent Phonon Anomaly.
1142
+ Phys. Rev. Lett. 124,
1143
+ 207005 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.124.207005.
1144
+
1145
+ 23
1146
+ [20] Peng, Y. Y., Husain, A. A., Mitrano, M., Sun, S. X.-L., Johnson, T. A., Zakrzewski, A. V.,
1147
+ MacDougall, G. J., Barbour, A., Jarrige, I., Bisogni, V. & Abbamonte, P. Enhanced Electron-
1148
+ Phonon Coupling for Charge-Density-Wave Formation in La1.8−xEu0.2SrxCuO4+δ. Phys. Rev.
1149
+ Lett. 125, 097002 (2020).
1150
+ URL https://link.aps.org/doi/10.1103/PhysRevLett.125.
1151
+ 097002.
1152
+ [21] Li, J., Nag, A., Pelliciari, J., Robarts, H., Walters, A., Garcia-Fernandez, M., Eisaki, H.,
1153
+ Song, D., Ding, H., Johnston, S., Comin, R. & Zhou, K.-J.
1154
+ Multiorbital charge-density
1155
+ wave excitations and concomitant phonon anomalies in Bi2Sr2LaCuO6+δ. Proceedings of the
1156
+ National Academy of Sciences 117, 16219–16225 (2020). URL https://www.pnas.org/doi/
1157
+ abs/10.1073/pnas.2001755117.
1158
+ [22] Wang, Q., von Arx, K., Horio, M., Mukkattukavil, D. J., K¨uspert, J., Sassa, Y., Schmitt, T.,
1159
+ Nag, A., Pyon, S., Takayama, T., Takagi, H., Garcia-Fernandez, M., Zhou, K.-J. & Chang,
1160
+ J.
1161
+ Charge order lock-in by electron-phonon coupling in La1.675Eu0.2Sr0.125CuO4.
1162
+ Science
1163
+ Advances 7, eabg7394 (2021). URL https://www.science.org/doi/abs/10.1126/sciadv.
1164
+ abg7394.
1165
+ [23] Lee, W. S., Zhou, K.-J., Hepting, M., Li, J., Nag, A., Walters, A. C., Garcia-Fernandez,
1166
+ M., Robarts, H. C., Hashimoto, M., Lu, H., Nosarzewski, B., Song, D., Eisaki, H., Shen,
1167
+ Z. X., Moritz, B., Zaanen, J. & Devereaux, T. P. Spectroscopic fingerprint of charge order
1168
+ melting driven by quantum fluctuations in a cuprate. Nature Physics 17, 53–57 (2021). URL
1169
+ https://www.nature.com/articles/s41567-020-0993-7.
1170
+ [24] Lu, H., Hashimoto, M., Chen, S.-D., Ishida, S., Song, D., Eisaki, H., Nag, A., Garcia-
1171
+ Fernandez, M., Arpaia, R., Ghiringhelli, G., Braicovich, L., Zaanen, J., Moritz, B., Kummer,
1172
+ K., Brookes, N. B., Zhou, K.-J., Shen, Z.-X., Devereaux, T. P. & Lee, W.-S. Identification of
1173
+ a characteristic doping for charge order phenomena in Bi-2212 cuprates via RIXS. Phys. Rev.
1174
+ B 106, 155109 (2022). URL https://link.aps.org/doi/10.1103/PhysRevB.106.155109.
1175
+ [25] Arpaia, R., Martinelli, L., Sala, M. M., Caprara, S., Nag, A., Brookes, N. B., Camisa, P.,
1176
+ Li, Q., Gao, Q., Zhou, X., Garcia-Fernandez, M., Zhou, K. J., Schierle, E., Bauch, T., Peng,
1177
+ Y. Y., Di Castro, C., Grilli, M., Lombardi, F., Braicovich, L. & Ghiringhelli, G. Signature
1178
+ of quantum criticality in cuprates by charge density fluctuations. arXiv 2208.13918 (2022).
1179
+ URL https://arxiv.org/abs/2208.13918.
1180
+ [26] Brazovskii, S. A. Phase transition of an isotropic system to a nonuniform state. Sov. Phys.
1181
+
1182
+ 24
1183
+ JETP 41, 85–89 (1974). URL http://www.jetp.ac.ru/cgi-bin/e/index/e/41/1/p85?a=
1184
+ list.
1185
+ [27] Braicovich, L., Rossi, M., Fumagalli, R., Peng, Y., Wang, Y., Arpaia, R., Betto, D., De Luca,
1186
+ G. M., Di Castro, D., Kummer, K., Moretti Sala, M., Pagetti, M., Balestrino, G., Brookes,
1187
+ N. B., Salluzzo, M., Johnston, S., van den Brink, J. & Ghiringhelli, G.
1188
+ Determining the
1189
+ electron-phonon coupling in superconducting cuprates by resonant inelastic x-ray scattering:
1190
+ Methods and results on Nd1+xBa2−xCu3O7−δ. Phys. Rev. Research 2, 023231 (2020). URL
1191
+ https://link.aps.org/doi/10.1103/PhysRevResearch.2.023231.
1192
+ [28] Abanov, A., Chubukov, A. V. & Schmalian, J. Quantum-critical theory of the spin-fermion
1193
+ model and its application to cuprates: Normal state analysis. Advances in Physics 52, 119–218
1194
+ (2003). URL https://doi.org/10.1080/0001873021000057123.
1195
+ [29] da Silva Neto, E. H., Minola, M., Yu, B., Tabis, W., Bluschke, M., Unruh, D., Suzuki, H.,
1196
+ Li, Y., Yu, G., Betto, D., Kummer, K., Yakhou, F., Brookes, N. B., Le Tacon, M., Greven,
1197
+ M., Keimer, B. & Damascelli, A.
1198
+ Coupling between dynamic magnetic and charge-order
1199
+ correlations in the cuprate superconductor Nd2−xCexCuO4. Phys. Rev. B 98, 161114 (2018).
1200
+ URL https://link.aps.org/doi/10.1103/PhysRevB.98.161114.
1201
+ [30] Kang, M., Pelliciari, J., Frano, A., Breznay, N., Schierle, E., Weschke, E., Sutarto, R., He,
1202
+ F., Shafer, P., Arenholz, E., Chen, M., Zhang, K., Ruiz, A., Hao, Z., Lewin, S., Analytis, J.,
1203
+ Krockenberger, Y., Yamamoto, H., Das, T. & Comin, R. Evolution of charge order topology
1204
+ across a magnetic phase transition in cuprate superconductors. Nature Physics 15, 335–340
1205
+ (2019). URL https://doi.org/10.1038/s41567-018-0401-8.
1206
+ [31] Zhou, K.-J., Walters, A., Garcia-Fernandez, M., Rice, T., Hand, M., Nag, A., Li, J., Agrestini,
1207
+ S., Garland, P., Wang, H., Alcock, S., Nistea, I., Nutter, B., Rubies, N., Knap, G., Gaughran,
1208
+ M., Yuan, F., Chang, P., Emmins, J. & Howell, G. I21: an advanced high-resolution reso-
1209
+ nant inelastic X-ray scattering beamline at Diamond Light Source. Journal of Synchrotron
1210
+ Radiation 29, 563–580 (2022). URL https://doi.org/10.1107/S1600577522000601.
1211
+ [32] Wang, L., He, G., Yang, Z., Garcia-Fernandez, M., Nag, A., Zhou, K., Minola, M., Tacon,
1212
+ M. L., Keimer, B., Peng, Y. et al. Paramagnons and high-temperature superconductivity in
1213
+ a model family of cuprates. Nature Communications 13, 3163 (2022). URL https://www.
1214
+ nature.com/articles/s41467-022-30918-z.
1215
+ [33] Yang, H.-B., Rameau, J., Johnson, P., Valla, T., Tsvelik, A. & Gu, G. Emergence of preformed
1216
+
1217
+ 25
1218
+ Cooper pairs from the doped Mott insulating state in Bi2Sr2CaCu2O8+δ. Nature 456, 77–80
1219
+ (2008). URL https://www.nature.com/articles/nature07400.
1220
+
2NFAT4oBgHgl3EQfDRyP/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2tE2T4oBgHgl3EQfjAfh/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:90054beb074d31e1177cd818e31bacad636e1d0acd38aecfa62ad9ccaf49964e
3
+ size 4718637
39AzT4oBgHgl3EQfD_qi/content/tmp_files/2301.00986v1.pdf.txt ADDED
@@ -0,0 +1,1464 @@
1
+ Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
2
+ Hasan Abed Al Kader Hammoud1
3
+ Shuming Liu1
4
+ Mohammad Alkhrasi2
5
+ Fahad AlBalawi2
6
+ Bernard Ghanem1
7
+ 1 King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
8
+ 2 Saudi Data and Artificial Intelligence Authority (SDAIA), Riyadh, Saudi Arabia
9
+ {hasanabedalkader.hammoud,shuming.liu,bernard.ghanem} @kaust.edu.sa
10
+ {mkhrashi,falbalawi} @sdaia.gov.sa
11
+ Abstract
12
+ Deep neural networks (DNNs) are vulnerable to a class
13
+ of attacks called “backdoor attacks”, which create an as-
14
+ sociation between a backdoor trigger and a target label the
15
+ attacker is interested in exploiting. A backdoored DNN per-
16
+ forms well on clean test images, yet persistently predicts an
17
+ attacker-defined label for any sample in the presence of the
18
+ backdoor trigger. Although backdoor attacks have been ex-
19
+ tensively studied in the image domain, there are very few
20
+ works that explore such attacks in the video domain, and
21
+ they tend to conclude that image backdoor attacks are less
22
+ effective in the video domain. In this work, we revisit the
23
+ traditional backdoor threat model and incorporate addi-
24
+ tional video-related aspects to that model. We show that
25
+ poisoned-label image backdoor attacks could be extended
26
+ temporally in two ways, statically and dynamically, leading
27
+ to highly effective attacks in the video domain. In addition,
28
+ we explore natural video backdoors to highlight the seri-
29
+ ousness of this vulnerability in the video domain. And, for
30
+ the first time, we study multi-modal (audiovisual) backdoor
31
+ attacks against video action recognition models, where we
32
+ show that attacking a single modality is enough for achiev-
33
+ ing a high attack success rate.
34
+ 1. Introduction
35
+ A fundamental requirement for the deployment of deep
36
+ neural networks (DNNs) in real-world tasks is their safety
37
+ and robustness against possible vulnerabilities and security
38
+ breaches. This requirement is, in essence, the motivation
39
+ behind exploring adversarial attacks. One particularly in-
40
+ teresting adversarial attack is “backdoor attacks”. Backdoor
41
+ attacks or neural trojan attacks explore the scenario in which
42
+ a user with limited computational capabilities downloads
43
+ pretrained DNNs from an untrusted party or outsources the
44
+ training procedure to such a party that we refer to as the ad-
45
+ versary. The adversary provides the user with a model that
46
+ performs well on an unseen validation set, but produces a
47
+ pre-defined class label in the presence of an attacker-defined
48
+ trigger called the backdoor trigger. The association between
49
+ the backdoor trigger and the attacker-specified label is cre-
50
+ ated by training the DNN on poisoned training samples,
51
+ which are samples polluted by the attacker’s trigger [39].
52
+ In poisoned-label attacks, unlike clean-label attacks, the at-
53
+ tacker also switches the label of the poisoned samples to the
54
+ intended target label.
55
Considerable attention has been paid to exploring backdoor attacks and defenses for 2D image classification models [5,22,25]. However, little attention has been paid to exploring backdoor attacks and defenses against video action recognition models. The disappointing conclusion uncovered by [87] regarding the limited effectiveness of image backdoor attacks on videos stunted further development of video backdoor attacks. Unfortunately, the attacks considered in [87] were limited to only visible patch-based clean-label attacks. Moreover, [87] directly adopted the 2D backdoor attack threat model without incorporating important video-specific considerations.
To this end, and as opposed to [87], we first revisit and revise the commonly adopted 2D poisoned-label backdoor threat model by incorporating additional constraints that are inherently imposed by video systems. These constraints arise due to the presence of the temporal dimension. We then explore two ways to extend image backdoor attacks to incorporate the temporal dimension into the attack to enable more video-specific backdoor attacks. In particular, image backdoor attacks could be extended either statically, by applying the same attack to each frame of the video, or dynamically, by adjusting the attack parameters differently for each frame. Then, three novel natural video backdoor attacks are presented to highlight the seriousness of the risks associated with backdoor attacks in the video domain. We then test the attacked models against three 2D backdoor defenses and discuss the reasons behind the failure of those methods.
arXiv:2301.00986v1 [cs.CV] 3 Jan 2023

We also study, for the first time, audiovisual backdoor attacks, where we ablate the importance and contribution of each modality to the performance of the attack for both late and early fusion settings. We show that attacking a single modality is enough to achieve a high attack success rate.

Contributions. Our contributions are twofold. (1) We revisit the traditional backdoor attack threat model and incorporate video-related aspects, such as video subsampling and spatial cropping, into the model. We also extend existing image backdoor attacks to the video domain in two different ways, statically and dynamically, after which we propose three novel natural video backdoor attacks. Through extensive experiments, we provide evidence that the previous perception of image backdoor attacks in the video domain is not necessarily true, especially in the poisoned-label attack setup. (2) To the best of our knowledge, this work is the first to investigate audiovisual backdoor attacks against video action recognition models.
2. Related Work

Backdoor Attacks. Backdoor attacks were first introduced in [22]. The attack, called BadNet, was based on adding a patch to the corner of a subset of training images to create a backdoor that could be triggered by the attacker at will. Following BadNet, [44] proposed optimizing for the values of the patch to obtain a more effective backdoor attack. Shortly after the development of patch-based backdoor attacks, the community realized the importance of adding an invisibility constraint to the design of backdoor triggers to bypass human inspection. Works such as [9] proposed blending the backdoor trigger with the image rather than stamping it. [37] generated backdoor attacks using the least significant bit algorithm. [52] generated warping fields to warp the image content as a backdoor trigger. [14] went one step further and designed learnable transformations to generate optimal backdoor triggers. After many attacks were proposed in the spatial domain [10,37,40,45,56,57,67,72,75], and others in the latent representation domain [13,53,80,88,91], [19,25,71,82,84] shifted attention to the frequency domain. [25] utilized the frequency heatmaps proposed in [81] to create backdoor attacks that target the most sensitive frequency components of the network. [19] proposed blending low-frequency content from a trigger image with training images as a poisoning technique. In our work, we extend the 2D backdoor threat model to the video domain by incorporating video-related aspects into it. We also extend five image backdoor attacks to the video domain and propose three natural video backdoor attacks.
Backdoor Defenses. The backdoor attack literature was immediately opposed by various defenses. Backdoor defenses are generally of five types: preprocessing-based [12,47,55], model reconstruction-based [38,42,74,83,89], trigger synthesis-based [23,24,29,43,54,58,62,68], model diagnosis-based [15,34,46,76,90], and sample-filtering-based [8,20,26,30,61,63]. Early backdoor defenses such as [68] hypothesized that backdoor attacks create a shortcut between all samples and the poisoned class. Based on that, they solved an optimization problem to find whether a trigger of an abnormally small norm exists that would flip all samples to one label. Later, multiple improved iterations of this method were proposed, such as [23,43,83]. Fine-pruning [42] suggested that the backdoor is triggered by particular neurons that are dormant in the absence of the trigger. Therefore, the authors proposed pruning the neurons that are least active on clean samples. STRIP [20] showed that blending clean samples with other clean samples yields a higher entropy compared to blending clean images with poisoned samples. Activation clustering [8] uses KMeans to cluster the activations of an inspection set, i.e. a potentially poisoned dataset, into two clusters. A large silhouette distance between the two clusters would uncover the poisoned samples. In our work, we show that current image backdoor defenses have limited effectiveness in defending against backdoor attacks in the video domain, especially against the proposed natural video attacks.
Video Action Recognition. Video action recognition models that only leverage the raw frames of a video can be categorized into two families: CNN-based networks and transformer-based networks. 2D CNN-based methods are built on top of pretrained image recognition networks with well-designed modules to capture the temporal relationship between multiple frames [41,49,69,70]. Those methods are computationally efficient, as they use 2D convolutional kernels. To learn stronger spatial-temporal representations, 3D CNN-based methods were proposed. These methods utilize 3D kernels to jointly leverage the spatio-temporal context within a video clip [17,18,64,65]. To better initialize the network, I3D [7] inflated the weights of 2D pretrained image recognition models to adapt them to 3D CNNs. Realizing the importance of computational efficiency, S3D [78] and R(2+1)D [66] proposed to disentangle spatial and temporal convolutions to reduce computational cost. Recently, transformer-based action recognition models were able to achieve better performance on large training datasets compared to CNN-based models, e.g. [4,6,16,48]. In this work, we test backdoor attacks against three action recognition architectures, namely I3D, SlowFast, and TSM.
Audiovisual Action Recognition. In addition to frames, a line of action recognition models [1,27,28,51] has used the accompanying audio to better understand activities such as "playing music" or "washing dishes". To take advantage of existing CNN- and transformer-based models, the Log-Mel spectrogram was introduced to convert audio data from a non-structured signal into a 2D representation in time and frequency usable by these models [2,3,35,77]. Current audiovisual action recognition methods are divided into two categories based on when the audio and visual signals are merged in the recognition pipeline: early fusion and late fusion. Early fusion combines features before classification, which can better capture cross-modal features [32,77]. The disadvantage of early fusion is that there is a higher risk of overfitting to the training data [59]. Late fusion, on the other hand, treats the video and audio networks separately, and the predictions of each network are carried out independently, after which the logits are aggregated to make a final prediction [21]. For the first time, we test backdoor attacks against audiovisual action recognition networks in both late and early fusion setups.

Figure 1. Traditional Backdoor Attack Pipeline. After selecting a backdoor trigger and a target label, the attacker poisons a subset of the training data referred to as the poisoned dataset (Dp). The label of the poisoned dataset is fixed to a target poisoning label specified by the attacker. The attacker trains jointly on clean (non-poisoned) samples (Dc) and poisoned samples, leading to a backdoored model, which outputs the target label in the presence of the backdoor trigger.
3. Video Backdoor Attacks

3.1. The Traditional Threat Model

The commonly adopted threat model for backdoor attacks dates back to the works that studied those attacks against 2D image classification models [22]. The victim outsources the training process to a trainer who is given access to both the victim's training data and the network architecture. The victim only accepts the model provided by the trainer if it performs well on the victim's private validation set. The attacker aims to maximize the effectiveness of the embedded backdoor attack [39]. We refer to the model's performance on the validation set as clean data accuracy (CDA). The effectiveness of the backdoor attack is measured by the attack success rate (ASR), which is defined as the percentage of test examples not labeled as the target class that are classified as the target class when the backdoor pattern is applied. To achieve this goal, the attacker applies a backdoor trigger to a subset of the training images and then, in the poisoned-label setup, switches the labels of those images to a target class of choice before training begins. A more powerful backdoor attack is one that is visually imperceptible (usually measured in terms of ℓ2/ℓ∞-norm, PSNR, SSIM, or LPIPS) but achieves both a high CDA and a high ASR. This is summarized in Figure 1.

More formally, we denote the classifier parameterized by θ as fθ : X → Y. It maps an input x ∈ X, such as an image or a video, to a class label y ∈ Y. Let Gη : X → X denote an attacker-specific poisoned-image generator parameterized by some trigger-specific parameters η. The generator may be image-dependent. Finally, let S : Y → Y be an attacker-specified label-shifting function. In our case, we consider the scenario in which the attacker is trying to flip all the labels to one particular label, i.e. S : Y → t, where t ∈ Y is an attacker-specified label that will be activated in the presence of the backdoor trigger. Let D = {(x_i, y_i)}_{i=1}^N denote the training dataset. The attacker splits D into two subsets, a clean subset Dc and a poisoned subset Dp, whose images are poisoned by Gη and labels are poisoned by S. The poisoning rate is the ratio α = |Dp|/|D|; generally, a lower poisoning rate is associated with a higher clean data accuracy. The attacker typically trains the network by minimizing the cross-entropy loss on Dc ∪ Dp, i.e. minimizes E_{(x,y)∼Dc∪Dp}[L_CE(fθ(x), y)]. The attacker aims to achieve high accuracy on the user's validation set Dval while being able to trigger the poisoned label, t, in the presence of the backdoor trigger, i.e. fθ(Gη(x)) = t, ∀x ∈ X (ideally).
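As a concrete illustration, the poisoned-label construction above (split D into Dc and Dp, poison Dp with Gη, and shift its labels to t) can be sketched as follows; `apply_trigger` is a placeholder for the attacker-specific generator Gη and the toy data are assumptions, not the paper's implementation:

```python
import numpy as np

def poison_dataset(X, y, target_label, alpha, apply_trigger, seed=0):
    """Split D into a clean subset Dc and a poisoned subset Dp: a
    fraction alpha of the samples is passed through the trigger
    generator (G_eta) and relabeled to the attacker's target t."""
    rng = np.random.default_rng(seed)
    n_poison = int(alpha * len(X))                 # |Dp| = alpha * |D|
    poison_idx = rng.permutation(len(X))[:n_poison]
    Xp, yp = X.copy(), y.copy()
    for i in poison_idx:
        Xp[i] = apply_trigger(X[i])                # x -> G_eta(x)
        yp[i] = target_label                       # y -> S(y) = t
    return Xp, yp, np.sort(poison_idx)

# Toy usage: 20 grayscale "clips" of shape (T, H, W), 5 classes.
X = np.zeros((20, 4, 8, 8), dtype=np.float32)
y = np.arange(20) % 5
Xp, yp, idx = poison_dataset(X, y, target_label=0, alpha=0.1,
                             apply_trigger=lambda x: x + 1.0)
```

Training then proceeds on the union of the clean and poisoned samples with the usual cross-entropy loss.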
3.2. From Images to Videos

Unlike images, videos have an additional dimension: the temporal dimension. This dimension introduces new rules to the game between the attacker and the victim. More precisely, the attacker now has an additional dimension in which to hide the backdoor trigger, leading to a higher level of imperceptibility. The backdoor attack could be applied to all the frames or to a subset of the frames, either statically, i.e. the same trigger is applied to each frame, or dynamically, i.e. a different trigger is applied to each frame. On the other hand, the testing pipeline now imposes harsher conditions on the backdoor attack. Video recognition models tend to test the model on multiple sub-sampled clips with various crops [7,18,41], which might, in turn, destroy the backdoor trigger. For example, if the trigger is applied to a single frame, it might not be sampled, and if the trigger is applied to the corner of the image, it might be cropped out. The threat model presented in Subsection 3.1 was directly adopted in [87], which, to the best of our knowledge, is the only previous work that considered backdoor attacks for video action recognition.

Figure 2. Static vs Dynamic Backdoor Attacks. Static backdoor attacks apply the same trigger across all frames along the temporal dimension. On the other hand, dynamic attacks apply a different trigger per frame along the temporal dimension.

Our work sheds light on the aforementioned video-related aspects. In Section 4.2, we show the effect of the number of poisoned frames on CDA and ASR. We also show how existing 2D methods could be extended both statically and dynamically to suit the video domain. For example, BadNet [22] applies a fixed patch as a backdoor trigger. The patch could be applied statically, using the same pixel values and the same position along the temporal dimension, or dynamically, by changing the position and possibly the pixel values of the patch for each frame. Figure 2 shows a BadNet attack applied in a static and in a dynamic way. Additionally, we show how simple yet natural video "artifacts" could be used as backdoor triggers. More specifically, lag in a video, motion blur, and compression glitches could all be used as naturally occurring backdoor triggers.
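To make the static/dynamic distinction concrete, the following sketch stamps a BadNet-style patch onto a clip either at a fixed location (static) or at a freshly sampled location per frame (dynamic); the corner placement and sizes are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def stamp_patch(video, patch, dynamic=False, seed=0):
    """Apply a patch trigger to every frame of a (T, H, W, C) clip.

    Static:  the same position (bottom-right corner) on all frames.
    Dynamic: a newly sampled position for each frame."""
    rng = np.random.default_rng(seed)
    T, H, W, _ = video.shape
    ph, pw = patch.shape[:2]
    y0, x0 = H - ph, W - pw            # fixed corner for the static case
    out = video.copy()
    for t in range(T):
        if dynamic:                    # re-sample the location per frame
            y0 = int(rng.integers(0, H - ph + 1))
            x0 = int(rng.integers(0, W - pw + 1))
        out[t, y0:y0 + ph, x0:x0 + pw] = patch
    return out

clip = np.zeros((8, 32, 32, 3), dtype=np.float32)
patch = np.ones((4, 4, 3), dtype=np.float32)
static = stamp_patch(clip, patch, dynamic=False)
dyn = stamp_patch(clip, patch, dynamic=True)
```

Other image attacks extend analogously by re-sampling their trigger parameters per frame instead of the patch location.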
3.3. Audiovisual Backdoor Attacks

Videos are naturally accompanied by audio signals. Similarly to how the video modality can be attacked, the audio signal can also be attacked. The interesting question that arises is how backdoor attacks would perform in a multi-modal setup. In the experiments of Section 4.4, we answer the following questions: (1) What is the effect of having two attacked modalities on CDA and ASR? (2) What happens if only one modality is attacked and the other is left clean? (3) What is the difference in performance between late and early fusion in terms of CDA and ASR?
4. Experiments

4.1. Experimental Settings

Datasets. We consider three standard benchmark datasets used in video action recognition: UCF-101 [60], HMDB-51 [36], and Kinetics-Sounds [31]. Kinetics-Sounds is a subset of Kinetics400 that contains classes that can be classified from the audio signal, i.e. classes where audio is useful for action recognition [2]. Kinetics-Sounds is particularly interesting for Sections 4.3 and 4.4, where we explore backdoor attacks against audio and audiovisual classifiers.

Network Architectures. Following common practice, for the visual modality, we use a dense sampling strategy to sub-sample 32 frames per video to fine-tune a pretrained I3D network on the target dataset [7]. In Section 4.2, we also show results using TSM [41] and SlowFast [18] networks. All three models adopt ResNet-50 as the backbone and are pretrained on Kinetics-400. Similarly to [2], for the audio modality, a ResNet-18 is trained from scratch on Mel-spectrograms composed of 80 Mel bands sub-sampled temporally to a fixed length of 256.
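The audio front end described above can be illustrated with a minimal NumPy log-Mel sketch (a simplified stand-in for standard implementations such as librosa; parameter values like `n_fft` and `hop` are illustrative assumptions):

```python
import numpy as np

def log_mel_spectrogram(wave, sr, n_fft=1024, hop=512, n_mels=80,
                        fmin=0.0, fmax=None):
    """Minimal log-Mel pipeline: windowed |STFT|^2 -> triangular
    Mel filterbank -> log. Returns an (n_mels, time) array."""
    fmax = fmax or sr / 2.0
    # Frame the waveform and apply a Hann window.
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop:i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # Triangular filters equally spaced on the Mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(fmin), mel(fmax), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fb[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    return np.log(power @ fb.T + 1e-8).T
```

The resulting 2D time-frequency representation is what the audio ResNet-18 consumes.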
Attack Setting. For the video modality, we study and extend the following image-based backdoor attacks to the video domain: BadNet [22], Blend [9], SIG [5], WaNet [52], and FTrojan [71]. We also explore three additional natural video backdoor attacks. For the audio modality, we consider two attacks, a sine attack and a high-frequency noise attack, both of which we explain later. Following [22,25,52], the target class is arbitrarily set to the first class of each dataset (class 0), and the poisoning rate is set to 10%. Unless otherwise stated, the considered image backdoor attacks poison all frames of the sampled clips during training and evaluation.

Figure 3. Visualization of 2D Backdoor Attacks. Image backdoor attacks mainly differ according to the backdoor trigger used to poison the training samples. They could be extended either statically or dynamically based on how the attack is applied across the frames.
Evaluation Metrics. As is commonly done in the backdoor literature, we evaluate the performance of the model using clean data accuracy (CDA) and attack success rate (ASR), explained in Section 3. CDA represents the usual validation/test accuracy on an unseen dataset, hence measuring the generalizability of the model. On the other hand, ASR measures the effectiveness of the attack when the poison is applied to the validation/test set. In addition, we test the attacked models against some of the early 2D backdoor defenses, more precisely against activation clustering (AC) [8], STRIP [20], and pruning [42].
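The two metrics can be written down directly; this is a generic sketch of the standard definitions, not code from the paper:

```python
import numpy as np

def clean_data_accuracy(preds, labels):
    """CDA: plain top-1 accuracy on the clean validation/test set."""
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

def attack_success_rate(poisoned_preds, true_labels, target):
    """ASR: among test samples whose true label is NOT the target class,
    the fraction classified as the target once the trigger is applied."""
    preds = np.asarray(poisoned_preds)
    labels = np.asarray(true_labels)
    mask = labels != target          # exclude samples already of class t
    return float(np.mean(preds[mask] == target))
```

Excluding samples whose true label is already the target avoids inflating ASR with predictions that would be correct anyway.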
Implementation Details. Our method is built on the MMAction2 library [11] and follows its default training configurations and testing protocols, except for the learning rate and the number of training epochs (see the Supplementary). All experiments were run on 4 NVIDIA A100 GPUs.
4.2. Video Backdoor Attacks

Extending Image Backdoor Attacks to the Video Domain. As mentioned in Section 3.2, image backdoor attacks could be extended either statically, by applying an attack in the same way across all frames, or dynamically, by adjusting the attack parameters for different frames. We consider five attacks that differ according to the applied backdoor trigger. BadNet applies a patch as a trigger, Blend blends a trigger image with the original image, SIG superimposes a sinusoidal trigger on the image, WaNet warps the content of the image, and FTrojan poisons a high- and mid-frequency component in the discrete cosine transform (DCT). Figure 3 visualizes all five attacks on the same video frame. Each of the considered methods could be extended dynamically as follows: BadNet: change the patch location for each frame; Blend: blend a uniform noise that is different per frame; SIG: change the frequency of the sine component superimposed on each frame; WaNet: generate a different warping field for each frame; FTrojan: select a different DCT basis to perturb at each frame. Note that Blend and FTrojan are generally imperceptible. Visualizations and saliency maps for each attack are found in the Supplementary.
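An FTrojan-style frequency trigger can be sketched with a plain orthonormal DCT-II; the chosen coefficient `uv` and `strength` are illustrative assumptions, not the attack's published parameters:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M * np.sqrt(2.0 / n)

def dct_frequency_trigger(frame, uv=(31, 31), strength=30.0):
    """Poison a single mid/high-frequency DCT coefficient of a 2D
    (grayscale) frame, in the spirit of frequency-domain triggers."""
    h, w = frame.shape
    Dh, Dw = dct_matrix(h), dct_matrix(w)
    coeffs = Dh @ frame @ Dw.T          # forward 2D DCT-II
    coeffs[uv] += strength              # perturb the chosen (u, v) basis
    return Dh.T @ coeffs @ Dw           # inverse transform (orthonormal)

frame = np.random.default_rng(0).random((32, 32))
poisoned = dct_frequency_trigger(frame)
```

The dynamic extension would simply sample a different `uv` for each frame of the clip.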
          UCF101           HMDB51           KineticsSound
          CDA(%)  ASR(%)   CDA(%)  ASR(%)   CDA(%)  ASR(%)
Baseline  93.95   -        69.59   -        81.41   -
BadNet    93.95   99.63    69.35   98.89    82.97   99.09
Blend     94.29   99.26    68.37   86.73    82.12   97.54
SIG       93.97   99.97    68.50   99.80    82.84   99.87
WaNet     94.05   99.84    68.95   99.61    82.38   99.09
FTrojan   94.16   99.34    68.10   97.52    82.45   97.86

Table 1. Statically Extended 2D Backdoor Attacks. Statically extending 2D backdoor attacks to the video domain leads to high CDA and ASR across all three considered datasets.

          UCF101           HMDB51           KineticsSound
          CDA(%)  ASR(%)   CDA(%)  ASR(%)   CDA(%)  ASR(%)
Baseline  93.95   -        69.59   -        81.41   -
BadNet    94.11   99.97    69.08   99.54    82.25   99.74
Blend     94.21   99.44    67.03   95.95    81.67   95.79
SIG       94.24   100.00   68.63   100.00   82.84   100.00
WaNet     94.29   99.79    69.22   99.80    82.25   99.61
FTrojan   94.16   99.34    67.19   98.69    82.25   95.27

Table 2. Dynamically Extended 2D Backdoor Attacks. Dynamically extending 2D backdoor attacks to the video domain leads to high CDA and ASR across all three considered datasets.
Tables 1 and 2 show the CDA and ASR of the I3D models attacked using various backdoor attacks on UCF-101, HMDB-51, and Kinetics-Sounds. Contrary to the conclusion presented in [87], we find that backdoor attacks are actually highly effective in the video domain. The CDA of the attacked models is very similar to that of the clean unattacked model (baseline), surpassing it in some cases. Extending attacks dynamically almost always improves CDA and ASR compared to extending them statically.

Natural Video Backdoors. A more interesting attack is one that seems natural and could bypass human inspection [50,73,79,86]. There are several natural "glitches" that occur in the video domain that one could exploit to design a natural backdoor attack. For example, videos might contain some frame lag, motion blur, video compression corruptions, camera focus/defocus, etc. In Table 3, we report the CDA and ASR of three natural backdoor attacks:
               UCF101           HMDB51           KineticsSound
               CDA(%)  ASR(%)   CDA(%)  ASR(%)   CDA(%)  ASR(%)
Baseline       93.95   -        69.59   -        81.41   -
Frame Lag      92.94   97.20    68.04   98.76    82.51   98.19
Video Corrupt. 94.26   99.87    69.22   99.22    81.74   98.51
Motion Blur    93.97   99.92    68.17   97.52    82.19   99.22

Table 3. Natural Video Backdoor Attacks. Natural attacks against video action recognition models could achieve high CDA and ASR while looking completely natural to human inspection.
                  SlowFast          TSM
                  CDA(%)  ASR(%)    CDA(%)  ASR(%)
Baseline          96.72   -         94.77   -
BadNet            96.64   99.47     94.69   97.78
SIG               96.70   99.97     94.77   99.47
FTrojan           96.25   98.52     94.21   100.00
Frame Lag         96.43   99.97     94.63   97.96
Video Corruption  96.54   99.76     95.08   98.97
Motion Blur       96.46   99.55     94.50   99.39

Table 4. Video Backdoor Attacks Against Different Architectures (UCF-101). When tested against network architectures other than I3D, such as TSM and SlowFast, both image and natural backdoor attacks can still achieve high CDA and high ASR.
frame lag (lagging video), video compression glitch (which we refer to as Video Corruption), and motion blur. Interestingly, these attacks could achieve both a high clean data accuracy and a high attack success rate. It is worth noting that for frame lag, a two-frame lag is used for UCF-101 and a three-frame lag is used for HMDB-51 and Kinetics-Sounds. More details are provided in the Supplementary.
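The frame-lag and motion-blur triggers admit simple sketches; the exact parameters used in the paper are in its supplementary, so the window sizes below are illustrative:

```python
import numpy as np

def frame_lag_trigger(clip, lag=2):
    """Simulate a lagging video: each frame is held (repeated) for
    `lag` time steps, e.g. a two-frame lag as used for UCF-101."""
    T = clip.shape[0]
    held = (np.arange(T) // lag) * lag   # index of the frame being held
    return clip[held]

def motion_blur_trigger(clip, window=3):
    """Simulate motion blur by averaging each frame with its
    preceding `window - 1` frames (a causal temporal box filter)."""
    out = np.empty_like(clip, dtype=np.float32)
    for t in range(clip.shape[0]):
        lo = max(0, t - window + 1)
        out[t] = clip[lo:t + 1].mean(axis=0)
    return out

# Toy clip where frame t has constant pixel value t.
clip = np.arange(8, dtype=np.float32).reshape(8, 1, 1, 1) * \
       np.ones((8, 4, 4, 3), dtype=np.float32)
lagged = frame_lag_trigger(clip, lag=2)
blurred = motion_blur_trigger(clip, window=3)
```

Because both transformations mimic artifacts that occur in real footage, the poisoned clips look unremarkable to a human inspector.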
Attacks Against Different Architectures. So far, all attacks have been experimented with against an I3D network. To further explore the behavior of backdoor attacks against other video recognition models, we test a subset of the considered attacks against a 2D-based model, TSM, and another 3D-based model, SlowFast, on UCF-101. Table 4 shows that all the aforementioned backdoor attacks perform significantly well in terms of CDA and ASR against both TSM and SlowFast architectures. Note that even though TSM is a 2D-based model, our proposed natural video backdoor attacks still succeed in attacking it.

Recommendations for Video Backdoor Attacks. As mentioned in Section 3.2, the attacker must select a number of frames to poison per video, keeping in mind that the video will be sub-sampled and randomly cropped during evaluation. Since the attacker is the one who trained the network in the first place, he/she has access to the processing pipeline and could exploit this during the attack. For example, if video processing involves sub-sampling the video into clips of 32 frames and cropping the frames into 224×224 crops, the attacker could pass to the network an attacked video of a temporal length of 32 frames and a spatial size of 224×224,
Figure 4. Effect of the Number of Poisoned Frames (UCF-101). Different colors refer to different numbers of frames poisoned during the training of the attacked model. Training the model with a single poisoned frame performs best for various choices of the number of frames poisoned during evaluation.
                     Frame Lag  Motion Blur  SIG    BadNet  FTrojan
Elimination Rate(%)  0.00       0.00         34.21  33.77   34.12
Sacrifice Rate(%)    13.08      12.82        15.17  14.25   13.00

Table 5. Activation Clustering Defense (UCF-101). Whereas Activation Clustering provides partial success in defending against image backdoor attacks, it fails completely against natural attacks.
hence bypassing sub-sampling and cropping. However, a system could force the user to input a video of a particular length, possibly greater than the length of the sub-sampled clips. This raises an important question regarding how many frames the attacker should poison. Clearly, the smaller the number of frames the attacker poisons, the less detectable the attack is, but does the attack remain effective? In Figure 4, we show the attack success rate of backdoor-attacked models trained on clips of 1, 8, 16, and 32 poisoned frames, and on a randomly sampled number of poisoned frames (out of 32 total frames), when evaluated on clips of 1, 8, 16, and 32 poisoned frames (out of 32 total frames). Random refers to training on a varying number of poisoned frames per clip. Note that training the model against the worst-case scenario (a single frame), which mimics the case where only one of the poisoned frames is sub-sampled, provides the best guarantees for achieving a high attack success rate.
Defenses Against Video Backdoor Attacks. We explore the effect of extending some of the existing 2D backdoor defenses to video backdoor attacks. Optimization-based defenses are extremely costly when extended to the video domain. For example, Neural Cleanse (NC) [68], I-BAU [83], and TABOR [23] involve a trigger reconstruction phase. The trigger space is now bigger in the presence of the temporal dimension, and therefore, instead of optimizing for a 224×224×3 trigger, the defender has to search for a 32×224×224×3 trigger (assuming 32-frame clips are used), which is both costly and hard to solve. The attacker has both the spatial and temporal dimensions in which to design and embed the attack, and, therefore, reverse engineering the trigger is quite hard.

Figure 5. STRIP Defense (UCF-101). Whereas the entropy of image backdoor attacks is very low compared to that of clean samples, the proposed natural backdoor attacks have a natural distribution of entropies similar to that of clean samples.

Figure 6. Pruning Defense (Kinetics-Sounds). Pruning is completely ineffective against image backdoor attacks extended to the video domain and natural video backdoor attacks. Even though the clean accuracy has dropped to random, the attack success rate is maintained at very high levels.
+ We consider three well-known defenses that introduce no computational overhead when adopted to the video domain, namely Activation Clustering (AC) [8], STRIP [20], and pruning [42]. AC computes the activations of a neural network on clean samples (from the test set) and on an inspection set of interest which may be poisoned. AC then applies PCA to reduce the dimension of the activations, after which the projected activations are clustered into two classes and compared to the activations of the clean set. STRIP blends clean samples with the samples of a possibly poisoned inspection set; the entropy of the predicted probabilities is then checked for abnormalities. Unlike clean samples, poisoned samples tend to have a low entropy. Pruning assumes that the backdoor is usually embedded in particular neurons in the network that are activated only in the presence of the trigger. Those neurons are therefore expected to be dormant as far as the test set samples, i.e., clean samples, are concerned, which allows us to detect and prune them to eliminate the backdoor.
+ Table 5 shows the elimination and sacrifice rates of AC when applied against some of the considered attacks. The elimination rate refers to the ratio of poisoned samples correctly detected as poisoned to the total number of poisoned samples, whereas the sacrifice rate refers to the ratio of clean samples incorrectly detected as poisoned to the total number of clean samples. Whereas AC has partial success in defending against image backdoor attacks, it fails completely against the proposed natural backdoor attacks. Figure 5 shows that the entropy of the clean and poisoned samples of the proposed natural attacks is very similar, so these attacks could evade the STRIP defense, while BadNet and FTrojan are detectable. Finally, Figure 6 shows that pruning the least active neurons causes a reduction in CDA without reducing ASR. This is observed not only for the natural attacks but also for the extended image backdoor attacks, hinting that image backdoor defenses are not effective in the video domain.
+         | Baseline | Sine Attack | High Frequency Attack
+ CDA(%)  | 49.21    | 47.21       | 47.61
+ ASR(%)  | -        | 96.36       | 95.96
+ Table 6. Audio Backdoor Attacks (Kinetics-Sounds). Both the sine attack and the high-frequency band attack perform similarly to the baseline in terms of CDA while being able to achieve a high ASR.
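As a rough illustration of the STRIP test described above, the sketch below blends each inspected input with several clean samples and compares the average entropy of the model's predicted probabilities. The toy model, the synthetic "trigger" (a large first feature), and the blend ratio are all hypothetical stand-ins, not the paper's actual setup.

```python
import numpy as np

def strip_entropy(predict, inspect_x, clean_x, n_blend=8, alpha=0.5, rng=None):
    """STRIP-style test: blend each inspected sample with clean samples and
    average the entropy of the predicted probabilities. A poisoned sample keeps
    steering the model to the target class, so its average entropy stays low."""
    rng = np.random.default_rng(rng)
    entropies = []
    for x in inspect_x:
        idx = rng.choice(len(clean_x), size=n_blend, replace=False)
        blends = alpha * x + (1 - alpha) * clean_x[idx]   # superimposed inputs
        probs = predict(blends)                           # (n_blend, n_classes)
        h = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # Shannon entropy
        entropies.append(h.mean())
    return np.array(entropies)

def toy_predict(batch):
    """Hypothetical model: confident target-class output when the (made-up)
    trigger survives blending, uniform output otherwise."""
    out = []
    for x in batch:
        if x[0] > 0.9:                      # illustrative trigger condition
            p = np.array([0.98, 0.01, 0.01])
        else:
            p = np.full(3, 1 / 3)
        out.append(p)
    return np.array(out)

clean = np.zeros((32, 4))
poisoned = np.zeros((4, 4))
poisoned[:, 0] = 2.5                        # trigger strong enough to survive blending
h_clean = strip_entropy(toy_predict, clean[:4], clean, rng=0)
h_poison = strip_entropy(toy_predict, poisoned, clean, rng=0)
```

In this toy setting the poisoned samples end up with a clearly lower average entropy than the clean ones, which is exactly the abnormality STRIP thresholds on.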
+ 4.3. Audio Backdoor Attacks
+ Attacks proposed against audio networks have been limited to adding a low-volume one-hot-spectrum noise in the frequency domain, which leaves highly visible artifacts in the spectrogram [85], or adding a human-inaudible component [33], f < 20Hz or f > 20kHz, which is unrealistic, since spectrograms usually filter out those frequencies. We consider two attacks against the Kinetics-Sounds dataset: the first adds a low-amplitude sine wave component with f = 800Hz to the audio signal, and the second adds band-limited noise with 5kHz < f < 6kHz. The spectrograms and the absolute difference between each attacked spectrogram and the clean spectrogram are shown in Figure 7. Since no clear artifacts are observed in the spectrograms, human inspection fails to label them as attacked. The CDA and ASR of the backdoor-attacked models for both attacks are shown in Table 6; both attacks achieve a relatively high ASR.
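The two audio triggers can be illustrated in a few lines of NumPy. The 16 kHz sampling rate, the 0.01 amplitude, and the FFT-masking construction of the band-limited noise are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

SR = 16_000   # assumed sampling rate (Hz); the paper does not state one here

def add_sine_trigger(audio, freq=800.0, amp=0.01, sr=SR):
    """Sine-wave trigger: superimpose a low-amplitude tone at `freq` Hz."""
    n = np.arange(len(audio))
    return audio + amp * np.sin(2 * np.pi * freq * n / sr)

def add_band_noise_trigger(audio, lo=5_000.0, hi=6_000.0, amp=0.01, sr=SR, seed=0):
    """Band-limited noise trigger: white noise restricted to the [lo, hi] Hz
    band by zeroing all other rFFT bins, then scaled to a small amplitude."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(len(audio)))
    freqs = np.fft.rfftfreq(len(audio), d=1 / sr)
    spec[(freqs < lo) | (freqs > hi)] = 0.0   # keep only the chosen band
    band = np.fft.irfft(spec, n=len(audio))
    return audio + amp * band / np.max(np.abs(band))

clean = np.zeros(SR)   # one second of silence keeps the demo easy to inspect
sine_poisoned = add_sine_trigger(clean)
band_poisoned = add_band_noise_trigger(clean)
```

On a silent clip, the first trigger concentrates all spectral energy in the 800 Hz bin and the second only in the 5–6 kHz band, matching the low-visibility behavior described above.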
822
+ 4.4. Audiovisual Backdoor Attacks
823
+ Now, we combine video and audio attacks to build a
824
+ multi-modal audiovisual backdoor attack. The way we do
825
+ [Figure 5: histograms of STRIP entropy for clean vs. poisoned samples, with panels for BadNet, FTrojan, Frame Lag, and Motion Blur; x-axis: Entropy.]
+ [Figure 6: CDA and ASR (Accuracy %) as a function of Percentage Pruned (%), with panels for BadNet and Frame Lag.]
+ Video attack       |                  Late Fusion                   |                  Early Fusion
+                    | Clean Audio | Sine Attack  | High Freq. Attack | Clean Audio | Sine Attack  | High Freq. Attack
+ Clean Video        | 80.25 / -   | 81.74/70.98  | 80.96/77.91       | 84.72 / -   | 83.48/92.23  | 83.94/93.72
+ BadNet             | 77.33/66.97 | 78.63/99.74  | 77.33/99.87       | 87.50/99.29 | 85.10/99.87  | 85.75/100.00
+ Blend              | 79.60/75.06 | 80.76/99.68  | 79.08/99.61       | 86.08/98.19 | 83.55/99.81  | 85.43/99.87
+ SIG                | 78.50/68.33 | 80.12/99.87  | 79.02/100.00      | 86.92/99.81 | 84.97/100.00 | 85.95/100.00
+ WaNet              | 77.66/68.39 | 79.79/99.94  | 79.02/99.94       | 86.46/98.96 | 84.97/100.00 | 85.88/100.00
+ FTrojan            | 79.66/67.16 | 80.76/99.48  | 79.99/99.29       | 86.08/98.58 | 84.65/99.94  | 85.49/100.00
+ Frame Lag          | 79.08/63.41 | 80.57/99.74  | 79.47/99.87       | 86.08/98.19 | 84.59/99.94  | 84.65/100.00
+ Video Corruption   | 78.11/64.57 | 78.24/99.68  | 77.66/99.94       | 86.59/99.29 | 84.59/100.00 | 85.43/100.00
+ Motion Blur        | 79.79/69.24 | 80.70/99.68  | 79.86/99.94       | 86.40/98.58 | 84.65/100.00 | 85.62/100.00
+ Table 7. Audiovisual Backdoor Attacks (Kinetics-Sounds). Each entry reports the CDA(%)/ASR(%) of attacking late- and early-fused audiovisual networks. When a single modality is attacked, late fusion has a low ASR compared to early fusion. When both modalities are attacked, the ASR of both late and early fusion is high.
+ Figure 7. Clean and Attacked Audio Spectrograms. The utilized audio backdoor attacks are not only audibly imperceptible but also leave no perceptible artifacts in the Mel spectrogram. The spectrogram of each attack is followed by the absolute difference between the attacked spectrogram and the clean one.
+ it is by taking our attacked models from Sections 4.2 and 4.3 and applying early or late fusion. For early fusion, we extract video and audio features using our trained audio and video backbones, and we then train a classifier on the concatenation of the features. In late fusion, the video and audio networks predict independently on the input, and the individual logits are then aggregated to produce the final prediction. To answer the three questions posed in Section 3.3, we run experiments in which both modalities are attacked and others in which only a single modality is attacked, for both early and late fusion setups (Table 7). We summarize the results as follows. (1) Attacking two modalities consistently improves ASR, and even CDA in some cases. (2) Attacking a single modality is enough to achieve a high ASR in the case of early fusion, but not late fusion. (3) Early fusion enables the best of both worlds for the attacker, namely a high CDA and an almost perfect ASR; late fusion, on the other hand, experiences serious drops in ASR in the unimodal attack setup. An interesting finding in these experiments is the following: if the outsourcer has the option to outsource the most expensive modality, training-wise, while training the other modalities in-house, applying late fusion could serve as a defense mechanism, especially in the presence of more clean modalities.
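The two fusion schemes described above can be sketched with frozen random features and linear classifiers standing in for the trained backbones and fusion heads; the feature dimensions and the sum aggregation of logits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for features extracted by the frozen video and audio backbones.
video_feats = rng.standard_normal((6, 8))   # (batch, video feature dim)
audio_feats = rng.standard_normal((6, 5))   # (batch, audio feature dim)
n_classes = 3

# Early fusion: one classifier trained on the concatenated features.
W_early = rng.standard_normal((8 + 5, n_classes))
early_logits = np.concatenate([video_feats, audio_feats], axis=1) @ W_early
early_pred = early_logits.argmax(axis=1)

# Late fusion: each modality predicts independently; logits are aggregated
# (here by summation) to produce the final prediction.
W_video = rng.standard_normal((8, n_classes))
W_audio = rng.standard_normal((5, n_classes))
late_logits = video_feats @ W_video + audio_feats @ W_audio
late_pred = late_logits.argmax(axis=1)
```

The structural difference is visible in the code: in early fusion a single head sees both modalities at once (so a trigger in either modality can dominate), while in late fusion each modality votes through its own head, which dilutes a trigger present in only one modality.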
+ [Figure 7 panels: Clean Spectrogram, Sine Attack, High Frequency Attack, each followed by its difference map; y-axis: Mel Frequency, x-axis: Frame.]
+ 5. Conclusion
+ Backdoor attacks present a serious and exploitable vulnerability against both unimodal and multi-modal video action recognition models. We showed how existing image backdoor attacks could be extended either statically or dynamically to develop powerful backdoor attacks that achieve both a high clean data accuracy and a high attack success rate. Besides existing image backdoor attacks, there exists a set of natural video backdoor attacks, such as motion blur and frame lag, that are resilient to existing image backdoor defenses. Given that videos are usually accompanied by audio, we showed two ways in which one could attack audio classifiers in a human-inaudible manner. The attacked video and audio models are then used to train an audiovisual action recognition model by applying both early and late fusion. Different combinations of poisoned modalities are tested, concluding that: (1) poisoning two modalities could achieve extremely high attack success rates in both late and early fusion settings, and (2) if a single modality is poisoned, unlike early fusion, late fusion could reduce the effectiveness of the backdoor. We hope that our work reignites the attention of the community towards exploring backdoor attacks and defenses in the video domain.
+ References
+ [1] Humam Alwassel, Dhruv Kumar Mahajan, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-supervised learning by cross-modal audio-video clustering. ArXiv, abs/1911.12667, 2020. 2
+ [2] Relja Arandjelovic and Andrew Zisserman. Look, listen and learn. In Proceedings of the IEEE International Conference on Computer Vision, pages 609–617, 2017. 2, 4
+ [3] Relja Arandjelovic and Andrew Zisserman. Objects that sound. In Proceedings of the European Conference on Computer Vision (ECCV), pages 435–451, 2018. 2
+ [4] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6836–6846, 2021. 2
+ [5] Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in cnns by training set corruption without label poisoning. 2019 IEEE International Conference on Image Processing (ICIP), pages 101–105, 2019. 1, 4
+ [6] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021. 2
+ [7] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017. 2, 4
+ [8] Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, and Biplav Srivastava. Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint arXiv:1811.03728, 2018. 2, 5, 7
+ [9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017. 2, 4
+ [10] Xuan Chen, Yuena Ma, and Shiwei Lu. Use procedural noise to achieve backdoor attack. IEEE Access, 9:127204–127216, 2021. 2
+ [11] MMAction2 Contributors. Openmmlab's next generation video understanding toolbox and benchmark. https://github.com/open-mmlab/mmaction2, 2020. 5
+ [12] Bao Gia Doan, Ehsan Abbasnejad, and Damith C Ranasinghe. Februus: Input purification defense against trojan attacks on deep neural network systems. In Annual Computer Security Applications Conference, pages 897–912, 2020. 2
+ [13] Khoa D Doan and Yingjie Lao. Backdoor attack with imperceptible input and latent modification. In NeurIPS, 2021. 2
+ [14] Khoa D Doan, Yingjie Lao, Weijie Zhao, and Ping Li. Lira: Learnable, imperceptible and robust backdoor attacks. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 11946–11956, 2021. 2
+ [15] Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, and Jun Zhu. Black-box detection of backdoor attacks with limited information and data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16482–16491, 2021. 2
+ [16] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6824–6835, 2021. 2
+ [17] Christoph Feichtenhofer. X3d: Expanding architectures for efficient video recognition. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 200–210, 2020. 2
+ [18] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6202–6211, 2019. 2, 4
+ [19] Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, and Dacheng Tao. Fiba: Frequency-injection based backdoor attack in medical image analysis. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20844–20853, 2022. 2
+ [20] Yansong Gao, Change Xu, Derui Wang, Shiping Chen, Damith C Ranasinghe, and Surya Nepal. Strip: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, pages 113–125, 2019. 2, 5, 7
+ [21] Bernard Ghanem, Juan Carlos Niebles, Cees Snoek, Fabian Caba Heilbron, Humam Alwassel, Victor Escorcia, Ranjay Krishna, Shyamal Buch, and Cuong Duc Dao. The activitynet large-scale activity recognition challenge 2018 summary. arXiv preprint arXiv:1808.03766, 2018. 3
+ [22] Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244, 2019. 1, 2, 3, 4
+ [23] Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, and Dawn Song. Tabor: A highly accurate approach to inspecting and restoring trojan backdoors in ai systems. arXiv preprint arXiv:1908.01763, 2019. 2, 6
+ [24] Wenbo Guo, Lun Wang, Yan Xu, Xinyu Xing, Min Du, and Dawn Song. Towards inspecting and eliminating trojan backdoors in deep neural networks. In 2020 IEEE International Conference on Data Mining (ICDM), pages 162–171. IEEE, 2020. 2
+ [25] Hasan Hammoud and Bernard Ghanem. Check your other door! establishing backdoor attacks in the frequency domain. ArXiv, abs/2109.05507, 2021. 1, 2, 4
+ 
+ [26] Jonathan Hayase, Weihao Kong, Raghav Somani, and Sewoong Oh. Spectre: defending against backdoor attacks using robust statistics. arXiv preprint arXiv:2104.11315, 2021. 2
+ [27] Di Hu, Feiping Nie, and Xuelong Li. Deep multimodal clustering for unsupervised audiovisual learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9240–9249, 2019. 2
+ [28] Di Hu, Zongge Wang, Haoyi Xiong, Dong Wang, Feiping Nie, and Dejing Dou. Curriculum audiovisual learning. ArXiv, abs/2001.09414, 2020. 2
+ [29] Xiaoling Hu, Xiao Lin, Michael Cogswell, Yi Yao, Susmit Jha, and Chao Chen. Trigger hunting with a topological prior for trojan detection. arXiv preprint arXiv:2110.08335, 2021. 2
+ [30] Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, and Farinaz Koushanfar. Cleann: Accelerated trojan shield for embedded neural networks. In 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pages 1–9. IEEE, 2020. 2
+ [31] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 4
+ [32] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, and Dima Damen. Epic-fusion: Audio-visual temporal binding for egocentric action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5492–5501, 2019. 3
+ [33] Stefanos Koffas, Jing Xu, Mauro Conti, and Stjepan Picek. Can you hear it?: Backdoor attacks via ultrasonic triggers. Proceedings of the 2022 ACM Workshop on Wireless Security and Machine Learning, 2022. 7
+ [34] Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, and Heiko Hoffmann. Universal litmus patterns: Revealing backdoor attacks in cnns. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 301–310, 2020. 2
+ [35] Bruno Korbar, Du Tran, and Lorenzo Torresani. Cooperative learning of audio and video models from self-supervised synchronization. Advances in Neural Information Processing Systems, 31, 2018. 2
+ [36] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. Hmdb: a large video database for human motion recognition. In 2011 International Conference on Computer Vision, pages 2556–2563. IEEE, 2011. 4
+ [37] Yuezun Li, Y. Li, Baoyuan Wu, Longkang Li, Ran He, and Siwei Lyu. Invisible backdoor attack with sample-specific triggers. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 16443–16452, 2021. 2
+ [38] Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Neural attention distillation: Erasing backdoor triggers from deep neural networks. arXiv preprint arXiv:2101.05930, 2021. 2
+ [39] Yiming Li, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shutao Xia. Backdoor learning: A survey. IEEE Transactions on Neural Networks and Learning Systems, PP, 2022. 1, 3
+ [40] Cong Liao, Haoti Zhong, Anna Cinzia Squicciarini, Sencun Zhu, and David J. Miller. Backdoor embedding in convolutional neural network models via invisible perturbation. Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy, 2020. 2
+ [41] Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7083–7093, 2019. 2, 4
+ [42] Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, pages 273–294, 2018. 2, 5, 7
+ [43] Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. Abs: Scanning neural networks for back-doors by artificial brain stimulation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 1265–1282, 2019. 2
+ [44] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. 2017. 2
+ [45] Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu. Reflection backdoor: A natural backdoor attack on deep neural networks. In ECCV, 2020. 2
+ [46] Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, and Xiangyu Zhang. Complex backdoor detection by symmetric feature differencing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15003–15013, 2022. 2
+ [47] Yuntao Liu, Yang Xie, and Ankur Srivastava. Neural trojans. In 2017 IEEE International Conference on Computer Design (ICCD), pages 45–48. IEEE, 2017. 2
+ [48] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3202–3211, 2022. 2
+ [49] Chenxu Luo and Alan L Yuille. Grouped spatial-temporal aggregation for efficient action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5512–5521, 2019. 2
+ [50] Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Surya Nepal, and Derek Abbott. Macab: Model-agnostic clean-annotation backdoor to object detection with natural trigger in real-world. ArXiv, abs/2209.02339, 2022. 5
+ [51] Pedro Miguel Morgado, Ishan Misra, and Nuno Vasconcelos. Robust audio-visual instance discrimination. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12929–12940, 2021. 2
+ [52] A. Nguyen and A. Tran. Wanet - imperceptible warping-based backdoor attack. ArXiv, abs/2102.10369, 2021. 2, 4
+ 
+ [53] Xiangyu Qi, Ting Xie, Saeed Mahloujifar, and Prateek Mittal. Circumventing backdoor defenses that are based on latent separability. ArXiv, abs/2205.13613, 2022. 2
+ [54] Ximing Qiao, Yukun Yang, and Hai Li. Defending neural backdoors via generative distribution modeling. Advances in Neural Information Processing Systems, 32, 2019. 2
+ [55] Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, and Bhavani Thuraisingham. Deepsweep: An evaluation framework for mitigating dnn backdoor attacks using data augmentation. In Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, pages 363–377, 2021. 2
+ [56] Yankun Ren, Longfei Li, and Jun Zhou. Simtrojan: Stealthy backdoor attack. 2021 IEEE International Conference on Image Processing (ICIP), pages 819–823, 2021. 2
+ [57] A. Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang. Dynamic backdoor attacks against machine learning models. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), pages 703–718, 2022. 2
+ [58] Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, and Xiangyu Zhang. Backdoor scanning for deep neural networks through k-arm optimization. In International Conference on Machine Learning, pages 9525–9536. PMLR, 2021. 2
+ [59] Xiaoyu Song, Hong Chen, Qing Wang, Yunqiang Chen, Mengxiao Tian, and Hui Tang. A review of audio-visual fusion with machine learning. Journal of Physics: Conference Series, 1237, 2019. 3
+ [60] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 4
+ [61] Di Tang, XiaoFeng Wang, Haixu Tang, and Kehuan Zhang. Demon in the variant: Statistical analysis of {DNNs} for robust backdoor contamination detection. In 30th USENIX Security Symposium (USENIX Security 21), pages 1541–1558, 2021. 2
+ [62] Guanhong Tao, Guangyu Shen, Yingqi Liu, Shengwei An, Qiuling Xu, Shiqing Ma, Pan Li, and Xiangyu Zhang. Better trigger inversion optimization in backdoor scanning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13368–13378, 2022. 2
+ [63] Brandon Tran, Jerry Li, and Aleksander Madry. Spectral signatures in backdoor attacks. Advances in Neural Information Processing Systems, 31, 2018. 2
+ [64] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 4489–4497, 2015. 2
+ [65] Du Tran, Heng Wang, Lorenzo Torresani, and Matt Feiszli. Video classification with channel-separated convolutional networks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5551–5560, 2019. 2
+ [66] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450–6459, 2018. 2
+ [67] Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Label-consistent backdoor attacks. ArXiv, abs/1912.02771, 2019. 2
+ [68] Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707–723. IEEE, 2019. 2, 6
+ [69] Limin Wang, Zhan Tong, Bin Ji, and Gangshan Wu. Tdn: Temporal difference networks for efficient action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1895–1904, 2021. 2
+ [70] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20–36. Springer, 2016. 2
+ [71] Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. Backdoor attack through frequency domain. ArXiv, abs/2111.10991, 2021. 2, 4
+ [72] Zhenting Wang, Juan Zhai, and Shiqing Ma. Bppattack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15054–15063, 2022. 2
+ [73] Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, and Ben Zhao. Natural backdoor datasets. ArXiv, abs/2206.10673, 2022. 5
+ [74] Dongxian Wu and Yisen Wang. Adversarial neuron pruning purifies backdoored deep models. Advances in Neural Information Processing Systems, 34:16913–16925, 2021. 2
+ [75] Pengfei Xia, Hongjing Niu, Ziqiang Li, and Bin Li. Enhancing backdoor attacks with multi-level mmd regularization. IEEE Transactions on Dependable and Secure Computing, 2022. 2
+ [76] Zhen Xiang, David J Miller, and George Kesidis. Post-training detection of backdoor attacks for two-class and multi-attack scenarios. arXiv preprint arXiv:2201.08474, 2022. 2
+ [77] Fanyi Xiao, Yong Jae Lee, Kristen Grauman, Jitendra Malik, and Christoph Feichtenhofer. Audiovisual slowfast networks for video recognition. arXiv preprint arXiv:2001.08740, 2020. 2, 3
+ [78] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 305–321, 2018. 2
+ [79] Mingfu Xue, Can He, Shichang Sun, Jian Wang, and Weiqiang Liu. Robust backdoor attacks against deep neural networks in real physical world. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pages 620–626, 2021. 5
+ [80] Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y. Zhao. Latent backdoor attacks on deep neural networks. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019. 2
+ [81] Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin Dogus Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. In NeurIPS, 2019. 2
+ [82] Chang Yue, Peizhuo Lv, Ruigang Liang, and Kai Chen. Invisible backdoor attacks using data poisoning in the frequency domain. ArXiv, abs/2207.04209, 2022. 2
+ [83] Yi Zeng, Si Chen, Won Park, Z Morley Mao, Ming Jin, and Ruoxi Jia. Adversarial unlearning of backdoors via implicit hypergradient. arXiv preprint arXiv:2110.03735, 2021. 2, 6
+ [84] Yi Zeng, Won Park, Zhuoqing Morley Mao, and R. Jia. Rethinking the backdoor attacks' triggers: A frequency perspective. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 16453–16461, 2021. 2
+ [85] Tongqing Zhai, Yiming Li, Zi-Mou Zhang, Baoyuan Wu, Yong Jiang, and Shutao Xia. Backdoor attack against speaker verification. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2560–2564, 2021. 7
+ [86] Feng Zhao, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. Natural backdoor attacks on deep neural networks via raindrops. Security and Communication Networks, 2022. 5
+ [87] Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, and Yu-Gang Jiang. Clean-label backdoor attacks on video recognition models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14443–14452, 2020. 1, 4, 5
+ [88] Zhendong Zhao, Xiaojun Chen, Yu Xuan, Ye Dong, Dakui Wang, and Kaitai Liang. Defeat: Deep hidden feature backdoor attacks by imperceptible perturbation and latent representation constraints. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15192–15201, 2022. 2
+ [89] Runkai Zheng, Rongjun Tang, Jianze Li, and Li Liu. Data-free backdoor removal based on channel lipschitzness. In European Conference on Computer Vision, pages 175–191. Springer, 2022. 2
+ [90] Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, and Chao Chen. Topological detection of trojaned neural networks. Advances in Neural Information Processing Systems, 34:17258–17272, 2021. 2
+ [91] Nan Zhong, Zhenxing Qian, and Xinpeng Zhang. Imperceptible backdoor attack: From input space to feature representation. In IJCAI, 2022. 2
+ 
39AzT4oBgHgl3EQfD_qi/content/tmp_files/load_file.txt ADDED
3NFQT4oBgHgl3EQfGjVY/content/tmp_files/2301.13245v1.pdf.txt ADDED
@@ -0,0 +1,1622 @@
+ A Safety Framework for Flow Decomposition Problems via Integer Linear Programming
+ Fernando H. C. Dias1,⋆[0000−0002−6398−919X], Manuel Cáceres1,⋆[0000−0003−0235−6951], Lucia Williams2,⋆[0000−0003−3785−0247], Brendan Mumey2,⋆⋆[0000−0001−7151−2124], and Alexandru I. Tomescu1,⋆⋆[0000−0002−5747−8350]
+ 1 Department of Computer Science, University of Helsinki, Finland
+ {fernando.cunhadias,manuel.caceres,alexandru.tomescu}@helsinki.fi
+ 2 School of Computing, Montana State University, Bozeman, MT, USA
+ {lucia.williams,brendan.mumey}@montana.edu
+ Abstract. Many important problems in Bioinformatics (e.g., assembly or multi-assembly) admit multiple solutions, while the final objective is to report only one. A common approach to deal with this uncertainty is finding safe partial solutions (e.g., contigs) which are common to all solutions. Previous research on safety has focused on polynomial-time solvable problems, whereas many successful and natural models are NP-hard to solve, leaving a lack of “safety tools” for such problems. We propose the first method for computing all safe solutions for an NP-hard problem, minimum flow decomposition. We obtain our results by developing a “safety test” for paths based on a general Integer Linear Programming (ILP) formulation. Moreover, we provide implementations with practical optimizations aimed to reduce the total ILP time, the most efficient of these being based on a recursive group-testing procedure.
+ Results: Experimental results on the transcriptome datasets of Shao and Kingsford (TCBB, 2017) show that all safe paths for minimum flow decompositions correctly recover up to 90% of the full RNA transcripts, which is at least 25% more than previously known safe paths, such as (Cáceres et al. TCBB, 2021), (Zheng et al., RECOMB 2021), (Khan et al., RECOMB 2022, ESA 2022). Moreover, despite the NP-hardness of the problem, we can report all safe paths for 99.8% of the over 27,000 non-trivial graphs of this dataset in only 1.5 hours. Our results suggest that, on perfect data, there is less ambiguity than thought in the notoriously hard RNA assembly problem.
+ Availability: https://github.com/algbio/mfd-safety
+ Contact: alexandru.tomescu@helsinki.fi
+ Keywords: RNA assembly · Network flow · Flow decomposition · Integer linear programming · Safety
+ ⋆ Shared first-author contribution
+ ⋆⋆ Shared last-author contribution
+ arXiv:2301.13245v1 [cs.DS] 30 Jan 2023
+ 
+ 1 Introduction
+ In real-world scenarios where an unknown object needs to be discovered from the input data, we would like to formulate a computational problem loosely enough so that the unknown object is indeed a solution to the problem, but also tightly enough so that the problem does not admit many other solutions. However, this goal is difficult in practice, and indeed, various commonly used problem formulations in Bioinformatics still admit many solutions. While a naive approach is to just exhaustively enumerate all these solutions, a more practical approach is to report only those sub-solutions (or partial solutions) that are common to all solutions to the problem.
+ In the graph theory community such sub-solutions have been called persistent [14,21], and in the Bioinformatics community reliable [54], or more recently, safe [51]. The study of safe sub-solutions started in Bioinformatics in the 1990’s [54,11,37] with those amino-acid pairs that are common to all optimal and suboptimal alignments of two protein sequences.
+ In the genome assembly community, the notion of contig, namely a string that is guaranteed to appear in any possible assembly of the reads, is at the core of most genome assemblers. This approach originated in 1995 with the notion of unitigs [25] (non-branching paths in an assembly graph), which were progressively [42,6] generalized to paths made up of a prefix of nodes with in-degree one followed by nodes with out-degree one [35,24,29] (also called extended unitigs, or Y-to-V contigs).
+ Later, [51] formalized all such types of contigs as those safe strings that appear in all solutions to a genome assembly problem formulation, expressed as a certain type of walk in a graph. [10,9] proposed more efficient and unifying safety algorithms for several types of graph walks. [45] recently studied the safety of contigs produced by state-of-the-art genome assemblers on real data.
+ Analogous studies were recently made also for multi-assembly problems, where several related genomic sequences need to be assembled from a sample of mixed reads. [8] studied safe paths that appear in all constrained path covers of a directed acyclic graph (DAG). Zheng, Ma and Kingsford studied the more practical setting of a network flow in a DAG by finding those paths that appear in any flow decomposition of the given network flow, under a probabilistic framework [34], or a combinatorial framework [58].3 [27] presented a simple characterization of safe paths appearing in any flow decomposition of a given acyclic network flow, leading to a more efficient algorithm than the one of [58], further optimized by [28].
+ Motivation. Despite the significant progress in obtaining safe algorithms for a range of different applications, current safe algorithms are limited to problems where computing a solution itself is achievable in polynomial time. However, many natural problems are NP-hard, and safe algorithms for such problems are fully missing. Apart from the theoretical interest, usually such NP-hard problems correspond to restrictions of easier (polynomially-computable) problems, and thus by definition, also have longer safe sub-solutions. As such, current safety algorithms miss data that could be reported as correct, just because they do not constrain the solution space enough. A major reason for this lack of progress is that if a problem is NP-hard, then its safety version is likely to be hard too. This phenomenon can be found both in classically studied NP-hard problems — for example, computing the nodes present in all maximum independent sets of an undirected graph is NP-hard [21] — as well as in NP-hard problems studied for their application to Bioinformatics, as we discuss further in the appendix.
+ We introduce our results by focusing on the flow decomposition problem. This is a classical model at the core of multi-assembly software for RNA transcripts [33,31,5,50] and viral quasi-species genomes [3,2,44,12], and also a standard problem with applications in other fields, such as networking [36,22,13,23] or transportation [39,38]. In its most basic optimization form, minimum flow decomposition (MFD), we are given a flow in a graph, and we need to decompose it into a minimum number of paths with associated weights, such that the superposition of these weighted paths gives the original flow. This is an NP-hard problem, even when restricted to DAGs [53,22]. Various approaches have been proposed to tackle the problem, including fixed-parameter tractable algorithms [30], approximation algorithms [36,7] and Integer Linear Programming formulations [15,46].
+ 3 The problem AND-Quant from [58] actually handles a more general version of this problem.
+ 
86
+ In Bioinformatics applications, reads or contigs originating from a mixed sample of genomic sequences with different abundances are aligned to a reference. A graph model, such as a splice graph or a variation graph, is built from these alignments. Read abundances assigned to the nodes and edges of this graph then correspond to a flow in case of perfect data. If this is not the case, the abundance values can either be minimally corrected to become a flow, or one can consider variations of the problem where, e.g., the superposition of the weighted paths is closest (or within a certain range) to the edge abundances [50,5].
+ Current safety algorithms for flow decompositions such as [58,27,26,28] compute paths appearing in all possible flow decompositions (of any size), even though decompositions of minimum size are assumed to better model the RNA assembly problem [30,48,55]. Even dropping the minimality constraint, but adding other simple constraints, easily renders the problem NP-hard (see e.g., [56]), motivating further study of practical safe algorithms for NP-hard problems.
+ Contributions. Integer Linear Programming (ILP) is a general and flexible method that has been successfully applied to solve NP-hard problems, including in Bioinformatics. In this paper, we consider graph problems whose solution consists of a set of paths (i.e., not repeating nodes) that can be formulated in ILP. We introduce a technique that, given an ILP formulation of such a graph problem, can enhance it with additional variables and constraints in order to test the safety of a given set of paths. An obvious first application of this safety test is to use it with a single path in a straightforward avoid-and-test approach, using a standard two-pointer technique that has been used previously to find safe paths for flow decomposition. However, we find that a top-down recursive approach that uses the group-testing capability halves the number of computationally-intensive ILP calls, resulting in a 3x speedup over the straightforward approach.
+ Additionally, we prove that computing all the safe paths for MFDs is an intractable problem, confirming the above intuitive claim that if a problem is hard, then its safety version is also hard. We give this proof in the appendix by showing that the NP-hardness reduction for MFD by [22] can be modified into a Turing reduction from the UNIQUE 3SAT problem.
+ On the dataset [47] containing splice graphs from human, zebrafish and mouse transcriptomes, safe paths for MFDs (SafeMFD) correctly recover up to 90% of the full RNA transcripts while maintaining 99% precision, outperforming, by a wide margin (25% increase), state-of-the-art safety approaches, such as extended unitigs [35,24,29], safe paths for constrained path covers of the edges [8], and safe paths for all flow decompositions [28,27,26,58]. On the harder dataset by [26], SafeMFD also dominates in a significant proportion of splice graphs (built from t ≤ 15 RNA transcripts), recovering more than 95% of the full transcripts while maintaining 98% precision. For larger t, precision drastically drops (91% precision in the entire dataset), suggesting that in more complex splice graphs smaller solutions are introduced as an artifact of the combinatorial nature of the splice graph, and the minimality condition [30,48,55] is thus incorrect in this domain.
+ 2 Methods
+ 2.1 Preliminaries
+ ILP models. In this paper we use ILP models as blackboxes, with as few assumptions as possible to further underline the generality of our approach. Let M(V, C) be an ILP model consisting of a set V of variables and a set C of constraints on these variables, built from an input graph G = (V, E). We make only two assumptions on M. First, that a solution to this model consists of a given number k ≥ 1 of paths P1, . . . , Pk in G (in this paper, paths do not repeat vertices). Second, we assume that the k paths are modeled via binary edge variables xuvi, for all (u, v) ∈ E and for all i ∈ {1, . . . , k}. More specifically, for all i ∈ {1, . . . , k}, we require that the edges (u, v) ∈ E for which the corresponding variable xuvi equals 1 induce a path in G. For example, if G is a DAG, it is a standard fact (see e.g., [49]) that a path from a given s ∈ V to a given t ∈ V (an s-t path) can be expressed with the following constraints:
+ \sum_{(u,v) \in E} x_{uvi} - \sum_{(v,u) \in E} x_{vui} = \begin{cases} 0, & \text{if } v \in V \setminus \{s, t\}, \\ 1, & \text{if } v = t, \\ -1, & \text{if } v = s. \end{cases} \qquad (1)
+ If G is not a DAG, there are other types of constraints that can be added to the xuvi variables to ensure that they induce a path; see, for example, the many formulations in [49]. We will assume that such constraints are part of the set C of constraints of M(V, C), but their exact formulation is immaterial for our approach. In fact, one could even add additional constraints to C to further restrict the solution space. For example, some ILP models from [15,46] handle the case when the input also contains a set of paths (subpath constraints) that must appear in at least one of the k solution paths.
159
+ Flow decomposition. In the flow decomposition problem we are given a flow network (V, E, f), where G = (V, E) is a (directed) graph with unique source s ∈ V and unique sink t ∈ V, and f assigns a positive integer flow value fuv to every edge (u, v) ∈ E. Flow conservation must hold for every node different from s and t, namely, the sum of the flow values entering the node must equal the sum of the flow values exiting the node. See Figure 1(a) for an example. We say that k s-t paths P1, . . . , Pk, with associated positive integer weights w1, . . . , wk, are a flow decomposition (FD) if their superposition equals the flow f. Formally, for every (u, v) ∈ E it must hold that
+ \sum_{i \in \{1,\dots,k\} \text{ s.t. } (u,v) \in P_i} w_i = f_{uv}. \qquad (2)
+ See Figures 1(b) and 1(c) for two examples. The number k of paths is also called the size of the flow decomposition. In the minimum flow decomposition (MFD) problem, we need to find a flow decomposition of minimum size.4 On DAGs, a flow decomposition into paths always exists [1], but in general graphs, cycles may be necessary to decompose the flow (see e.g. [16] for different possible formulations of the problem).
175
+ For concreteness, we now describe the ILP models from [15] for finding a flow decomposition into k weighted paths in a DAG. They consist of (i) modeling the k paths via the xuvi variables (with constraints (1)), (ii) adding path-weight variables w1, . . . , wk, and (iii) requiring that these weighted paths form a flow decomposition, via the following (non-linear) constraint:
+ \sum_{i \in \{1,\dots,k\}} x_{uvi} w_i = f_{uv}, \qquad \forall (u,v) \in E. \qquad (3)
+ This constraint can then be easily linearized by introducing additional variables and constraints; see e.g. [15] for these technical details. However, as mentioned above, the precise formulation of the ILP model M for a problem is immaterial for our method. Only the two assumptions on M made above matter for obtaining our results.
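As a hedged illustration of what such a linearization can look like (a standard big-M scheme; not necessarily the exact formulation of [15]): each product x·w, with x binary and 0 ≤ w ≤ W, is replaced by a fresh variable p constrained by four linear inequalities whose only feasible value is p = x·w:

```python
def respects_linearization(x, w, p, W):
    """The four linear constraints that replace the product p = x * w,
    for binary x and 0 <= w <= W:
        p <= W*x,  p <= w,  p >= w - (1 - x)*W,  p >= 0."""
    return p <= W * x and p <= w and p >= w - (1 - x) * W and p >= 0

# With x = 0 the constraints force p = 0; with x = 1 they force p = w.
```

Enumerating all integer combinations for a small bound W confirms that exactly one p is feasible for each (x, w), namely x·w.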
+ Safety. Given a problem on a graph G whose solutions consist of k paths in G, we say that a path P is safe if for any solution P1, . . . , Pk to the problem, there exists some i ∈ {1, . . . , k} such that P is a subpath of Pi. If the problem is given as an ILP model M, we also say that P is safe for M. We say that P is a maximal safe path if P is a safe path and there is no larger safe path containing P as a subpath. [27] characterized safe paths for all FDs (not necessarily of minimum size) using the excess flow fP of a path P, defined as the flow on the first edge of P minus the flow on the edges out-going from the internal nodes of P and different from the edges of P (see Figure 1(d) for an example). It holds that P is safe for all FDs if and only if fP > 0 [27].
+ 4 In this paper we work only with integer flow values and weights, for simplicity and since this is the most studied version of the problem; see e.g., [30]. However, the problem can also be defined with fractional weights [41], and in this case the two problems can have different minima on the same input [53]. This fractional case can also be modeled by ILP [15], and all the results from our paper also immediately carry over to this variant.
+ 
+ (a) A flow network with source s and sink t.
+ (b) An MFD into 4 paths of weights 5, 3, 7, 2, respectively. The green dashed path is a subpath of the orange path.
+ (c) An MFD into 4 paths of weights 3, 4, 1, 9, respectively. The green dashed path is a subpath of the pink path.
+ (d) The two subpaths (red and blue) of the green dashed path that are maximal safe paths for all FDs.
+ Fig. 1: Flow decompositions and safe paths. The flow network in (a) admits different MFDs, in (b) and in (c). The path (s, a, b, c, d) (dashed green) is a maximal safe path for MFDs, i.e., it is a subpath of some path of all MFDs and it cannot be extended without losing this property. However, the path (s, a, b, c, d) is not safe for all FDs. Indeed, its two subpaths (s, a, b) (dashed red in (d)) and (b, c, d) (dashed blue in (d)) are maximal safe paths for all FDs. To see this, note that the excess flow of (s, a, b) is 3, while the excess flow of (s, a, b, c) (and of (s, a, b, c, d)) is −6. (Figure graphics omitted in this text extraction.)
+ 
+ The excess flow can be computed in time linear in the length of P (assuming we have pre-computed the flow outgoing from every node), thus giving a linear-time verification of whether P is safe.
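A minimal sketch of this linear-time check (the network and names below are our own toy example, not Figure 1):

```python
def excess_flow(path, flow, out_flow):
    """Excess flow of a path P: the flow on its first edge minus the flow
    leaving the internal nodes of P on edges that do not belong to P."""
    excess = flow[(path[0], path[1])]          # flow on the first edge of P
    for i in range(1, len(path) - 1):          # internal nodes only
        v, nxt = path[i], path[i + 1]
        excess -= out_flow[v] - flow[(v, nxt)]  # flow diverging away from P at v
    return excess

# Toy network; out_flow is precomputed once per node.
flow = {("s", "a"): 8, ("s", "b"): 2, ("a", "b"): 5, ("a", "t"): 3, ("b", "t"): 7}
out_flow = {"s": 10, "a": 8, "b": 7, "t": 0}
# (s, a, b) has excess 8 - (8 - 5) = 5 > 0, hence it is safe for all FDs.
```

Because each internal node is visited once and out_flow is precomputed, the check runs in time linear in the length of the path, as stated above.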
+ A basic property of safe solutions is that any sub-solution of them is also safe. Computing safe paths for MFDs can thus potentially lead to joining several safe paths for FDs, obtaining longer paths from the unknown sequences we are trying to assemble. See Figure 1 for an example of a maximal safe path for MFDs and two maximal subpaths of it that are safe for FDs.
+ 2.2 Finding Maximal Safe Paths for MFD via ILP
+ We now present a method for finding all maximal safe paths for MFD via ILP. The basic idea is to define an inner “safety test” which can be repeatedly called as part of an outer algorithm over the entire instance to find all maximal safe paths. Because calls to the ILP solver are expensive, the guiding choice for our overall approach is to minimize the number of ILP calls. This inspires us to test the safety of a group of paths as the inner safety test, which we achieve by augmenting our ILP model so that it can give us information about the safety of the paths in the set. We use this to define a recursive algorithm to fully determine the safety status of each path in a group of paths. We can then structure the safety test in either a top-down manner (starting with long unsafe paths and shrinking them until they are safe) or a bottom-up manner (starting with short safe paths and lengthening them until they become unsafe).
+ Safety test (inner algorithm). Let M(V, C) be an ILP model as discussed in Section 2.1; namely, its k solution paths are modeled by binary variables xuvi for each (u, v) ∈ E and each i ∈ {1, . . . , k}. We assume that M(V, C) is feasible (i.e., the problem admits at least one solution). We first show how to modify the ILP model so that, for a given set of paths, it can tell us one of the following: (1) a set of paths that are not safe (the remaining being of unknown status), or (2) that all paths are safe. The idea is to maximize the number of paths that can be simultaneously avoided from the given set of paths.
+ Let P be a set of paths. For each path P ∈ P, we create an auxiliary binary variable γP that indicates:
+ \gamma_P \equiv \begin{cases} 1 & \text{if } P \text{ was avoided in the solution,} \\ 0 & \text{otherwise.} \end{cases} \qquad (4)
+ Since the model solutions are paths (i.e., not repeating nodes), we can encode whether a path P on ℓ nodes appears in the solution by whether all of its ℓ − 1 edges appear simultaneously. Using this fact, we add a new set of constraints R(P) that include the γP indicator variables for each path P ∈ P:
+ R(P) := \{ x_{v_1 v_2 i} + x_{v_2 v_3 i} + \cdots + x_{v_{\ell-1} v_\ell i} \le \ell - 1 - \gamma_P : \forall i \in \{1,\dots,k\}, \forall P \in P \}. \qquad (5)
+ Next, as the objective function of the ILP model, we require that it maximize the number of avoided paths from P, i.e., the sum of the γP variables:
+ \max \sum_{P \in P} \gamma_P. \qquad (6)
+ All paths P such that γP = 1 are unsafe, since they were avoided in some minimum flow decomposition. Conversely, if the objective value of Eq. (6) is 0, then γP = 0 for all paths in P, and it must be that all paths in P are safe (if not, at least one path could be avoided and increase the objective). We encapsulate this group-testing ILP in a function GroupTest(M, P) that returns a set N ⊆ P with the properties that: (1) if N = ∅, then all paths in P are safe, and (2) if N ≠ ∅, then all paths in N are unsafe (and |N| is maximized).
+ We employ GroupTest(M, P) to construct a recursive procedure GetSafe(M, P) that determines all safe paths in P, as shown in Algorithm 1.
+ 
+ Fig. 2: Illustration of modeling a solution path and a tested path via binary edge variables and safety verification constraints. The ith solution path Pi is shown in orange, and a tested path P is shown in dashed green. Constraint (5) includes x_{sai} + x_{abi} + x_{bci} + x_{cdi} + x_{dei} ≤ 5 − γ_P. This simplifies to γ_P ≤ 0, thus forcing γ_P = 0, which indicates P was not avoided in the solution. (Figure graphics omitted in this text extraction.)
+ Algorithm 1: Testing a set of paths P for safety.
+ Input: A feasible ILP model M(V, C), and a set of paths P
+ Output: Those paths P ∈ P that are safe for M(V, C)
+ 1 Procedure GetSafe(M, P)
+ 2     N = GroupTest(M, P)
+ 3     if N = ∅ then
+ 4         return P
+ 5     else
+ 6         return GetSafe(M, P \ N)
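The recursion of Algorithm 1 is easy to sketch in Python. Here GroupTest is replaced by a stand-in oracle of our own making; in the actual method it is an ILP solve that maximizes objective (6) and returns the avoided (unsafe) paths:

```python
def get_safe(group_test, paths):
    """Algorithm 1: repeatedly drop the paths the oracle proves unsafe;
    when no remaining path can be avoided, the survivors are all safe."""
    unsafe = group_test(paths)
    if not unsafe:
        return paths
    return get_safe(group_test, paths - unsafe)

# Stand-in oracle (hypothetical): pretend every path using edge ("a", "d")
# can be avoided in some optimal solution, i.e., is unsafe.
def mock_group_test(paths):
    return {p for p in paths if ("a", "d") in zip(p, p[1:])}

paths = {("s", "a", "d", "t"), ("s", "a", "b", "t"), ("s", "b", "t")}
```

With this oracle, the first call discards ("s", "a", "d", "t"); the second call can avoid nothing, so the two remaining paths are returned as safe after only two oracle (ILP) calls.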
+ We note that in the special case that |P| = 1, GetSafe(M, P) makes only a single call to the ILP via GroupTest(M, P) to determine whether or not the given path is safe. With this safety test for a single path, we can easily adapt a standard two-pointer approach as the outer algorithm to find all maximal safe paths for MFD by starting with some MFD solution P1, . . . , Pk of M(V, C). This same procedure was used in [26] to find all maximal safe paths for FD, using an excess flow check as the inner safety algorithm.
+ Find all maximal safe paths (outer algorithm). We give two algorithms for finding all maximal safe paths. Both algorithms use a similar approach; however, the first is top-down: starting from the original full solution paths, it reports all safe paths among them (these must be maximal safe), and then trims the unsafe paths to find new maximal safe paths. The second is bottom-up in that it tries to extend known safe subpaths until they cannot be further extended (at which point they must be maximal safe). We present the first algorithm in detail and defer discussion of the second to the appendix.
+ We say a set of subpaths T = {Pi[li, ri]} is a trimming core provided that for any unreported maximal safe path P = Pi[l, r], there is a Pi[li, ri] ∈ T with li ≤ l ≤ r ≤ ri.
+ We will use the original k solution paths {Pi} as our initial trimming core; the complete algorithm is given in Algorithm 2. See Fig. 3 in the appendix for an illustration of the algorithm’s initial steps. The algorithm first checks if any of the paths in T are safe; if so, these are reported as maximal safe. For those paths that were unsafe, it then considers trimming one vertex from the left and one vertex from the right to create new subpaths. Of these subpaths, some may be contained in a safe path in T; these subpaths can be ignored as they are not maximal safe. The algorithm recurses on those subpaths whose safety status cannot be determined (lines 6–10). In this way, the algorithm maintains the invariant that no paths in T are properly contained in a safe path; thus paths reported in line 4 must be maximal safe.
+
711
+ Algorithm 2: An algorithm to compute all maximal safe paths that can be trimmed from a
712
+ trimming core set T .
713
+ Input: An ILP model M and a trimming core set T
714
+ Output: All maximal safe paths for M that are trimmed subpaths of T
715
+ 1 Procedure AllMaxSafe-TopDown(M, T )
716
+ 2
717
+ S = GetSafe(M, T ) for Pi[li, ri] ∈ S do
718
+ 3
719
+ output Pi[li, ri]
720
+ 4
721
+ U = T \ S L = {Pi[li + 1, ri] : Pi[li, ri] ∈ U, (ri = |Pi| or Pi[li + 1, ri + 1] ∈ U)}
722
+ R = {Pi[li, ri − 1] : Pi[li, ri] ∈ U, (li = 1 or Pi[li − 1, ri − 1] ∈ U)} P = L ∪ R if P ̸= ∅ then
723
+ 5
724
+ AllMaxSafe-TopDown(M, P)
725
+ 3
726
+ Experiments
727
+ To test the performance of our methods, we computed safe paths using different safety approaches and re-
728
+ ported the quality and running time performances as described below. Additional details on the experimental
729
+ setup are given in the appendix.
730
Implementation details – SafeMFD. We implemented the previously described algorithms to compute all maximal safe paths for minimum flow decompositions in Python. The implementation, SafeMFD, uses the package NetworkX [20] for graph processing and the package gurobipy [19] to model and solve the ILPs, and it is openly available5. Our fastest variant (see Table 2 in the appendix for a comparison of running times) implements Algorithm 2 using the group testing in Algorithm 1. We used this variant to compare against other safety approaches. All tested variants of SafeMFD implement the following two optimizations:
1. Before processing an input flow graph we contract it using Y-to-V contraction [51], which is known [30] to maintain (M)FD solution paths. Moreover, since edges in the contracted graph correspond to extended unitigs [35,24,29], source-to-sink edges are further removed from the contracted graph and reported as safe. As such, our algorithms compute all maximal safe paths for funnels [17,26] without using the ILP.
2. Before testing the safety of a path we check if its excess flow [26] is positive. If this is the case, the path is removed from the corresponding test, since positive excess flow implies safety for all flow decompositions, and thus in particular for minimum flow decompositions.
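The excess-flow check in the second optimization can be sketched in a few lines. This is an illustrative re-implementation, not the SafeMFD code, using one common formulation from [26]: the excess flow of a path is the flow on its first edge minus the flow diverging off the path at internal nodes; a positive value means that many flow units must traverse the whole path in every flow decomposition, so the path is safe. The function name and the edge-dict representation are assumptions of the sketch.

```python
def excess_flow(flow, path):
    """flow: dict mapping directed edges (u, v) to their flow value.
    path: list of nodes [v1, ..., vt], t >= 2.
    Returns f(v1, v2) minus the total flow leaving the path at its
    internal nodes; a positive result implies the path is safe for
    all flow decompositions."""
    excess = flow[(path[0], path[1])]
    for idx in range(1, len(path) - 1):
        v = path[idx]
        on_path = flow[(v, path[idx + 1])]          # flow staying on the path
        out_total = sum(f for (u, _w), f in flow.items() if u == v)
        excess -= out_total - on_path               # flow diverted off the path at v
    return excess
```

For instance, on a graph where 5 units enter the path's first edge but 2 units divert away at the first internal node, the excess flow is 3, so at least 3 units follow the entire path.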
Safety approaches tested. We compare the following state-of-the-art safety approaches:
EUnitigs: Maximal paths made up of a prefix of nodes with in-degree one followed by nodes with out-degree one; also called extended unitigs [51,35,24,29]. We use the C++ implementation provided by Khan et al. [26] (which computes only the extended unitigs contained in FD paths).
SafeFlow: Maximal safe paths for all flow decompositions [26]. We use the C++ implementation provided by Khan et al. [26].
SafeMFD: Maximal safe paths for all minimum flow decompositions, as proposed in this work. Every flow graph processed is given a time budget of 2 minutes. If a flow graph consumes its time budget, the solution of SafeFlow is output instead.
SafeEPC: Maximal safe paths for all constrained path covers of edges. Previous authors [8,26] have considered safe path covers of the nodes, but for a fairer comparison, we instead use path covers of edges. To this end, we transform the input graphs by splitting every edge (adding a node in its middle) and run the C++ implementation provided by the authors of [8]. Since flow decompositions are path covers of edges, safe paths for all edge path covers are subpaths of safe paths for MFD. However, we restrict the path covers to those of minimum size and minimum size plus one, as recommended by the authors of [8], to obtain good coverage results while maintaining high precision.
5 https://github.com/algbio/mfd-safety
All safety approaches require a post-processing step for removing duplicates, prefixes and suffixes. We use the C++ implementation provided by [26] for this purpose.
Datasets. We use two datasets of flow graphs inspired by RNA transcript assembly. The datasets were created by simulating abundances on a set of transcripts and then perfectly superposing them into splice graphs that are guaranteed to respect flow conservation. As such, the ground truth corresponds to a flow decomposition (not necessarily a minimum one). To avoid a skewed picture of our results we filtered out trivial instances with a unique flow decomposition (i.e., funnels, see [17,26]) from the two datasets.6
Catfish: Created by [48], it includes 100 simulated human, mouse and zebrafish transcriptomes using Flux-Simulator [18], as well as 1,000 experiments from the Sequence Read Archive simulating abundances using Salmon [40]. We took one experiment per dataset, which corresponds to 27,696 non-trivial flow graphs.
RefSim: Created by [8] from the Ensembl [57] annotated transcripts of the GRCh38.104 homo sapiens reference genome, and later augmented by Khan et al. [26] with simulated abundances using RNASeqReadSimulator [32]. This dataset has 10,323 non-trivial graphs.
Quality metrics. We use the same quality metrics employed by previous multi-assembly safety approaches [8,26]. We provide a high-level description of them for completeness.
Weighted precision of reported paths: As opposed to normal precision, the weighted version considers the length of the reported subpaths. It is computed as the total length of the correctly reported subpaths divided by the total length of all reported subpaths. A reported subpath is considered correct if and only if it is a subpath of some path in the ground truth (exact alignment of exons/nodes).
Maximum coverage of a ground truth path P: The longest segment of P covered by some reported subpath (exact alignment of exons/nodes), divided by |P|.
We compute the weighted precision of a graph as the average weighted precision over all reported paths in the graph, and the maximum coverage of a graph as the average maximum coverage over all ground truth paths in the graph.
F-Score of a graph: Harmonic mean between the weighted precision and the maximum coverage of a graph, which assigns a global score to the corresponding approach on the graph.
These metrics are computed per flow graph and reported as an average. In the case of the Catfish dataset the metrics are computed in terms of exons (nodes), since genomic coordinates of exons are missing, whereas in the case of the RefSim dataset the metrics are computed in terms of genomic positions, as this information is present in the input.
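Under the exact-alignment-of-nodes convention above, these metrics can be sketched in a few lines. This is an illustrative re-implementation over node sequences, not the evaluation code used in the paper (which, for RefSim, works over genomic positions); all function names are assumptions of the sketch.

```python
def is_subpath(sub, path):
    # True iff `sub` appears as a contiguous run of nodes inside `path`.
    n, m = len(sub), len(path)
    return any(path[i:i + n] == sub for i in range(m - n + 1))

def weighted_precision(reported, ground_truth):
    # Total length of correctly reported subpaths over total reported length.
    correct = sum(len(p) for p in reported
                  if any(is_subpath(p, g) for g in ground_truth))
    return correct / sum(len(p) for p in reported)

def max_coverage(ground_truth_path, reported):
    # Longest segment of the ground truth path covered by a single
    # reported subpath, divided by the path's length.
    best = max((len(p) for p in reported
                if is_subpath(p, ground_truth_path)), default=0)
    return best / len(ground_truth_path)

def f_score(wp, mc):
    # Harmonic mean of weighted precision and maximum coverage.
    return 2 * wp * mc / (wp + mc) if wp + mc else 0.0
```

For a ground truth path (1, 2, 3, 4) and reported subpaths (1, 2, 3) and (2, 5), only the first is correct, giving weighted precision 3/5 and maximum coverage 3/4.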
4 Results and Discussion
In the Catfish dataset, EUnitigs and SafeFlow ran in less than a second, while SafeEPC took approximately 30 seconds to compute. On the other hand, solving a harder problem, SafeMFD took approximately 1.5 hours on the rest of the dataset, timing out in only 54 graphs (we use a cutoff of 2 minutes), i.e., only 0.2% of the entire dataset. This equates to only 0.2 seconds on average per solved graph, underlining the scalability of our approach.
Table 1 shows that SafeMFD, on average, covers close to 90% of the ground truth paths, while maintaining a high precision (99%). This corresponds to an increase of approximately 25% in coverage against its closest competitor, SafeFlow. SafeMFD also dominates in the combined F-Score metric, being the only safe approach with an F-Score over 90%. Figure 4 in the appendix shows the metrics on graphs grouped by the number t of ground truth paths, indicating the dominance in coverage and F-Score of SafeMFD across all values of t, and indicating that the decrease in precision appears for large values of t (t ≥ 12).
6 The exact datasets used in our experiments can be found at https://zenodo.org/record/7182096.
Table 1: Summary of quality metrics for both datasets. For Catfish, the metrics are computed in terms of nodes/exons and for RefSim in terms of genomic positions; t is the number of ground truth paths.

Dataset  Graphs        Algorithm  Max. Coverage  Wt. Precision  F-Score
Catfish  All (100%)    EUnitigs   0.60           1.00           0.74
                       SafeEPC    0.60           0.99           0.74
                       SafeFlow   0.71           1.00           0.82
                       SafeMFD    0.88           0.99           0.93
RefSim   t ≤ 10 (68%)  EUnitigs   0.72           1.00           0.83
                       SafeEPC    0.73           1.00           0.84
                       SafeFlow   0.84           1.00           0.91
                       SafeMFD    0.97           0.99           0.98
         t ≤ 15 (84%)  EUnitigs   0.70           1.00           0.82
                       SafeEPC    0.71           1.00           0.83
                       SafeFlow   0.83           1.00           0.90
                       SafeMFD    0.96           0.98           0.97
         All (100%)    EUnitigs   0.68           1.00           0.80
                       SafeEPC    0.69           0.99           0.81
                       SafeFlow   0.81           1.00           0.89
                       SafeMFD    0.93           0.91           0.90
In the harder RefSim dataset, EUnitigs and SafeFlow also ran in less than a second, while SafeEPC took approximately 2 minutes. In this case, SafeMFD ran out of time in 1,562 graphs (15% of the entire dataset); however, recall that in these experiments we allow a time budget of only 2 minutes. On the rest of the dataset, it took approximately 7.5 hours in total, corresponding to only 3 seconds on average per graph, again underlining that our method, even though it solves many NP-hard problems for each input graph, overall scales sufficiently well.
Table 1 shows that again SafeMFD dominates in coverage, being the only approach obtaining coverage over 90%, which is a 15% improvement over SafeFlow. This time its precision drops to close to 90%, resulting in an F-Score of 90%, very similar to its closest competitor, SafeFlow. However, recall that coverage is computed only from correctly aligned paths; thus the drop in precision comes only from safe paths not counting in the coverage metric. If we restrict the metrics to graphs with at most 15 ground truth paths, which is still a significant proportion (84%) of the entire dataset, then SafeMFD has a very high precision (98%) while improving coverage by 15% with respect to SafeFlow. Thus, the drop in precision occurs in graphs with a large number of ground truth paths, which can also be corroborated by Figure 5 in the appendix.
These drops in precision (both in RefSim and Catfish) for large t can be explained by the fact that a larger number of ground truth paths produces more complex splice graphs and introduces more artificial solutions of potentially smaller size. As such, the larger t, the less likely that the ground truth is a minimum flow decomposition of the graph, and thus the more likely that SafeMFD reports incorrect solutions. This motivates future work on safety not only for minimum flow decompositions, but also for flow decompositions of at most a certain size, analogously to how it is done for SafeEPC. This is still easily achievable within our framework by just changing the ILP blackbox and keeping everything else (e.g., the inner and outer algorithms) unchanged. Namely, instead of formulating the ILP model M(V, C) to admit solutions of exactly the optimal number k of paths, it can be changed to allow solutions of at most k′ paths, for some k′ greater than the optimal k. If k′ is also greater than the number of ground truth paths in these complex graphs, then safe paths are fully correct, meaning that we overall increase precision.
5 Conclusion
RNA assembly is a difficult problem in practice, with even the top tools reporting low precision values. While there are still many issues that can introduce uncertainty in practice, we can now provide a major source of additional information during the process: which RNA fragments must be included in any parsimonious explanation of the data? Though others have considered RNA assembly in the safety framework [58,26], we are the first to show that safety can be practically used even when we look for optimal (i.e., minimum) size solutions. Our experimental results show that safe paths for MFD clearly outperform other safe approaches on the Catfish dataset, commonly used in this field. On a significant proportion of the second dataset, safe paths for MFD still significantly outperform other safe methods.
More generally, this is the first work to show that the safety framework can be practically applied to NP-hard problems, where the inner algorithm is an efficient test of the safety of a group of paths, and the outer algorithm guides the applications of this test. Because our method was very successful on our test data set, there is strong motivation to try the approach on other NP-hard graph problems whose solutions are sets of paths. For example, we could study other variations of MFD, such as finding flow decompositions minimizing the longest path (NP-hard when flow values are integer [4,43]). The approach given in this paper can also be directly extended to find decompositions into both cycles and paths [16], though not trails and walks, because they repeat edges. We could also formulate a safety test for classic NP-hard graph problems like Hamiltonian path.
Acknowledgements. This work was partially funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 851093, SAFEBIO), partially by the Academy of Finland (grants No. 322595, 352821, 346968), and partially by the US National Science Foundation (NSF) (grants No. 1759522, 1920954).
References
1. Ravindra K Ahuja, Thomas L Magnanti, and James B Orlin. Network flows. Cambridge, Mass.: Alfred P. Sloan School of Management, Massachusetts, 1988.
2. Jasmijn A Baaijens, Leen Stougie, and Alexander Schönhuth. Strain-aware assembly of genomes from mixed samples using flow variation graphs. In International Conference on Research in Computational Molecular Biology, pp. 221–222. Springer, 2020.
3. Jasmijn A Baaijens, Bastiaan Van der Roest, Johannes Köster, Leen Stougie, and Alexander Schönhuth. Full-length de novo viral quasispecies assembly through variation graph construction. Bioinformatics, 35(24):5086–5094, 2019.
4. Georg Baier. Flows with path restrictions. Cuvillier Verlag, 2004.
5. Elsa Bernard, Laurent Jacob, Julien Mairal, and Jean-Philippe Vert. Efficient RNA isoform identification and quantification from RNA-Seq data with network flows. Bioinformatics, 30(17):2447–2455, 2014.
6. V. Bonnici, G. Franco, and V. Manca. Spectral concepts in genome informational analysis. Theoretical Computer Science, 894:23–30, 2021. Building Bridges – Honoring Nataša Jonoska on the Occasion of Her 60th Birthday.
7. Manuel Cáceres, Massimo Cairo, Andreas Grigorjew, Shahbaz Khan, Brendan Mumey, Romeo Rizzi, Alexandru I. Tomescu, and Lucia Williams. Width helps and hinders splitting flows. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms, ESA 2022, September 5-9, 2022, Berlin/Potsdam, Germany, volume 244 of LIPIcs, pp. 31:1–31:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
8. Manuel Cáceres, Brendan Mumey, Edin Husić, Romeo Rizzi, Massimo Cairo, Kristoffer Sahlin, and Alexandru I. Tomescu. Safety in multi-assembly via paths appearing in all path covers of a DAG. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2022. Accepted.
9. Massimo Cairo, Shahbaz Khan, Romeo Rizzi, Sebastian S. Schmidt, Alexandru I. Tomescu, and Elia C. Zirondelli. The hydrostructure: a universal framework for safe and complete algorithms for genome assembly. arXiv, abs/2011.12635, 2021.
10. Massimo Cairo, Paul Medvedev, Nidia Obscura Acosta, Romeo Rizzi, and Alexandru I Tomescu. An optimal O(nm) algorithm for enumerating all walks common to all closed edge-covering walks of a graph. ACM Transactions on Algorithms (TALG), 15(4):1–17, 2019.
11. Kun-Mao Chao, Ross C. Hardison, and Webb Miller. Locating well-conserved regions within a pairwise alignment. Bioinformatics, 9(4):387–396, 08 1993.
12. Jiao Chen, Yingchao Zhao, and Yanni Sun. De novo haplotype reconstruction in viral quasispecies using paired-end read guided path finding. Bioinformatics, 34(17):2927–2935, 2018.
13. Rami Cohen, Liane Lewin-Eytan, Joseph Seffi Naor, and Danny Raz. On the effect of forwarding table size on SDN network utilization. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pp. 1734–1742. IEEE, 2014.
14. Marie Costa. Persistency in maximum cardinality bipartite matchings. Oper. Res. Lett., 15(3):143–9, 1994.
15. Fernando H. C. Dias, Lucia Williams, Brendan Mumey, and Alexandru I. Tomescu. Fast, Flexible, and Exact Minimum Flow Decompositions via ILP. In RECOMB 2022 - 26th Annual International Conference on Research in Computational Molecular Biology, volume 13278 of Lecture Notes in Computer Science, pp. 230–245. Springer, 2022.
16. Fernando HC Dias, Lucia Williams, Brendan Mumey, and Alexandru I Tomescu. Minimum flow decomposition in graphs with cycles using integer linear programming. arXiv preprint arXiv:2209.00042, 2022.
17. Marcelo Garlet Millani, Hendrik Molter, Rolf Niedermeier, and Manuel Sorge. Efficient algorithms for measuring the funnel-likeness of DAGs. Journal of Combinatorial Optimization, 39(1):216–245, 2020.
18. Thasso Griebel, Benedikt Zacher, Paolo Ribeca, Emanuele Raineri, Vincent Lacroix, Roderic Guigó, and Michael Sammeth. Modelling and simulating generic RNA-seq experiments with the flux simulator. Nucleic Acids Research, 40(20):10073–10083, 2012.
19. Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2021.
20. Aric Hagberg, Pieter Swart, and Daniel S Chult. Exploring network structure, dynamics, and function using NetworkX. Technical report, Los Alamos National Lab. (LANL), Los Alamos, NM (United States), 2008.
21. P. L. Hammer, P. Hansen, and B. Simeone. Vertices belonging to all or to no maximum stable sets of a graph. SIAM Journal on Algebraic Discrete Methods, 3(4):511–522, 1982.
22. Tzvika Hartman, Avinatan Hassidim, Haim Kaplan, Danny Raz, and Michal Segalov. How to split a flow? In 2012 Proceedings IEEE INFOCOM, pp. 828–836. IEEE, 2012.
23. Chi-Yao Hong, Srikanth Kandula, Ratul Mahajan, Ming Zhang, Vijay Gill, Mohan Nanduri, and Roger Wattenhofer. Achieving high utilization with software-driven WAN. In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM, pp. 15–26, 2013.
24. Benjamin Grant Jackson. Parallel methods for short read assembly. PhD thesis, Iowa State University, 2009.
25. John D. Kececioglu and Eugene W. Myers. Combinatorial algorithms for DNA sequence assembly. Algorithmica, 13(1/2):7–51, 1995.
26. Shahbaz Khan, Milla Kortelainen, Manuel Cáceres, Lucia Williams, and Alexandru I Tomescu. Improving RNA assembly via safety and completeness in flow decompositions. Journal of Computational Biology, 2022.
27. Shahbaz Khan, Milla Kortelainen, Manuel Cáceres, Lucia Williams, and Alexandru I Tomescu. Safety and completeness in flow decompositions for RNA assembly. In International Conference on Research in Computational Molecular Biology, pp. 177–192. Springer, 2022.
28. Shahbaz Khan and Alexandru I Tomescu. Optimizing safe flow decompositions in DAGs. In 30th Annual European Symposium on Algorithms (ESA 2022). Schloss Dagstuhl–Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2022.
29. Carl Kingsford, Michael C Schatz, and Mihai Pop. Assembly complexity of prokaryotic genomes using short reads. BMC Bioinformatics, 11(1):1–11, 2010.
30. Kyle Kloster, Philipp Kuinke, Michael P O'Brien, Felix Reidl, Fernando Sánchez Villaamil, Blair D Sullivan, and Andrew van der Poel. A practical FPT algorithm for flow decomposition and transcript assembly. In 2018 Proceedings of the Twentieth Workshop on Algorithm Engineering and Experiments (ALENEX), pp. 75–86. SIAM, 2018.
31. Jingyi Jessica Li, Ci-Ren Jiang, James B Brown, Haiyan Huang, and Peter J Bickel. Sparse linear modeling of next-generation mRNA sequencing (RNA-Seq) data for isoform discovery and abundance estimation. Proceedings of the National Academy of Sciences, 108(50):19867–19872, 2011.
32. Wei Li. RNASeqReadSimulator: a simple RNA-seq read simulator, 2014.
33. Wei Li, Jianxing Feng, and Tao Jiang. IsoLasso: a LASSO regression approach to RNA-Seq based transcriptome assembly. Journal of Computational Biology, 18(11):1693–1707, 2011.
34. Cong Ma, Hongyu Zheng, and Carl Kingsford. Exact transcript quantification over splice graphs. Algorithms for Molecular Biology, 16(1):1–15, 2021.
35. Paul Medvedev, Konstantinos Georgiou, Gene Myers, and Michael Brudno. Computability of models for sequence assembly. In WABI, pp. 289–301, 2007.
36. Brendan Mumey, Samareh Shahmohammadi, Kathryn McManus, and Sean Yaw. Parity balancing path flow decomposition and routing. In 2015 IEEE Globecom Workshops (GC Wkshps), pp. 1–6. IEEE, 2015.
37. Dalit Naor and Douglas L Brutlag. On near-optimal alignments of biological sequences. Journal of Computational Biology, 1(4):349–366, 1994.
38. Jan Peter Ohst. On the Construction of Optimal Paths from Flows and the Analysis of Evacuation Scenarios. PhD thesis, University of Koblenz and Landau, Germany, 2015.
39. Nils Olsen, Natalia Kliewer, and Lena Wolbeck. A study on flow decomposition methods for scheduling of electric buses in public transport based on aggregated time–space network models. Central European Journal of Operations Research, pp. 1–37, 2020.
40. Rob Patro, Geet Duggal, and Carl Kingsford. Salmon: accurate, versatile and ultrafast quantification from RNA-seq data using lightweight-alignment. bioRxiv, p. 021592, 2015.
41. Mihaela Pertea, Geo M Pertea, Corina M Antonescu, Tsung-Cheng Chang, Joshua T Mendell, and Steven L Salzberg. StringTie enables improved reconstruction of a transcriptome from RNA-seq reads. Nature Biotechnology, 33(3):290–295, 2015.
42. Pavel A. Pevzner, Haixu Tang, and Michael S. Waterman. An Eulerian path approach to DNA fragment assembly. Proceedings of the National Academy of Sciences, 98(17):9748–9753, 2001.
43. Krzysztof Pieńkosz and Kamil Kołtyś. Integral flow decomposition with minimum longest path length. European Journal of Operational Research, 247(2):414–420, 2015.
44. Susana Posada-Céspedes, David Seifert, Ivan Topolsky, Kim Philipp Jablonski, Karin J Metzner, and Niko Beerenwinkel. V-pipe: a computational pipeline for assessing viral genetic diversity from high-throughput data. Bioinformatics, 2021.
45. Amatur Rahman and Paul Medvedev. Assembler artifacts include misassembly because of unsafe unitigs and under-assembly because of bidirected graphs. Genome Research, pp. gr–276601, 2022.
46. Palash Sashittal, Chuanyi Zhang, Jian Peng, and Mohammed El-Kebir. Jumper enables discontinuous transcript assembly in coronaviruses. Nature Communications, 12(1):6728, 2021.
47. Mingfu Shao and Carl Kingsford. Accurate assembly of transcripts through phase-preserving graph decomposition. Nature Biotechnology, 35(12):1167–1169, 2017.
48. Mingfu Shao and Carl Kingsford. Theory and a heuristic for the minimum path flow decomposition problem. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 16(2):658–670, 2017.
49. Leonardo Taccari. Integer programming formulations for the elementary shortest path problem. European Journal of Operational Research, 252(1):122–130, 2016.
50. Alexandru I Tomescu, Anna Kuosmanen, Romeo Rizzi, and Veli Mäkinen. A novel min-cost flow method for estimating transcript expression with RNA-Seq. In BMC Bioinformatics, volume 14, pp. S15:1–S15:10. Springer, 2013.
51. Alexandru I. Tomescu and Paul Medvedev. Safe and complete contig assembly through omnitigs. Journal of Computational Biology, 24(6):590–602, 2017. Preliminary version appeared in RECOMB 2016.
52. L.G. Valiant and V.V. Vazirani. NP is as easy as detecting unique solutions. Theoretical Computer Science, 47:85–93, 1986.
53. B. Vatinlen, F. Chauvet, P. Chrétienne, and P. Mahey. Simple bounds and greedy algorithms for decomposing a flow into a minimal set of paths. European Journal of Operational Research, 185(3):1390–1401, 2008.
54. Martin Vingron and Patrick Argos. Determination of reliable regions in protein sequence alignments. Protein Engineering, Design and Selection, 3(7):565–569, 1990.
55. Lucia Williams, Alexandru Tomescu, Brendan Marshall Mumey, et al. Flow decomposition with subpath constraints. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2021.
56. Lucia Williams, Alexandru I. Tomescu, and Brendan Mumey. Flow decomposition with subpath constraints. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2022. Accepted.
57. Andrew D Yates, Premanand Achuthan, Wasiu Akanni, James Allen, Jamie Allen, Jorge Alvarez-Jarreta, M Ridwan Amode, Irina M Armean, Andrey G Azov, Ruth Bennett, et al. Ensembl 2020. Nucleic Acids Research, 48(D1):D682–D688, 2020.
58. Hongyu Zheng, Cong Ma, and Carl Kingsford. Deriving ranges of optimal estimated transcript expression due to nonidentifiability. Journal of Computational Biology, 29(2):121–139, 2022. Presented at RECOMB 2021.
A Additional Figures

[Figure 3, three panels over MFD solution paths P1, P2, P3, P4: (a) First group test; (b) Result: {P1, P3} are safe, {P2, P4} are unsafe; (c) Second group test.]

Fig. 3: Illustration of the initial group tests performed by Algorithm 2. Fig. 3(a) shows the first group test (using Algorithm 1) on MFD solution paths {P1, P2, P3, P4}; suppose {P1, P3} were safe (Fig. 3(b)); these are then reported as maximal safe. In this case we trim {P2, P4} on both the left and right and make the next group test shown in Fig. 3(c).
[Figure 4, three panels: (a) Weighted Precision; (b) Maximum Coverage; (c) F-Score.]
Fig. 4: Quality metrics on graphs distributed by number of paths in the ground truth for the Catfish dataset. The metrics are computed in terms of exons/nodes.

[Figure 5, three panels: (a) Weighted Precision; (b) Maximum Coverage; (c) F-Score.]
Fig. 5: Quality metrics on graphs distributed by number of paths in the ground truth for the RefSim dataset. The metrics are computed in terms of genomic positions.
B Additional Algorithms and Experimental Results

B.1 The bottom-up algorithm

Algorithm 3, detailed below, uses a bottom-up group-testing strategy to find all maximal safe paths.

Definition 1. We say a set of subpaths E = {Pi[li, ri]} is an extending core provided all paths in E are safe and, for any unreported maximal safe path P = Pi[l, r], there is a Pi[li, ri] ∈ E with l ≤ li ≤ ri ≤ r.

Note that the maximal FD-safe subpaths provide an extending core (as does the set of all single-edge subpaths of each path). Algorithm 3 finds all maximal safe paths by group testing, starting from an extending core. The idea is to try both left-extending (by one) and right-extending (by one) each subpath in the core; if neither of these extensions is safe, then that core subpath must be maximal safe. Testing all extensions can be done quickly using Algorithm 1. We then recurse on a new core set consisting of those extensions that were found to be safe.
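One round of this bottom-up extension step can be sketched as follows. This is an illustrative re-implementation, not the SafeMFD code: subpaths are represented as `(i, l, r)` triples, and `get_safe` is a caller-supplied group test standing in for the ILP-based GetSafe of Algorithm 1.

```python
def bottom_up_round(core, lengths, get_safe):
    """core: set of (i, l, r) safe subpaths; lengths[i] = |P_i|;
    get_safe: maps a set of candidate subpaths to its safe subset
    (one ILP group test). Returns (maximal, next_core): the core
    subpaths with no safe one-node extension, and the safe extensions
    to recurse on."""
    left = {(i, l - 1, r) for (i, l, r) in core if l > 1}
    right = {(i, l, r + 1) for (i, l, r) in core if r < lengths[i]}
    safe = get_safe(left | right)
    # A core subpath is maximal safe iff neither extension survived.
    maximal = {(i, l, r) for (i, l, r) in core
               if (i, l - 1, r) not in safe and (i, l, r + 1) not in safe}
    return maximal, safe
```

For example, if only the left extension of a core subpath is safe, the subpath is not reported and the extension becomes the next core; if no extension is safe, the subpath is reported as maximal.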
Algorithm 3: An algorithm to output all maximal safe subpaths that can be extended from an extending core set E.
Input: An ILP model M and an extending core set E
Output: All maximal safe paths for M that extend some path from E
1 Procedure AllMaxSafe-BottomUp(M, E)
2   L = {Pi[li − 1, ri] : Pi[li, ri] ∈ E, li > 1}
3   R = {Pi[li, ri + 1] : Pi[li, ri] ∈ E, ri < |Pi|}
4   P = L ∪ R
5   S = GetSafe(M, P)
6   for Pi[li, ri] ∈ E do
7     if Pi[li − 1, ri] ∉ S and Pi[li, ri + 1] ∉ S then
8       output Pi[li, ri]
9   if S ≠ ∅ then
10    AllMaxSafe-BottomUp(M, S)
B.2 The two-pointer algorithm

As we observed in Section 2.2, we can test whether a single path P is safe using one ILP call. We will assume that this test is encapsulated as a procedure IsSafe(M, P). Once we can test whether a single path is safe for M(V, C), we can adopt a standard approach to compute all maximal safe paths. Namely, we start by computing one solution P1, . . . , Pk of M(V, C), and then compute maximal safe paths by a two-pointer technique that, for each path Pi, finds all maximal safe paths with just a linear number of calls to the procedure IsSafe [26].

This works as follows. We use two pointers, a left pointer L and a right pointer R. Initially, L points to the first node of path Pi and R to the second node. As long as the subpath of Pi between L and R is safe, we move the right pointer to the next node on Pi. When this subpath is not safe, we output the subpath between L and the previous location of R as a maximal safe path, and we start moving the left pointer to the next node on Pi, until the subpath between L and R is safe. We stop the procedure once we reach the end of Pi. We summarize this procedure as Algorithm 4; see also Figure 6 for an example.
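A compact sketch of this two-pointer procedure follows. This is an illustrative re-implementation, not the SafeMFD code: `is_safe` is a caller-supplied predicate standing in for the ILP-based IsSafe(M, P), and it is assumed to be subpath-closed (every subpath of a safe path is safe), as holds for safety.

```python
def all_max_safe_two_pointer(path, is_safe):
    """path: a solution path P_i as a list of nodes (>= 2 nodes);
    is_safe: subpath-closed safety predicate on node lists.
    Yields the maximal safe subpaths of `path`, left to right."""
    t = len(path)
    left, right = 0, 2  # window path[left:right] starts with 2 nodes
    while True:
        # Advance the right pointer while the window stays safe.
        while right <= t and is_safe(path[left:right]):
            right += 1
        yield path[left:right - 1]  # last safe window is maximal
        if right > t:
            return
        # Advance the left pointer until the window is safe again.
        while not is_safe(path[left:right]):
            left += 1
```

For instance, if the maximal safe subpaths of (1, 2, 3, 4, 5) are (1, 2, 3) and (2, 3, 4, 5), the generator reports exactly these two windows, each pointer moving only forward.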
[Figure 6 here: three panels showing the flow graph on nodes s, a, b, c, d, e, f, t, the subpath tested between the pointers L and R, and the safety constraints added for it (e.g. xsai + xabi + xbci + xcdi + xdfi ≤ 4, ∀i ∈ {1, . . . , k}); panels: (a) current iteration, (b) right pointer movement, (c) left pointer movement.]
Fig. 6: Illustration of the two-pointer algorithm applied on a flow decomposition path Pi (in orange). In each sub-figure, the subpath P (dashed green) between the nodes pointed to by the left pointer L and the right pointer R is tested for safety, by adding the constraints S(P). In (a), IsSafe(M, P) returns True, and the right pointer advances on Pi. In (b), IsSafe(M, P) returns False, and the previous subpath from (a) is output as a maximal safe path. In (c), the left pointer has advanced, and the new path P is tested for safety.
Algorithm 4: The two-pointer algorithm applied to compute all maximal safe subpaths of a given solution path Pi
Input: An ILP model M and one of its k solution paths, Pi = (v1, . . . , vt), t ≥ 2
Output: All maximal safe subpaths of Pi for M
1 Procedure AllMaxSafe-TwoPointer(M, Pi)
2     L ← 1, R ← 2
3     while True do
4         while R ≤ t and IsSafe(M, Pi[L, R]) do
5             R ← R + 1
6         output Pi[L, R − 1]
7         if R > t then return
8         while not IsSafe(M, Pi[L, R]) do
9             L ← L + 1
Dataset (# Graphs)   Variant        Time (hh:mm:ss)   # ILP calls
Catfish (27,613)     TopDown        01:13:27          124,676
                     BottomUp       03:22:13          212,774
                     TwoPointer     04:21:44          226,365
                     TwoPointerBin  03:31:57          216,540
RefSim (5,808)       TopDown        04:38:41          55,450
                     BottomUp       11:55:20          76,837
                     TwoPointer     13:48:00          127,352
                     TwoPointerBin  11:34:02          119,218

Table 2: Running times and number of ILP calls in four different variants of SafeMFD.
B.3 Running time experiments among different variants proposed

We conducted the experiments on an isolated Linux server with an AMD Ryzen Threadripper PRO 3975WX CPU with 32 cores (64 virtual) and 504 GB of RAM. Time and peak memory usage of each program were measured with the GNU time command. SafeMFD was allowed to run Gurobi with 12 threads. All C++ implementations were compiled with optimization level 3 (-O3 flag). Running time and peak memory are computed and reported per dataset.
SafeMFD includes the following four variants computing maximal safe paths:

TopDown: Implements Algorithm 2 using the group testing in Algorithm 1.
BottomUp: Implements Algorithm 3 (Appendix B.1) using the group testing in Algorithm 1.
TwoPointer: Implements Algorithm 4 (Appendix B.2), the traditional two-pointer algorithm [26].
TwoPointerBin: Same as the previous variant, but it additionally replaces the linear scan employed to extend and reduce the currently processed safe path by a binary search7.
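The linear scan that TwoPointerBin replaces can be sketched as follows (a hedged illustration, not the SafeMFD code: `is_safe` is a hypothetical black-box predicate, and the sketch relies on safety being monotone, i.e. every subpath of a safe path is safe, so the feasible right endpoints form a contiguous range and binary search is valid):

```python
def max_safe_right(path, is_safe, L, lo, hi):
    """Largest index R in [lo, hi] such that path[L..R] is safe,
    assuming path[L..lo] is already known to be safe and safety is
    monotone. Uses O(log(hi - lo)) safety tests where the plain
    two-pointer variant would use a linear scan."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias up so the range always shrinks
        if is_safe(path[L:mid + 1]):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

The same idea applies symmetrically when shrinking the left endpoint of an unsafe subpath.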
To compare our four variants we first ran them all on every dataset, and then filtered out those graphs that ran out of time in some variant. This ensures that no variant consumes its time budget, so our running time measurements are not skewed by the timeouts of the unsuccessful inputs. Applying this filter removed 83 graphs from the Catfish dataset (0.3%) and 4,515 graphs from the RefSim dataset (43.74%).

Table 2 shows the running times and number of ILP calls of the different variants on both datasets. TopDown clearly outperforms the rest, being at least twice as fast and performing roughly half as many ILP calls. While BottomUp is analogous to TopDown, the superiority of the latter can be explained by the length of maximal safe paths: since maximal safe paths are long, it is faster to obtain them by reducing unsafe paths (TopDown) than by extending safe paths (BottomUp and both TwoPointer variants). On the other hand, TwoPointer is the slowest variant, while BottomUp and TwoPointerBin obtain similar improvements (over TwoPointer) by following different strategies. While BottomUp reduces the number of ILP calls more than TwoPointerBin (best appreciated on the RefSim dataset), the ILP calls of BottomUp take longer (since BottomUp tests several paths at the same time and TwoPointerBin only one), and thus the total running times of both are similar. This motivates future work on combining both approaches, while processing the paths starting from unsafe ones (as in TopDown), for better performance.

7 The binary search is only applied if the search space is larger than a constant threshold set experimentally.
C Hardness of Testing MFD Safety

In this section we give a Turing reduction from the UNIQUE 3SAT problem (U3SAT) to the problem of determining if a given path P in a flow network G is safe for minimum flow decomposition (call this problem MFD-SAFETY). A 3SAT instance belongs to U3SAT if and only if it has exactly one satisfying assignment. U3SAT has been shown to be NP-hard under randomized reductions [52], but it is open whether it is NP-hard in general.

The reduction leverages the construction in [22] that reduces 3SAT to minimum flow decomposition. We first briefly review this construction: a variable gadget (see Fig. 4 in [22]) is created for each 3SAT variable x and a clause gadget (see Fig. 5 in [22]) is created for each 3SAT clause. Positive literals in each clause receive flow from the left side of the corresponding variable gadget, whereas negative literals receive flow from the right side. Theorem VI.1 in [22] establishes that a 3SAT instance is satisfiable if and only if the constructed flow network has a minimum flow decomposition of a certain size. Any flow decomposition achieving this size must have a specific structure; in particular, there must be a flow path of weight 4 that travels either up the left side of the gadget (setting x to TRUE) or up the right side (setting x to FALSE).
[Figure 7 here: the variable gadget, drawn with its weight-4 edges between s(x) and t(x).]
Fig. 7: The variable gadget from [22], showing only the weight 4 edges (other edges have weights from {1, 2}). A key property established in [22] is that if the 3SAT instance is satisfiable then in a minimum flow decomposition, a weight 4 flow path must travel up either the left side of the gadget (as shown), or the right side. A left flow path indicates the variable should be set to TRUE, while right indicates FALSE. We leverage this construction to reduce U3SAT to MFD-SAFETY.
Theorem 1. There is a polynomial-time Turing reduction from U3SAT to MFD-SAFETY.

Proof. To obtain the desired Turing reduction, instead of checking the size of the MFD, we sequentially check the MFD-SAFETY of the aforementioned side paths traveling up the left and right sides of each variable gadget. Provided each variable gadget has exactly one safe side path, we can then check the corresponding truth assignment to see if each clause is satisfied. If yes, we accept the instance as belonging to U3SAT; otherwise we reject.

Suppose the instance does belong to U3SAT. In this case there is a satisfying assignment, so the MFD must have the structure described above. Furthermore, since there is exactly one satisfying assignment, exactly one side path of each variable gadget must be safe, and so our algorithm finds it and then verifies that the truth assignment satisfies each clause, thus accepting the instance. On the other hand, if the instance does not belong to U3SAT, it is either unsatisfiable or has multiple satisfying assignments. If unsatisfiable, no matter whether the safety checks pass, the corresponding assignment will not satisfy all clauses, so the instance will be rejected. If there are multiple solutions, then any variable that can be both TRUE and FALSE will not have a safe side path in the MFD. This means the safety check will fail and the instance will again be rejected. ⊓⊔
3NFQT4oBgHgl3EQfGjVY/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
49E1T4oBgHgl3EQf6QXo/content/2301.03522v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80fb761f474b7bc057b81daac9459b1d167947c304674b262e0682e0faf75eec
+ size 133493
49E1T4oBgHgl3EQf6QXo/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d163f32138374882a64edb4aeee2bbfb9ab874421ef743549cd04feca51e68ae
+ size 1507373
49E1T4oBgHgl3EQf6QXo/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f5ebaa8f958ebcfeeb58ae364139552b55ce2fec80bb60c92c237f249c8dba3
+ size 65982
4NAyT4oBgHgl3EQfcPdw/content/tmp_files/2301.00278v1.pdf.txt ADDED
@@ -0,0 +1,685 @@
arXiv:2301.00278v1 [math.CO] 31 Dec 2022

Isometric path antichain covers: beyond hyperbolic graphs∗

Dibyayan Chakraborty†    Florent Foucaud‡

January 3, 2023

Abstract
The isometric path antichain cover number of a graph G, denoted by ipacc(G), is a graph parameter that was recently introduced to provide a constant factor approximation algorithm for Isometric Path Cover, whose objective is to cover all vertices of a graph with a minimum number of isometric paths (i.e. shortest paths between their end-vertices). This parameter was previously shown to be bounded for chordal graphs and, more generally, for graphs of bounded chordality and bounded treelength. In this paper, we show that the isometric path antichain cover number remains bounded for graphs in three seemingly unrelated graph classes, namely, hyperbolic graphs, (theta, prism, pyramid)-free graphs, and outerstring graphs. Hyperbolic graphs are extensively studied in Metric Graph Theory. The class of (theta, prism, pyramid)-free graphs is extensively studied in Structural Graph Theory, e.g. in the context of the Strong Perfect Graph Theorem. The class of outerstring graphs is studied in Geometric Graph Theory and Computational Geometry. Our results imply a constant factor approximation algorithm for Isometric Path Cover on all the above graph classes. Our results also show that the distance functions of these (structurally) different graph classes are more similar than previously thought.
1 Introduction
A path is isometric if it is a shortest path between its endpoints. An isometric path cover of a graph G is a set of isometric paths such that each vertex of G belongs to at least one of the paths. The isometric path number of G is the smallest size of an isometric path cover of G. Given a graph G and an integer k, the objective of the algorithmic problem Isometric Path Cover is to decide if there exists an isometric path cover of cardinality at most k. Isometric Path Cover has been introduced and studied in the context of pursuit-evasion games [1, 2] and used in the context of Product Structure Theorems [15].

The goal of this paper is to continue the study of approximation algorithms for Isometric Path Cover on several graph classes. We do so by continuing the study of a recently introduced graph parameter which seems interesting in its own right, as it encapsulates several previously unrelated graph classes.

Isometric Path Cover has also been studied from a structural point of view: the cardinalities of the optimal solutions have been determined for square grids [17], hypercubes [18], complete r-partite graphs [24] and Cartesian products of complete graphs [24], and it was recently proved that the pathwidth of a graph is always upper-bounded by the size of its smallest isometric path cover [16]. However, until recently the algorithmic aspects of Isometric Path Cover remained unexplored. The problem is easy to solve on trees and, more generally, on block graphs [23], but remains hard on chordal graphs, i.e. graphs without any induced cycle of length at least 4 [7]. It can be approximated in polynomial time within a factor of log(d) for graphs of diameter d by a greedy algorithm [27] and solved in polynomial time for every fixed value of k by an XP algorithm [16]. In a quest to find constant factor approximation algorithms for Isometric Path Cover, Chakraborty et al. [7] introduced a parameter called the isometric path antichain cover number of graphs, denoted by ipacc(G) (see Section 2 for a definition), and proved a result directly implying the following (see [7, Proposition 10]).

∗ This research was partially financed by the IFCAM project “Applications of graph homomorphisms” (MA/IFCAM/18/39), the ANR project GRALMECO (ANR-21-CE48-0004) and the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25).
† Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
‡ Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, 63000 Clermont-Ferrand, France
Proposition 1 ([7]). For a graph G, if ipacc(G) ≤ c, then Isometric Path Cover admits a polynomial-time c-approximation algorithm on G.

Proposition 1 is proved by a simple approximation algorithm described as follows. For each vertex r of the graph, perform a Breadth-First Search at this vertex. Remove edges joining any vertices at the same distance from r, and orient all edges towards r. The resulting directed acyclic graph can be seen as the Hasse diagram of a poset. Compute a chain covering of that poset using classic methods related to Dilworth’s theorem. The chains are the isometric paths of the solution. Keep the smallest of all solutions over all choices of r.
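A minimal sketch of the chain-covering step (an illustration of the classical method, not the authors' code): by Dilworth's theorem, a minimum chain cover of an n-element poset equals n minus a maximum matching in the bipartite graph of comparable pairs, i.e. a minimum path cover of the transitive closure of the DAG. The sketch below returns only the number of chains:

```python
def chain_cover_size(n, arcs):
    """Minimum number of chains covering the poset whose Hasse diagram
    is the DAG on vertices 0..n-1 with directed edges `arcs`."""
    # transitive closure: succ[u] = all vertices reachable from u
    adj = {u: [] for u in range(n)}
    for u, v in arcs:
        adj[u].append(v)
    succ = {u: set() for u in range(n)}
    for s in range(n):
        stack = list(adj[s])
        while stack:
            v = stack.pop()
            if v not in succ[s]:
                succ[s].add(v)
                stack.extend(adj[v])
    # maximum bipartite matching on comparable pairs (Kuhn's algorithm)
    match = {}  # right copy of v -> left copy of u

    def augment(u, seen):
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return n - matched
```

For the three-element chain 0 → 1 → 2 one chain suffices, while three pairwise-incomparable elements need three.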
Using Proposition 1, the above algorithm was shown to be a constant factor approximation algorithm for many graph classes, including interval graphs, chordal graphs, and more generally, graphs with bounded treelength. Indeed, on all these graph classes, the isometric path antichain cover number is shown to be bounded by a constant (note that one does not need to compute this parameter for the algorithm to function: it serves only in the analysis of the approximation ratio of the algorithm). As noted in [7], this parameter may be unbounded on general graphs, for example for the class of hypercubes or square grids.

In this paper, we continue to study the boundedness of the isometric path antichain cover number of various graph classes. Specifically, we consider three structurally unrelated graph classes, namely, hyperbolic graphs, (theta, prism, pyramid)-free graphs, and outerstring graphs, which extends the above work to strictly larger graph classes.
Hyperbolic graphs: A graph G is said to be δ-hyperbolic [19] if for any four vertices u, v, x, y, the two larger of the three distance sums d(u, v) + d(x, y), d(u, x) + d(v, y) and d(u, y) + d(v, x) differ by at most 2δ. A graph class G is hyperbolic if there exists a constant δ such that every graph G ∈ G is δ-hyperbolic. This parameter was first introduced by Gromov in the context of automatic groups [19] in relation with their Cayley graphs. The hyperbolicity of a tree is 0, and in general, hyperbolicity seems to measure how much the distance function of a graph deviates from a tree metric. Many structurally defined graph classes like chordal graphs, cocomparability graphs, asteroidal-triple-free graphs, and graphs with bounded chordality or treelength are hyperbolic [8, 21]. Moreover, hyperbolicity has been found to capture important properties of several large practical graphs such as the Internet [26] or database relations [31]. Due to its importance in discrete mathematics, algorithms, and metric graph theory, researchers have studied various algorithmic aspects of hyperbolic graphs [8, 12, 9, 13]. Note that graphs with diameter 2 are hyperbolic, and they may contain any graph as an induced subgraph.
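The four-point condition above can be checked directly. A brute-force sketch (assuming a precomputed all-pairs distance table; the O(n^4) enumeration is only meant for small graphs):

```python
from itertools import combinations

def hyperbolicity(dist, vertices):
    """Smallest delta such that, for every 4 vertices, the two largest
    of the three pairing sums differ by at most 2*delta."""
    two_delta = 0
    for u, v, x, y in combinations(vertices, 4):
        sums = sorted((dist[u][v] + dist[x][y],
                       dist[u][x] + dist[v][y],
                       dist[u][y] + dist[v][x]))
        two_delta = max(two_delta, sums[2] - sums[1])
    return two_delta / 2
```

On the 4-cycle this returns 1, while any tree metric gives 0, matching the intuition that hyperbolicity measures the deviation from a tree metric.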
(theta, prism, pyramid)-free graphs: A theta is a graph made of three internally vertex-disjoint induced paths P1 = a . . . b, P2 = a . . . b, P3 = a . . . b of lengths at least 2, such that no edges exist between the paths except the three edges incident to a and the three edges incident to b. See Figure 2 for an illustration. A pyramid is a graph made of three induced paths P1 = a . . . b1, P2 = a . . . b2, P3 = a . . . b3, two of which have lengths at least 2, vertex-disjoint except at a, and such that b1b2b3 is a triangle and no edges exist between the paths except those of the triangle and the three edges incident to a. A prism is a graph made of three vertex-disjoint induced paths P1 = a1 . . . b1, P2 = a2 . . . b2, P3 = a3 . . . b3 of lengths at least 1, such that a1a2a3 and b1b2b3 are triangles and no edges exist between the paths except those of the two triangles. A graph G is (theta, pyramid, prism)-free if G does not contain any induced subgraph isomorphic to a theta, pyramid or prism. A graph is a 3-path configuration if it is a theta, pyramid or prism. The study of 3-path configurations dates back to the works of Watkins and Meisner [32] in 1967, and these configurations play “special roles” in the proof of the celebrated Strong Perfect Graph Theorem [10, 14, 28, 30]. Important graph classes like chordal graphs, circular arc graphs, and universally-signable graphs [11] exclude all 3-path configurations. Popular graph classes like perfect graphs and even hole-free graphs exclude some of the 3-path configurations. Note that (theta, prism, pyramid)-free graphs are not hyperbolic: consider a cycle C of order n; clearly, C excludes all 3-path configurations and has hyperbolicity Ω(n).

[Figure 1 here: an inclusion diagram relating bounded isometric path antichain cover number, bounded hyperbolicity, (t-theta, t-prism, t-pyramid)-free, outerstring, circle, (theta, prism, pyramid)-free, universally signable, bounded treelength, bounded chordality, bounded diameter, chordal, AT-free, interval, circular arc, and permutation graphs.]

Figure 1: Inclusion diagram for graph classes discussed here (and related ones). If a class A has an upward path to class B, then A is included in B. For graphs in the gray classes, the complexity of Isometric Path Cover is open; for all other graph classes, it is NP-complete. For all shown graph classes, Isometric Path Cover is constant-factor approximable in polynomial time. Constant factor approximation algorithms for Isometric Path Cover on graph classes marked with * are contributions of this paper.

[Figure 2 here: drawings of (a) a theta, (b) a pyramid, (c) a prism, and (d) a family of outerstrings.]

Figure 2: (a) Theta, (b) Pyramid, (c) Prism, (d) Outerstrings. The figure shows that the graph K2,3, which is also a theta, is an outerstring graph.
Outerstring graphs: A set S of simple curves in the plane is grounded if there exists a horizontal line containing one endpoint of each of the curves in S. A graph G is an outerstring graph if there is a collection C of grounded simple curves and a bijection between V(G) and C such that two curves in C intersect if and only if the corresponding vertices are adjacent in G. See Figure 2(d) for an illustration. The term “outerstring graph” was first used in the early 90’s [22] in the context of studying intersection graphs of simple curves in the plane. Many well-known graph classes like chordal graphs, circular arc graphs, circle graphs (intersection graphs of chords of a circle), and cocomparability graphs are also outerstring graphs, which has motivated researchers from the geometric graph theory and computational geometry communities to study algorithmic and structural aspects of outerstring graphs and their subclasses [4, 5, 6, 20, 25]. Note that, in general, outerstring graphs may contain a prism, pyramid or theta as an induced subgraph. Moreover, cycles of arbitrary order are outerstring graphs, implying that outerstring graphs are not hyperbolic.

It is clear from the above discussion that the classes of hyperbolic graphs, (theta, prism, pyramid)-free graphs, and outerstring graphs are pairwise incomparable (with respect to the containment relationship).
1.1 Our contributions

The main contribution of this paper is to show that the isometric path antichain cover number (see Section 2 for a definition) remains bounded on hyperbolic graphs, (theta, pyramid, prism)-free graphs, and outerstring graphs. Specifically, we prove the following theorems.

Theorem 2. Let G be a graph with hyperbolicity δ. Then, ipacc(G) ≤ 12δ + 6.

Theorem 3. Let G be a (theta, pyramid, prism)-free graph. Then, ipacc(G) ≤ 71.

Theorem 4. Let G be an outerstring graph. Then, ipacc(G) ≤ 95.

To the best of our knowledge, the isometric path antichain cover number being bounded (by constant(s)) is the only known non-trivial property shared by any two or all three of these graph classes. To provide a unified proof of Theorems 3 and 4, we study a more general graph class called (t-theta, t-pyramid, t-prism)-free graphs [29] (see Section 4 for a definition). When t = 1, (t-theta, t-pyramid, t-prism)-free graphs are exactly (theta, prism, pyramid)-free graphs. Moreover, we show that all outerstring graphs are (4-theta, 4-pyramid, 4-prism)-free graphs (Lemma 16). We prove the following.

Theorem 5. For t ≥ 1, let G be a (t-theta, t-pyramid, t-prism)-free graph. Then ipacc(G) ≤ 8t + 63.

Due to Proposition 1 and the above theorems, we also have the following corollary.

Corollary 6. There is an approximation algorithm for Isometric Path Cover with approximation ratio
(a) 12δ + 6 on δ-hyperbolic graphs,
(b) 73 on (theta, prism, pyramid)-free graphs,
(c) 95 on outerstring graphs, and
(d) 8t + 63 on (t-theta, t-pyramid, t-prism)-free graphs.

Organisation: In Section 2, we recall some definitions and results. In Section 3, we prove Theorem 2. In Section 4, we prove Theorems 3 and 5. In Section 5, we prove Theorem 4. We conclude in Section 6.
2 Definitions and preliminary observations

In this section, we formally recall the definition of the isometric path antichain cover number of graphs from [7] and some related observations. A sequence of distinct vertices forms a path P if any two consecutive vertices are adjacent. Whenever we fix a path P of G, we shall refer to the subgraph formed by the edges between the consecutive vertices of P. The length of a path P, denoted by |P|, is the number of its vertices minus one. A path is induced if there are no graph edges joining non-consecutive vertices. In a directed graph, a directed path is a path in which all arcs are oriented in the same direction. For a path P of a graph G between two vertices u and v, the vertices of V(P) \ {u, v} are the internal vertices of P. A path between two vertices u and v is called a (u, v)-path. Similarly, we have the notions of isometric (u, v)-path and induced (u, v)-path. For a vertex r of G and a set S of vertices of G, the distance of S from r, denoted as d(r, S), is the minimum of the distances between any vertex of S and r. For a subgraph H of G, the distance of H w.r.t. r is d(r, V(H)). Formally, we have d(r, S) = min{d(r, v) : v ∈ S} and d(r, H) = d(r, V(H)).
For a graph G and a vertex r ∈ V(G), consider the following operations on G. First, remove all edges xy from G such that d(r, x) = d(r, y). Let G′_r be the resulting graph. Then, for each edge e = xy ∈ E(G′_r) with d(r, x) = d(r, y) − 1, orient e from y to x. Let −→Gr be the directed acyclic graph formed after applying the above operations on G′_r. Note that this digraph can easily be computed in linear time using a Breadth-First Search (BFS) traversal with starting vertex r.
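A minimal sketch of this construction (assuming an unweighted connected graph given as an adjacency-list dictionary; the names are illustrative):

```python
from collections import deque

def build_dag(adj, r):
    """BFS from r; edges between equidistant vertices are dropped, and
    every remaining edge is oriented from its farther endpoint to its
    closer one, i.e. towards r. Returns the distance map and arc list."""
    dist = {r: 0}
    queue = deque([r])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    arcs = [(y, x) for x in adj for y in adj[x] if dist[y] == dist[x] + 1]
    return dist, arcs
```

On the 4-cycle 0–1–2–3 rooted at 0, no two adjacent vertices are equidistant, so all four edges survive and point towards 0.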
The following definition is inspired by the terminology of posets (as the graph −→Gr can be seen as the Hasse diagram of a poset).

Definition 7. For a graph G and a vertex r ∈ V(G), two vertices x, y ∈ V(G) are antichain vertices if there are no directed paths from x to y or from y to x in −→Gr. A set X of vertices of G is an antichain set if any two vertices in X are antichain vertices. The cardinality of the largest antichain set in −→Gr will be denoted by β(−→Gr). The cardinality of the largest antichain set of G is defined as

    β(G) = min { β(−→Gr) : r ∈ V(G) }
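Definition 7 can be tested directly by reachability; a sketch (assuming the DAG −→Gr is given as a list of arcs; plain DFS from each vertex of the candidate set):

```python
def is_antichain(arcs, X):
    """True iff no vertex of X can reach another vertex of X by a
    directed path in the DAG with edge list `arcs`."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    targets = set(X)
    for s in X:
        stack, seen = list(adj.get(s, [])), set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            if v in targets and v != s:
                return False
            stack.extend(adj.get(v, []))
    return True
```

For instance, two vertices at the same BFS level (with no directed path between them) form an antichain, while a vertex and one of its descendants do not.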
Definition 8 ([7]). Let r be a vertex of a graph G. For a subgraph H, Ar(H) shall denote the maximum antichain set of H in −→Gr. The isometric path antichain cover number of −→Gr, denoted by ipacc(−→Gr), is defined as follows:

    ipacc(−→Gr) = max { |Ar(P)| : P is an isometric path }

The isometric path antichain cover number of a graph G, denoted as ipacc(G), is defined as the minimum over all possible antichain covers of its associated directed acyclic graphs:

    ipacc(G) = min { ipacc(−→Gr) : r ∈ V(G) }
We recall the proof of the following proposition from [7], which will be used heavily in this paper.

Proposition 9 ([7]). Let G be a graph and r an arbitrary vertex of G. Consider the directed acyclic graph −→Gr, and let P be an isometric path between two vertices x and y in G. Then |P| ≥ |d(r, x) − d(r, y)| + |Ar(P)| − 1.

Proof. Orient the edges of P from y to x in G. First, observe that P must contain a set E1 of oriented edges such that |E1| = |d(r, y) − d(r, x)| and for any −→ab ∈ E1, d(r, a) = d(r, b) + 1. Let the vertices of the largest antichain set of P in −→Gr, i.e., Ar(P), be ordered as a1, a2, . . . , at according to their occurrence while traversing P from y to x. For i ∈ [2, t], let Pi be the subpath of P between ai−1 and ai. Observe that for any i ∈ [2, t], since ai and ai−1 are antichain vertices, there must exist an oriented edge −→bici ∈ E(Pi) such that either d(r, bi) = d(r, ci) or d(r, bi) = d(r, ci) − 1. Let E2 = {bici}i∈[2,t]. Observe that E1 ∩ E2 = ∅ and therefore |P| ≥ |E1| + |E2| = |d(r, y) − d(r, x)| + |Ar(P)| − 1.
3 Proof of Theorem 2

In this section, we shall show that the isometric path antichain cover number of graphs with hyperbolicity at most δ is at most 12δ + 6. To achieve our goal we need to recall a few definitions from the literature. For three vertices x, y, z of a graph G, a geodesic triangle [3], denoted as ∆(x, y, z), is the union P(x, y) ∪ P(y, z) ∪ P(x, z) of three isometric paths connecting these vertices. A geodesic triangle ∆(x, y, z) is called ρ-slim if for any vertex u ∈ P(x, y) the distance d(u, P(y, z) ∪ P(x, z)) is at most ρ. The smallest value of ρ for which every geodesic triangle of G is ρ-slim is called the slimness of G and is denoted by sl(G). In the following lemma, we shall show that if the isometric path antichain cover number of a graph is large, then so is the slimness of the graph.

Lemma 10. For any graph G, ipacc(G) ≤ 4 sl(G) + 2.
[Figure 3 here: a 4-fat turtle, with labelled vertices u, v, c, c′.]

Figure 3: An example of a 4-fat turtle. Let C be the cycle induced by the black vertices and P the path induced by the white vertices. Then the tuple (4, C, P, c, c′) defines a 4-fat turtle.
Proof. Let ρ = sl(G). Aiming for a contradiction, let r be a vertex of G such that there exists an isometric path P with |Ar(P)| ≥ 4ρ + 3. Let the vertices of Ar(P) be named and ordered as a1, a2, . . . , a2ρ+2, . . . , a4ρ+3 as they are encountered while traversing P from one end-vertex to the other. Let x = a1 and y = a4ρ+3. Let −→Px be an oriented path from x to r in −→Gr. Observe that Px, the path of G obtained by removing the orientation of −→Px, is an (x, r)-isometric path. Let −→Py be an oriented path from y to r in −→Gr. Similarly, Py, the path of G obtained by removing the orientation of −→Py, is a (y, r)-isometric path. Observe that P, Px, Py form a geodesic triangle with x, r, y as end-vertices. Consider the vertex z = a2ρ+2 on the path P. Since ρ = sl(G), there exists a vertex w ∈ V(Px) ∪ V(Py) such that d(w, z) ≤ ρ. Without loss of generality, assume w ∈ V(Px). Then, d(x, z) ≤ d(x, w) + d(w, z). Using that d(r, z) ≤ d(r, w) + d(w, z) ≤ d(r, w) + ρ, we get d(x, z) ≤ |d(r, x) − d(r, z)| + 2ρ. But this contradicts Proposition 9, by which we have d(x, z) ≥ |d(r, x) − d(r, z)| + 2ρ + 1.

Now we shall use the following result.

Proposition 11 ([3]). For any graph G, sl(G) ≤ 3 hb(G).

Proposition 11 and Lemma 10 imply the theorem.
4 Proofs of Theorems 3 and 5

In this section, we shall prove Theorems 3 and 5. First we define the notions of t-theta, t-prism, and t-pyramid [29].

For an integer t ≥ 1, a t-prism is a graph made of three vertex-disjoint induced paths P1 = a1 . . . b1, P2 = a2 . . . b2, P3 = a3 . . . b3 of lengths at least t, such that a1a2a3 and b1b2b3 are triangles and no edges exist between the paths except those of the two triangles. For an integer t ≥ 1, a t-pyramid is a graph made of three induced paths P1 = a . . . b1, P2 = a . . . b2, P3 = a . . . b3 of lengths at least t, two of which have lengths at least t + 1; the paths are pairwise vertex-disjoint except at a, b1b2b3 is a triangle, and no edges exist between the paths except those of the triangle and the three edges incident to a. For an integer t ≥ 1, a t-theta is a graph made of three internally vertex-disjoint induced paths P1 = a . . . b, P2 = a . . . b, P3 = a . . . b of lengths at least t + 1, such that no edges exist between the paths except the three edges incident to a and the three edges incident to b. A graph G is (t-theta, t-pyramid, t-prism)-free if G does not contain any induced subgraph isomorphic to a t-theta, t-pyramid or t-prism. When t = 1, (t-theta, t-pyramid, t-prism)-free graphs are exactly (theta, prism, pyramid)-free graphs.
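These configurations are easy to generate programmatically. The sketch below (our own helper, not from [29]) builds the adjacency list of a t-theta whose three internally disjoint (a, b)-paths all have length exactly t + 1, the minimum allowed by the definition.

```python
def t_theta(t):
    """Adjacency list of a t-theta: three internally vertex-disjoint
    (a, b)-paths, each with t internal vertices, i.e. length t + 1."""
    adj = {"a": [], "b": []}
    for branch in range(3):
        prev = "a"
        for i in range(t):            # t internal vertices per branch
            v = (branch, i)
            adj[v] = [prev]
            adj[prev].append(v)
            prev = v
        adj[prev].append("b")
        adj["b"].append(prev)
    return adj

G = t_theta(2)
print(len(G), len(G["a"]), len(G["b"]))  # -> 8 3 3
```

A t-theta thus has 3t + 2 vertices; a and b have degree 3 and every internal vertex has degree 2, which makes the structure easy to test for.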
342
Now we shall show that the isometric path antichain cover number of (t-theta, t-pyramid, t-prism)-free graphs is bounded above by a linear function of t. We shall show that, when the isometric path antichain cover number of a graph is large, the existence of a structure called a "t-fat turtle" (defined below) as an induced subgraph is forced; such a structure cannot be present in a ((t − 1)-theta, (t − 1)-pyramid, (t − 1)-prism)-free graph.
347
Definition 12. For an integer t ≥ 1, a "t-fat turtle" consists of a cycle C and an induced (u, v)-path P of length at least t such that all of the following hold.

(a) V(P) ∩ V(C) = ∅.

(b) For any vertex w ∈ V(P) \ {u, v}, N(w) ∩ V(C) = ∅, and both u and v have at least one neighbour in C.

(c) For any vertex w ∈ N(u) ∩ V(C) and w′ ∈ N(v) ∩ V(C), the distance between w and w′ in C is at least t.

(d) There exist two vertices {c, c′} ⊂ V(C) and two distinct components Cu, Cv of C − {c, c′} such that N(u) ∩ V(C) ⊆ V(Cu) and N(v) ∩ V(C) ⊆ V(Cv).

The tuple (t, C, P, c, c′) defines the t-fat turtle. See Figure 3 for an example.
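Conditions (a)–(d) are directly checkable on explicit witnesses. The sketch below (hypothetical helper names; it trusts that C is an induced cycle listed in cyclic order and that P is an induced path listed from u to v, and does not re-verify those two facts) tests Definition 12:

```python
def cyc_dist(C, a, b):
    """Distance between a and b inside the cycle C (an ordered vertex list)."""
    d = abs(C.index(a) - C.index(b))
    return min(d, len(C) - d)

def is_t_fat_turtle(adj, t, C, P, c, cp):
    """Check conditions (a)-(d) of Definition 12 for given witnesses."""
    u, v = P[0], P[-1]
    Cs = set(C)
    Nu, Nv = set(adj[u]) & Cs, set(adj[v]) & Cs
    if set(P) & Cs or len(P) - 1 < t:                              # (a) + length
        return False
    if any(set(adj[w]) & Cs for w in P[1:-1]) or not (Nu and Nv):  # (b)
        return False
    if any(cyc_dist(C, x, y) < t for x in Nu for y in Nv):         # (c)
        return False
    i, j = sorted((C.index(c), C.index(cp)))                       # (d)
    arc1 = set(C[i + 1:j])
    arc2 = Cs - arc1 - {c, cp}
    return (Nu <= arc1 and Nv <= arc2) or (Nu <= arc2 and Nv <= arc1)

# Toy example: an 8-cycle, a path u-m-v, u attached at 1 and v at 5.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
adj.update({"u": [1, "m"], "m": ["u", "v"], "v": ["m", 5]})
adj[1].append("u"); adj[5].append("v")
print(is_t_fat_turtle(adj, 2, list(range(8)), ["u", "m", "v"], 0, 4))  # -> True
```

In the toy example, c = 0 and c′ = 4 split the cycle into the arcs {1, 2, 3} and {5, 6, 7}, which separate the neighbourhoods of u and v as condition (d) requires.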
359
In the following lemma, we show that a (t-theta, t-pyramid, t-prism)-free graph cannot contain a (t + 1)-fat turtle as an induced subgraph.

Lemma 13. For some integer t ≥ 1, let G be a graph containing a (t + 1)-fat turtle as an induced subgraph. Then G is not (t-theta, t-pyramid, t-prism)-free.

Proof. Let (t + 1, C, P, c, c′) be a (t + 1)-fat turtle in G. Let the vertices of C be named c = a0, a1, . . . , ak = c′, ak+1, . . . , a|V(C)| as they are encountered while traversing C starting from c in a counter-clockwise manner. Denote by u, v the end-vertices of P. By definition, there exist two distinct components Cu, Cv of C − {c, c′} such that N(u) ∩ V(C) ⊆ V(Cu) and N(v) ∩ V(C) ⊆ V(Cv). Without loss of generality, assume V(Cu) = {a1, a2, . . . , ak−1} and V(Cv) = {ak+1, ak+2, . . . , a|V(C)|}. Let i− and i+ be the minimum and maximum indices such that ai− and ai+ are adjacent to u. Let j− and j+ be the minimum and maximum indices such that aj− and aj+ are adjacent to v. By definition, i− ≤ i+ < j− ≤ j+. Let P1 be the (ai−, aj+)-subpath of C containing c. Let P2 be the (ai+, aj−)-subpath of C that contains c′. Observe that P1 and P2 have length at least t (by definition). Now we show that P, P1, P2 together form one of a t-theta, t-pyramid or t-prism. If ai− = ai+ and aj− = aj+, then P, P1, P2 form a t-theta. If i− ≤ i+ − 2 and j− ≤ j+ − 2, then also P, P1, P2 form a t-theta. If j− = j+ − 1 and i− = i+ − 1, then P, P1, P2 form a t-prism. In any other case, P, P1, P2 form a t-pyramid.
375
In the remainder of this section, we shall prove that there exists a linear function f(t) such that if the isometric path antichain cover number of a graph G is more than f(t), then G is forced to contain a (t + 1)-fat turtle as an induced subgraph, and therefore is not (t-theta, t-pyramid, t-prism)-free. We shall use the following observation.

Observation 14. Let G be a graph, r be an arbitrary vertex, P be an isometric (u, v)-path in G and Q be a subpath of an isometric (v, r)-path in G such that one endpoint of Q is v. Let P′ be the maximum (u, w)-subpath of P such that no internal vertex of P′ is a neighbour of some vertex of Q. We have that |Ar(P′)| ≥ |Ar(P)| − 3.

Proof. Suppose |Ar(P′)| ≤ |Ar(P)| − 4 and consider the (w, v)-subpath, say P′′, of P. Observe that |Ar(P′′)| ≥ 4. Now let w′ be a vertex of Q which is a neighbour of w. Observe that |d(r, w) − d(r, w′)| ≤ 1 and therefore d(w, v) = |E(P′′)| ≤ |d(r, w) − d(r, v)| + 2. But this contradicts Proposition 9, which implies that the length of P′′ is at least |d(r, w) − d(r, v)| + 3.
387
Lemma 15. For an integer t ≥ 1, let G be a graph with ipacc(G) ≥ 8t + 64. Then G has a (t + 1)-fat turtle as an induced subgraph.

Proof. Let r be a vertex of G such that ipacc(−→Gr) is at least 8t + 64. Then there exists an isometric path P such that |Ar(P)| ≥ 8t + 64. Let the two endpoints of P be a and b. (See Figure 4.) Let u be a vertex of P such that d(r, u) = d(r, P). Let Pau be the (a, u)-subpath of P and Pbu be the (b, u)-subpath of P. Both Pau and Pbu are isometric paths, and either |Ar(Pau)| ≥ 4t + 32 or |Ar(Pbu)| ≥ 4t + 32. Without loss of generality, assume that |Ar(Pbu)| ≥ 4t + 32. Let Q^r_b be an isometric (b, r)-path in G.
400
Figure 4: Illustration of the notations used in the proof of Lemma 15 (vertex labels r, u, z, z1, z2, w, w1, w2, a, b, x, c = a2t+13, c1, c2, the subpath T(c1, c2) of length at least t, and the isometric paths Q^r_b and Q^r_u).
425
Let Ruw be the maximum (u, w)-subpath of Pbu such that no internal vertex of Ruw is a neighbour of a vertex of Q^r_b. Note that Ruw is an isometric path and w has a neighbour in Q^r_b. Applying Observation 14, we have the following:

Claim 15.1. |Ar(Ruw)| ≥ 4t + 29.

Let Q^r_u be any isometric (u, r)-path of G and let Rzw be the maximum (z, w)-subpath of Ruw such that no internal vertex of Rzw has a neighbour in Q^r_u. Observe that Rzw is an isometric path, and z has a neighbour in Q^r_u. Again applying Observation 14, we have the following:

Claim 15.2. |Ar(Rzw)| ≥ 4t + 26.
438
Let a1, a2, . . . , ak be the vertices of Ar(Rzw), ordered according to their appearance while traversing Rzw from z to w. Due to Claim 15.2, we have k ≥ 4t + 26. Let c = a2t+13 and let Q^r_c denote an isometric (c, r)-path. Let T(r, c1) be the maximum subpath of Q^r_c such that no internal vertex of T(r, c1) is adjacent to any vertex of Rzw.

Claim 15.3. Let x be a neighbour of c1 in Rzw, X be the (x, b)-subpath of Pbu and Y be the (x, u)-subpath of Pbu. Then |Ar(X)| ≥ 2t + 11 and |Ar(Y)| ≥ 2t + 11.

Proof. Let Rcw denote the (c, w)-subpath of Rzw. Observe that |Ar(Rcw)| ≥ 2t + 14. First, consider the case when x lies in the (z, c)-subpath of Rzw. In this case, Rcw is a subpath of X and therefore |Ar(X)| ≥ 2t + 14. Now consider the case when x lies in Rcw. In this case, applying Observation 14, we have |Ar(X)| ≥ |Ar(Rcw)| − 3 ≥ 2t + 11. Using a similar argument, we have |Ar(Y)| ≥ 2t + 11.
451
Let T(c1, c2) be the maximum (c1, c2)-subpath of T(c1, r) such that no internal vertex of T(c1, c2) is adjacent to a vertex of Q^r_b or Q^r_u. We have the following claim.

Claim 15.4. The length of T(c1, c2) is at least t + 3.

Proof. Assume that the length of T(c1, c2) is at most t + 2, and let x be a neighbour of c1 in Rzw. Observe that all vertices of Rzw are at distance at least d(r, u) from r, i.e. d(r, Rzw) ≥ d(r, u), since d(r, u) = d(r, P). Hence,

(+) d(r, x) ≥ d(r, u) and d(r, c1) ≥ d(r, u) − 1.

Now, suppose c2 has a neighbour c3 in Q^r_u. Then d(c3, x) ≤ d(c3, c2) + d(c2, c1) + d(c1, x) ≤ t + 4. Using (+) and the fact that c3 lies on an isometric (r, u)-path (Q^r_u), we have d(c3, u) ≤ t + 4. Therefore, d(u, x) ≤ d(c3, u) + d(c3, x) ≤ 2t + 8. But this contradicts Proposition 9 and Claim 15.3, as they together imply that d(u, x) is at least d(r, x) − d(r, u) + 2t + 10 ≥ 2t + 10.

Hence, c2 must have a neighbour c3 in Q^r_b. First, assume that d(r, x) ≥ d(r, b). Then, as d(c3, x) ≤ d(c3, c2) + d(c2, c1) + d(c1, x) ≤ t + 4 and c3 lies on an isometric (r, b)-path (Q^r_b), we have d(x, b) ≤ 2t + 8. But again this contradicts Proposition 9 and Claim 15.3, as they together imply that d(x, b) is at least d(r, x) − d(r, u) + 2t + 10. Now, assume that d(r, x) < d(r, b). Let b′ be a vertex of Q^r_b such that d(r, b′) = d(r, x). Using a similar argument as before, we have d(x, b′) ≤ 2t + 8. Hence, d(x, b) ≤ d(x, b′) + d(b′, b) ≤ d(r, b) − d(r, x) + 2t + 8. But this contradicts Proposition 9 which, due to Claim 15.3, implies that d(x, b) ≥ d(r, b) − d(r, x) + 2t + 10.
478
The path T(c1, c2) forms the first ingredient to extract a (t + 1)-fat turtle. Let z1 be the neighbour of z in Q^r_u and w1 be the neighbour of w in Q^r_b. We have the following claim.

Claim 15.5. The vertices w1 and z1 are non-adjacent.

Proof. Recall that z1 lies in Q^r_u and d(r, z) ≥ d(r, u); hence z1 must be a neighbour of u. If w1 and z1 are adjacent, then observe that d(u, b) ≤ d(r, b) − d(r, w1) + 2, which implies d(u, b) ≤ d(r, b) − d(r, u) + 3. But this again contradicts Proposition 9.
487
Now we shall construct a (w1, z1)-path as follows. Consider the maximum (w1, w2)-subpath, say T(w1, w2), of Q^r_b such that no internal vertex of T(w1, w2) has a neighbour in Q^r_u. Similarly, consider the maximum (z1, z2)-subpath, say T(z1, z2), of Q^r_u such that no internal vertex of T(z1, z2) is a neighbour of w2. Let T be the path obtained by taking the union of T(w1, w2) and T(z1, z2). Observe that z2 must be a neighbour of w2 and T is an induced (w1, z1)-path. The definitions of T and Rzw imply that their union induces a cycle Z. Here we have the second and final ingredient to extract the (t + 1)-fat turtle.
496
Suppose that c2 has a neighbour in T. Let T′ be the maximum subpath of T(c1, c2) which is vertex-disjoint from Z. Due to Claim 15.4, the length of T′ is at least t + 1. Let e1 and e2 be the end-vertices of T′. Observe the following.

• Each of e1 and e2 has at least one neighbour in Z.

• Z − {z, w} contains two distinct components C1, C2 such that for i ∈ {1, 2}, N(ei) ∩ V(Z) ⊆ V(Ci).

• For a vertex e′1 ∈ N(e1) ∩ V(Z) and e′2 ∈ N(e2) ∩ V(Z), the distance between e′1 and e′2 is at least t + 1. This statement follows from Claim 15.3.
507
Hence, we have that the tuple (t + 1, Z, T′, z, w) defines a (t + 1)-fat turtle. Now consider the case when c2 does not have a neighbour in T. By definition, c2 has at least one neighbour in Q^r_u or Q^r_b. Without loss of generality, assume that c2 has a neighbour c3 in Q^r_u such that the (z2, c3)-subpath, say T′′, of Q^r_u has no neighbour of c2 other than c3. Observe that the path T∗ = T′ ∪ (T′′ − {z2}) is vertex-disjoint from Z and has length at least t + 1. Let e1, e2 be the two end-vertices of T∗. Observe the following.

• Each of e1 and e2 has at least one neighbour in Z.

• Z − {z, w} contains two distinct components C1, C2 such that for i ∈ {1, 2}, N(ei) ∩ V(Z) ⊆ V(Ci).

• For a vertex e′1 ∈ N(e1) ∩ V(Z) and e′2 ∈ N(e2) ∩ V(Z), the distance between e′1 and e′2 is at least t + 1. This statement follows from Claim 15.3.

Hence, (t + 1, Z, T∗, z, w) is a (t + 1)-fat turtle.

Proof of Theorems 3 and 5: Lemmas 13 and 15 together imply the theorems.
527
5 Proof of Theorem 4

Next we shall show that outerstring graphs are (4-theta, 4-prism, 4-pyramid)-free.

Lemma 16. Let G be an outerstring graph. Then G is (4-theta, 4-prism, 4-pyramid)-free.

Proof. To prove the lemma, we shall need to recall a few definitions and results from the literature. A graph G is a string graph if there is a collection S of simple curves in the plane and a bijection between V(G) and S such that two curves in S intersect if and only if the corresponding vertices are adjacent in G. For a graph G with an edge e, the graph G \ e is obtained by contracting the edge e into a single vertex. Observe that string graphs are closed under edge contraction [22]. We shall use the following result.

Proposition 17 ([22]). Let G be an outerstring graph with an edge e. Then G \ e is an outerstring graph.
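The contraction operation G \ e used in Proposition 17 can be sketched on adjacency lists as follows (a generic illustration, with simple-graph semantics: loops and parallel edges created by the merge are dropped):

```python
def contract_edge(adj, u, v):
    """Contract edge uv into u: the merged vertex keeps the union of both
    neighbourhoods; loops and parallel edges are discarded."""
    merged = (set(adj[u]) | set(adj[v])) - {u, v}
    new = {x: set(ns) for x, ns in adj.items() if x != v}
    new[u] = set(merged)
    for x in new:
        new[x].discard(v)        # v no longer exists
        if x in merged:
            new[x].add(u)        # former neighbours of v now see u
    return {x: sorted(ns, key=str) for x, ns in new.items()}

# Contracting one edge of a triangle leaves a single edge.
tri = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(contract_edge(tri, 1, 2))  # -> {1: [3], 3: [1]}
```

Contracting, in this way, every edge whose endpoints lie in a triangle is exactly the step that turns the 4-theta, 4-pyramid or 4-prism H into the 3-theta H1 in the proof below.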
539
A full subdivision of a graph G is obtained by replacing each edge of G with a new path of length at least two. We shall use the following result, implied by Theorem 1 of [22].

Proposition 18 ([22]). Let G be a string graph. Then G does not contain a full subdivision of K3,3 as an induced subgraph.

For a graph G, the graph G+ is constructed by introducing a new apex vertex a and connecting a with all vertices of G by new copies of paths of length at least 2. We shall use the following result of Biedl et al. [4].

Proposition 19 (Lemma 1, [4]). A graph G is an outerstring graph if and only if G+ is a string graph.

Now we are ready to prove the lemma. Let G be an outerstring graph. Assume for the sake of contradiction that G contains an induced subgraph H which is a 4-theta, 4-pyramid, or a 4-prism. Since every induced subgraph of an outerstring graph is also an outerstring graph, H is an outerstring graph. Let E be the set of edges of H whose both endpoints are part of some triangle. Now consider the graph H1 = H \ E, obtained by contracting all edges in E. By Proposition 17, H1 is an outerstring graph, and it is easy to check that H1 is a 3-theta. Let u and v be the vertices of H1 with degree 3, and let w1, w2, w3 be mutually non-adjacent vertices such that for each i ∈ {1, 2, 3}, d(u, wi) = 2 and d(v, wi) ≥ 2. Since H1 is a 3-theta, w1, w2, w3 exist. Now consider the graph H1+ and let a be the new apex vertex. Due to Proposition 19, H1+ is a string graph. But notice that for each pair of vertices x ∈ {u, v, a} and y ∈ {w1, w2, w3}, there exists a unique path of length at least 2 connecting x and y. This implies that H1+ (which is a string graph) contains a full subdivision of K3,3, which contradicts Proposition 18.

Proof of Theorem 4: Lemma 16 and Theorem 5 together imply the theorem.
564
6 Conclusion

In this paper, we derived upper bounds on the isometric path antichain cover number of three seemingly (structurally) different classes of graphs, namely hyperbolic graphs, (theta, pyramid, prism)-free graphs and outerstring graphs. We have not made any effort to reduce the constants in our bounds. In particular, we believe that a careful analysis of the structure of outerstring graphs would help in reducing their isometric path antichain cover number. (Note that outerstring graphs may contain a theta, 2-pyramid or a 2-prism.) We note that the isometric path antichain cover number of an (n × n)-grid is Ω(n), which implies that the isometric path antichain cover number of planar graphs (which are also string graphs) is not bounded. Similarly, we note that the isometric path antichain cover numbers of G1, G2 and G3 are unbounded, where G1 denotes the class of (theta, prism)-free graphs, G2 denotes the class of (prism, pyramid)-free graphs and G3 denotes the class of (theta, pyramid)-free graphs. An interesting direction of research is to generalise the properties of hyperbolic graphs to graphs with bounded isometric path antichain cover number.

We also note that recognizing graphs with a given value of isometric path antichain cover number might be computationally hard. This problem does not seem to be in NP: to certify that a graph has isometric path antichain cover number at most k, (intuitively) one would need to check, for all possible isometric paths, that they do not contain any antichain of size k + 1 (with respect to all possible roots r). On the contrary, it is in coNP: to certify that the isometric path antichain cover number is not at most k, one may exhibit, for every possible root r, one isometric path and one antichain of size k + 1 contained in the path. Checking the validity of this certificate can be done in polynomial time. We do not know if the problem is coNP-hard. Nevertheless, this parameter seems interesting from a structural graph theory point of view, since it encapsulates several seemingly unrelated graph classes with, as a consequence, common algorithmic behaviours of these classes (recall that the value of the parameter does not need to be computed for the approximation algorithm to work). Using our framework, perhaps other common properties of these classes could be exhibited?

Our results imply a constant factor approximation algorithm for Isometric Path Cover on hyperbolic graphs, (theta, pyramid, prism)-free graphs and outerstring graphs. However, the existence of a constant factor approximation algorithm for Isometric Path Cover on general graphs is not known (it was observed that the algorithm from [7], also used here, can have non-constant approximation ratios, for example on hypercube graphs, whose isometric path antichain cover numbers are unbounded). Polynomial-time solvability of Isometric Path Cover on restricted graph classes like split graphs, interval graphs, planar graphs etc. also remains unknown; see [7].

Acknowledgement: We thank Nicolas Trotignon for suggesting that we study the class of (t-theta, t-pyramid, t-prism)-free graphs.
601
References

[1] I. Abraham, C. Gavoille, A. Gupta, O. Neiman, and K. Talwar. Cops, robbers, and threatening skeletons: Padded decomposition for minor-free graphs. SIAM Journal on Computing, 48(3):1120–1145, 2019.

[2] M. Aigner and M. Fromme. A game of cops and robbers. Discrete Applied Mathematics, 8(1):1–12, 1984.

[3] J. M. Alonso, T. Brady, D. Cooper, V. Ferlini, M. Lustig, M. Mihalik, M. Shapiro, and H. Short. Notes on word hyperbolic groups. In Group theory from a geometrical viewpoint. 1991.

[4] T. Biedl, A. Biniaz, and M. Derka. On the size of outer-string representations. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018), 2018.

[5] P. Bose, P. Carmi, J. M. Keil, A. Maheshwari, S. Mehrabi, D. Mondal, and M. Smid. Computing maximum independent set on outerstring graphs and their relatives. Computational Geometry, 103:101852, 2022.

[6] J. Cardinal, S. Felsner, T. Miltzow, C. Tompkins, and B. Vogtenhuber. Intersection graphs of rays and grounded segments. In International Workshop on Graph-Theoretic Concepts in Computer Science, pages 153–166. Springer, 2017.

[7] D. Chakraborty, A. Dailly, S. Das, F. Foucaud, H. Gahlawat, and S. K. Ghosh. Complexity and algorithms for Isometric Path Cover on chordal graphs and beyond. In Proceedings of the 33rd International Symposium on Algorithms and Computation, ISAAC, volume 248, pages 12:1–12:17, 2022.

[8] V. Chepoi, F. Dragan, B. Estellon, M. Habib, and Y. Vaxès. Diameters, centers, and approximating trees of δ-hyperbolic geodesic spaces and graphs. In Proceedings of the twenty-fourth annual symposium on Computational geometry, pages 59–68, 2008.

[9] V. Chepoi, F. F. Dragan, and Y. Vaxès. Core congestion is inherent in hyperbolic networks. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2264–2279. SIAM, 2017.

[10] M. Chudnovsky, N. Robertson, P. Seymour, and R. Thomas. The strong perfect graph theorem. Annals of Mathematics, pages 51–229, 2006.

[11] M. Conforti, G. Cornuéjols, A. Kapoor, and K. Vušković. Universally signable graphs. Combinatorica, 17(1):67–77, 1997.

[12] D. Coudert, A. Nusser, and L. Viennot. Enumeration of far-apart pairs by decreasing distance for faster hyperbolicity computation. arXiv preprint arXiv:2104.12523, 2021.

[13] B. Das Gupta, M. Karpinski, N. Mobasheri, and F. Yahyanejad. Effect of Gromov-hyperbolicity parameter on cuts and expansions in graphs and some algorithmic implications. Algorithmica, 80(2):772–800, 2018.

[14] É. Diot, M. Radovanović, N. Trotignon, and K. Vušković. The (theta, wheel)-free graphs Part I: only-prism and only-pyramid graphs. Journal of Combinatorial Theory, Series B, 143:123–147, 2020.

[15] V. Dujmović, G. Joret, P. Micek, P. Morin, T. Ueckerdt, and D. R. Wood. Planar graphs have bounded queue-number. Journal of the ACM, 67(4):1–38, 2020.

[16] M. Dumas, F. Foucaud, A. Perez, and I. Todinca. On graphs coverable by k shortest paths. In Proceedings of the 33rd International Symposium on Algorithms and Computation, ISAAC 2022, volume 248 of LIPIcs, pages 40:1–40:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.

[17] D. C. Fisher and S. L. Fitzpatrick. The isometric number of a graph. Journal of Combinatorial Mathematics and Combinatorial Computing, 38(1):97–110, 2001.

[18] S. L. Fitzpatrick, R. J. Nowakowski, D. A. Holton, and I. Caines. Covering hypercubes by isometric paths. Discrete Mathematics, 240(1-3):253–260, 2001.

[19] M. Gromov. Hyperbolic groups. In Essays in group theory, pages 75–263. Springer, 1987.

[20] J. M. Keil, J. S. B. Mitchell, D. Pradhan, and M. Vatshelle. An algorithm for the maximum weight independent set problem on outerstring graphs. Computational Geometry, 60:19–25, 2017.

[21] A. Kosowski, B. Li, N. Nisse, and K. Suchan. k-chordal graphs: From cops and robber to compact routing via treewidth. Algorithmica, 72(3):758–777, 2015.

[22] J. Kratochvíl. String graphs. I. The number of critical nonstring graphs is infinite. Journal of Combinatorial Theory, Series B, 52(1):53–66, 1991.

[23] J. Pan and G. J. Chang. Isometric-path numbers of block graphs. Information Processing Letters, 93(2):99–102, 2005.

[24] J. Pan and G. J. Chang. Isometric path numbers of graphs. Discrete Mathematics, 306(17):2091–2096, 2006.

[25] A. Rok and B. Walczak. Outerstring graphs are χ-bounded. SIAM Journal on Discrete Mathematics, 33(4):2181–2199, 2019.

[26] Y. Shavitt and T. Tankel. On the curvature of the internet and its usage for overlay construction and distance estimation. In IEEE INFOCOM 2004, volume 1. IEEE, 2004.

[27] M. Thiessen and T. Gaertner. Active learning of convex halfspaces on graphs. In Proceedings of the 35th Conference on Neural Information Processing Systems, NeurIPS 2021, volume 34, pages 23413–23425. Curran Associates, Inc., 2021.

[28] N. Trotignon. Perfect graphs: a survey. arXiv preprint arXiv:1301.5149, 2013.

[29] N. Trotignon. Private communication, 2022.

[30] K. Vušković. The world of hereditary graph classes viewed through Truemper configurations. Surveys in Combinatorics 2013, 409:265, 2013.

[31] J. A. Walter and H. Ritter. On interactive visualization of high-dimensional data using the hyperbolic plane. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 123–132, 2002.

[32] M. E. Watkins and D. M. Mesner. Cycles and connectivity in graphs. Canadian Journal of Mathematics, 19:1319–1328, 1967.
STOCHASTIC APPROACHES: MODELING THE PROBABILITY OF ENCOUNTERS BETWEEN H2-MOLECULES AND METALLIC ATOMIC CLUSTERS IN A CUBIC BOX

Maximiliano L. Riddick, Leandro Andrini
Instituto de Investigaciones Fisicoquimicas Teóricas y Aplicadas, Departamento de Química, Fac. de Ciencias Exactas (INIFTA / UNLP-CONICET); Departamento de Matemática, Fac. de Ciencias Exactas, UNLP; La Plata, Argentina
mriddick@mate.unlp.edu.ar

Enrique E. Álvarez
Instituto de Cálculo, Fac. Ciencias Exactas y Naturales, Ciudad Universitaria, Pabellón II, UBA (CABA); Departamento de Fisicomatemática, Fac. de Ingeniería, UNLP; Ciudad Autónoma de Buenos Aires and La Plata, Argentina

Félix G. Requejo
Instituto de Investigaciones Fisicoquimicas Teóricas y Aplicadas, Departamento de Química, Fac. de Ciencias Exactas (INIFTA / UNLP-CONICET); Departamento de Física, Fac. de Ciencias Exactas, UNLP; La Plata, Argentina

ABSTRACT

In recent years the advance of chemical synthesis has made it possible to obtain "naked" clusters of different transition metals. It is well known that cluster experiments allow studying the fundamental reactive behavior of catalytic materials in an environment that avoids the complications present in extended solid-phase research. In physicochemical terms, the question that arises is whether the chemical reduction of metallic clusters could be affected by the presence of H2 molecules, that is, by the probability of encounter that these small metal atomic agglomerates can have with these reducing species. Therefore, we consider the stochastic movement of N molecules of hydrogen in a cubic box containing M metallic atomic clusters in a confined region of the box. We use a Wiener process to simulate the stochastic process, with σ given by the Maxwell-Boltzmann relationships, which enabled us to obtain an analytical expression for the probability density function. This expression is exact, obtained under an original proposal outlined in this work, i.e. from considerations of mathematical rebounds. On this basis, we obtained the probability of encounter for three different volumes, 0.1^3, 0.2^3 and 0.4^3 m^3, at three different temperatures in each case, 293, 373 and 473 K, for 10^1 ≤ N ≤ 10^10, comparing the results with those obtained considering the distribution of the position as a Truncated Normal Distribution. Finally, we observe that the probability is significantly affected by the number N of molecules and by the size of the box, not by the temperature.

Keywords: Wiener Process · Probability of encounters · Molecular Collisions · Atomic-Clusters · Mathematical Rebounds

arXiv:2301.13797v1 [cond-mat.mtrl-sci] 10 Jan 2023
M.L. Riddick, Stochastic approaches: modeling the probability of encounters, arXiv.
1 Introduction

In the last two decades there has been an important development in cluster chemistry, and consequently new questions arise on the basis of these developments [1, 2, 3, 4, 5, 6, 7]. This interest is due to the fact that atomic clusters containing up to a few dozen atoms exhibit features that are very different from the corresponding bulk properties and that can depend very sensitively on cluster size [8]. In particular, many of these transition metal clusters are used in the field of catalysis [1, 9, 10, 11]. One of the basic principles of catalysis is that the smaller the metal particles, the larger the fraction of the metal atoms that are exposed at surfaces, where they are accessible to reactant molecules and available for catalysis [1]. It is well known in chemistry that the encounter between two molecules can give rise to a chemical reaction, and from the mathematical aspect there are two fundamental ways to represent these types of situations: as continuous, represented by differential equations whose variables are concentrations, or as discrete, represented by stochastic processes whose variables are the numbers of molecules [12].

Without loss of generality, it can be considered that molecular chemisorption is due to the encounter between a molecule and a surface (or a cluster in this case) with the energy necessary for the phenomenon of adsorption to occur [13]. Besides, the kinetics of hydrogen chemisorption by neutral gas-phase metal clusters exhibits a complex dependence on both cluster size and metal type [14]. For different chemical purposes, for example, in the case of copper clusters (Cun) it is very important to have control of the chemisorption of hydrogen on these clusters, i.e. the formation of Cun-H2 species [15].

From a reductionist point of view, molecular chemisorption is a problem of encounter between bodies: metal clusters and reactant molecules. In our first approximation (mathematical reduction) we will treat it as a problem of encounters or collisions between bodies. We are interested in this strategy because we are focused on answering what the probability of encounter is between N hydrogen molecules (N-H2) and M fixed metallic clusters (M-Men), for a given time t, where the H2 move freely in a bounded volume V of R3-space. Under this assumption, we consider the H2 molecules and Men clusters as rigid spheres of radii r1 and r2, respectively. Then, a collision occurs whenever the center-to-center distance between an H2 molecule and a Men cluster equals r12 = r1 + r2 [16]. Also, in this context we propose that the H2 molecules follow a Brownian motion, namely: (a) it has continuous trajectories (sample paths) and (b) the increments of the paths in disjoint time intervals are independent zero-mean Gaussian random variables with variance proportional to the duration of the time interval [17].

The pioneering work of D.T. Gillespie [16, 18, 19] has given rise to a large number of works that propose different algorithms for numerically simulating the time evolution of a well-stirred chemically reacting system, although despite recent major improvements in the efficiency of the stochastic simulation algorithm, its drawback remains the great amount of computer time that is often required to simulate a desired amount of system time [20]. While our method is a simple reduction to collisions of molecules, it allows calculating the probability of encounter (programmed in R) for a large number of molecules (≈ 10^6) and clusters (≈ 10^20) with advantages regarding the computational cost, and as a first approximation it can provide statistical support to the design of experiments. This calculation is possible using a stochastic model (Wiener process) in the context of considerations from the Maxwell-Boltzmann theory.
2 A first theoretical approach

As we announced in the introduction, we will assume that hydrogen molecules have a random movement, whence let H(t) = (X(t), Y(t), Z(t)) be the random variable which specifies the space point where the H2 hydrogen molecule is at time t. Trivially, H(t) depends on an initial point H(0) = (x0, y0, z0). Thus, when the initial starting point is undefined, H(t) = H(t, x0, y0, z0). Our interest is in how probable it is that the distance between H(t) and a fixed point (a, b, c) is smaller than ε. The fixed point (a, b, c) are the coordinates of Men.

Let us consider the random variable D(t) as the variable that measures the distance between H(t) and the fixed point (a, b, c). Following the classical Pythagorean relationship, D(t) = √((X(t) − a)² + (Y(t) − b)² + (Z(t) − c)²), and in general D(t) = D(t, x0, y0, z0, a, b, c).

Now, given a time window [0, τ], let

Rτ := 1 if min_{t ∈ [0,τ)} D(t) ≤ ε, and Rτ := 0 otherwise.   (1)

So, for a fixed t0 > 0, we define G(t0) = P(D(t0) ≤ ε) = ∫_0^ε f_{D(t0)}(s) ds, where f_{D(t0)} denotes the density of D(t0). Then, P(Rτ = 1) = ∫_0^τ G(t) dt.
Thus, given τ > 0, Rτ depends only on the initial values (x0, y0, z0, a, b, c). Now, if we have M-Men, the probability that the H2 molecule does not meet any of the clusters is P(Rτ1 = 0, Rτ2 = 0, ..., RτM = 0) = pA, where Rτi, i ∈ {1, ..., M}, follows the definition given in Eq. 1.

If N H2 molecules are in the environment, let Aj be the event "the j-th hydrogen molecule meets a metallic cluster". Under random starting points, we are interested in P(A1^C ∩ A2^C ∩ ... ∩ AN^C) = pA^N, according to the independence among the hydrogen molecules.
2.1 Adaptation to our context

Next, we carry out the analysis according to Brownian Motion Theory [17], in which the movement of the particle is independent among the different axes, and we are going to assume that it follows a Wiener process [21, 22]. Then,

X(t) = x0 + WX(t)
Y(t) = y0 + WY(t)
Z(t) = z0 + WZ(t)

and we will say that WX(t), WY(t) and WZ(t) follow Wiener processes with σ = √(kb T / m), where kb is Boltzmann's constant, T is the absolute temperature in Kelvin (K) and m is the H2 mass in kg. That is, we are imposing a physical behavior that obeys Maxwell-Boltzmann considerations. According to this:

X(t) ~ N(x0, σ²t)
Y(t) ~ N(y0, σ²t)
Z(t) ~ N(z0, σ²t)

with density functions fX(x, t|x0), fY(y, t|y0) and fZ(z, t|z0), respectively. Under these assumptions:

fX(x, t|x0) = (1/√(2πσ²t)) exp[−(1/2)((x − x0)/(σ√t))²]
fY(y, t|y0) = (1/√(2πσ²t)) exp[−(1/2)((y − y0)/(σ√t))²]
fZ(z, t|z0) = (1/√(2πσ²t)) exp[−(1/2)((z − z0)/(σ√t))²]
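The position law above can be sampled directly, since X(t) ~ N(x0, σ²t) exactly. The following minimal sketch (ours, not the authors' code; the numeric H2 mass is an approximate assumption) draws X(t) with σ = √(kb·T/m) for H2 at 293 K and checks the variance empirically:

```python
import math
import random
import statistics

# Sketch: one coordinate of the H2 position is X(t) = x0 + W(t), with W a
# Wiener process of scale sigma = sqrt(kb*T/m) (Maxwell-Boltzmann form).
kb = 1.380649e-23          # Boltzmann constant, J/K
m_h2 = 2 * 1.6735575e-27   # approximate mass of H2, kg (assumption)
T = 293.0                  # temperature, K
sigma = math.sqrt(kb * T / m_h2)

def sample_x(x0, t, rng):
    """Exact draw of X(t) ~ N(x0, sigma^2 * t) for a Wiener process started at x0."""
    return x0 + rng.gauss(0.0, sigma * math.sqrt(t))

rng = random.Random(0)
t = 1.0
xs = [sample_x(0.0, t, rng) for _ in range(200_000)]
var_hat = statistics.pvariance(xs)
print(sigma, var_hat / (sigma ** 2 * t))  # empirical variance should be close to sigma^2 * t
```

For H2 near room temperature this gives σ on the order of 10³ in SI units, consistent with the thermal velocity scale of hydrogen.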
2.1.1 Unbounded conditions

Under unbounded conditions, as is well known, the density of the particle position in space for a fixed t follows the expression:

fXYZ(x, y, z, t|x0, y0, z0) = fX(x, t|x0) · fY(y, t|y0) · fZ(z, t|z0)
= (1/(√(2πσ²t))³) exp[−(1/2)((x − x0)² + (y − y0)² + (z − z0)²)/(σ²t)]

This function is continuous in the variables x, y, z, t, and hence also integrable in a measurable context. Because of this fact, Fubini's theorem is applicable. Now, calling ν = ((x − x0)² + (y − y0)² + (z − z0)²)/(2σ²), and integrating over the variable t by the substitution u = √(ν/t), we obtain:

fXYZ(x, y, z, τ|x0, y0, z0) = (1/(ν(√(2πσ²))³)) ∫_{√(ν/τ)}^{∞} e^(−u²) du

Recalling that the erfc function [23] is defined by

erfc(z) = (2/√π) ∫_z^∞ e^(−t²) dt,

we conclude:

fXYZ(x, y, z, τ|x0, y0, z0) = (1/(2πν(√(2σ²))³)) erfc(√(ν/τ))

From the physical-experimental perspective from which the problem is laid out, the unbounded system lacks interest, so we will proceed to study the case of the bounded system.
253
+ will proceed to study the case of the bounded system.
254
+ 2.1.2
255
+ Bounded conditions
256
+ We assume that the experiment takes place into a cubic recipe centered at the origin. This implies that X(t), Y (t) and
257
+ Z(t) ∈ [−L; L], for a fixed volume V = L3 in R3-space.
258
+ In a similar issue the traditional way of approaching is by “truncation" [24, 25]. A drawback of this approach is the
259
+ fact that the truncation does not represent precisely the reflection on the boundaries. An illustrative and motivational
260
+ argument is given by the following example: suppose a random walk of N = 4 steps, with starting point at the origin.
261
+ Then, the walker moves 1 step at right or left (with equal probability) at each step. Then, after four steps, the resultant
262
+ probabilities of the walker position are:
263
+ 0; with probability 3/8,
264
+ −2 or 2; with probability 2/8,
265
+ −4 or 4; with probability 1/16.
The probability values (under truncation) in the closed interval [−2, 2] for the values (−2, −1, 0, 1, 2) are, respectively:

(2/7, 0, 3/7, 0, 2/7)

With fixed boundaries, considering reflections at [−2, 2], we can construct the following Markov transition matrix P (states −2, −1, 0, 1, 2):

P =
( 0    1    0    0    0 )
(1/2   0   1/2   0    0 )
( 0   1/2   0   1/2   0 )
( 0    0   1/2   0   1/2)
( 0    0    0    1    0 )

At the fourth step, after some algebra, we obtain the point-mass probabilities for the position of the walker. These are given by the stochastic vector

(1/4, 0, 1/2, 0, 1/4)

(the third row of P⁴, i.e. with starting point at the origin). At this point, the difference between truncation and "rebounds" (considering reflection on the boundary) is clear.
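The step-four distribution under reflection can be checked by direct matrix computation; a short sketch (ours) using exact rational arithmetic:

```python
from fractions import Fraction

# Reflecting random walk on the states (-2, -1, 0, 1, 2): build the
# transition matrix P given above and read off the third row of P^4,
# the distribution of a walker started at the origin.
F = Fraction
P = [
    [F(0),    F(1),    F(0),    F(0),    F(0)],
    [F(1, 2), F(0),    F(1, 2), F(0),    F(0)],
    [F(0),    F(1, 2), F(0),    F(1, 2), F(0)],
    [F(0),    F(0),    F(1, 2), F(0),    F(1, 2)],
    [F(0),    F(0),    F(0),    F(1),    F(0)],
]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P4 = P
for _ in range(3):
    P4 = matmul(P4, P)

print(P4[2])  # row for the walker started at the origin: (1/4, 0, 1/2, 0, 1/4)
```

Exact fractions make the comparison with the truncated vector (2/7, 0, 3/7, 0, 2/7) unambiguous.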
We must modify the density of the position H(t) according to the particle rebounds (see Fig. 1). It is important to note that the rebounds indicated in gray in the figure do not correspond to the physical rebounds of the particles in the cubic box, but to the contributions of the displaced distribution considering an infinite behavior.

Figure 1: In red, an arbitrary normal distribution, N(0, σ). In gray, the folding of the normal distribution at the edge of the box. Note that A + B = 2L.
Inside the box, the derived density fB, according to the variable X(t) ~ N(x0, σ²t) with density function fX of the particle position (for each dimension; see Fig. 1), follows the expression:

fB(x) = [fX(x) + fB+(x) + fB−(x)] × I_[−L,L](x)

where:

fB+(x) = f(x + 2A) + f(x + 2A + 2B) + f(x + 2A + 2B + 2A) + ...
= f(x + 2(L − x)) + f(x + 2(L − x) + 2(x − (−L))) + ...
= f(−x + 2L) + f(x + 4L) + f(−x + 6L) + ...
= Σ_{k=1}^{∞} f((−1)^k x + 2kL)
= Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)(((−1)^k x + 2kL − x0)/(σ√t))²]

and

fB−(x) = f(x − 2B) + f(x − 2B − 2A) + f(x − 2B − 2A − 2B) + ...
= f(x − 2(x − (−L))) + f(x − 2(x − (−L)) − 2(L − x)) + ...
= f(−x − 2L) + f(x − 4L) + f(−x − 6L) + ...
= Σ_{k=1}^{∞} f((−1)^k x − 2kL)
= Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)(((−1)^k x − 2kL − x0)/(σ√t))²]
The proof that fB is a density function follows directly from its definition. Trivially, fB > 0, and by construction:

∫_{−∞}^{∞} fB(t) dt = ∫_{−L}^{L} fB(t) dt = ∫_{−∞}^{∞} fX(t) dt = 1
For practical purposes, we now try to find an upper bound for this expression. In the proposed model, the constraint |(−1)^k x − x0| ≤ 2L follows directly. Using this constraint:

fB+(x) = Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)(((−1)^k x + 2kL − x0)/(σ√t))²]
≤ Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)((−2L + 2kL)/(σ√t))²]
= Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)((2(k − 1)L)/(σ√t))²]
= Σ_{k=0}^{∞} (1/√(2πσ²t)) exp[−(1/2)((2kL)/(σ√t))²]
= (1/√(2πσ²t)) Σ_{k=0}^{∞} {exp[−(1/2)(4L²/(σ²t))]}^(k²)

It is known that Σ_{k=0}^{∞} r^(k²) = 1/2 + (1/2)ΘE[3, 0, r], where ΘE is the Jacobi elliptic theta function [23]. So:

fB+(x) ≤ (1/√(2πσ²t)) {1/2 + (1/2)ΘE[3, 0, exp(−(1/2)(4L²/(σ²t)))]}

and

fB−(x) = Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)(((−1)^k x − 2kL − x0)/(σ√t))²]
≤ Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)((−2L − 2kL)/(σ√t))²]
= Σ_{k=1}^{∞} (1/√(2πσ²t)) exp[−(1/2)((−2(k + 1)L)/(σ√t))²]
= Σ_{k=2}^{∞} (1/√(2πσ²t)) exp[−(1/2)((−2kL)/(σ√t))²]
= (1/√(2πσ²t)) { Σ_{k=0}^{∞} {exp[−(1/2)(4L²/(σ²t))]}^(k²) − 1 − exp[−(1/2)(4L²/(σ²t))] }
= (1/√(2πσ²t)) { 1/2 + (1/2)ΘE[3, 0, exp(−(1/2)(4L²/(σ²t)))] − 1 − exp[−(1/2)(4L²/(σ²t))] }

Then,

fB+(x) + fB−(x) ≤ (1/√(2πσ²t)) { ΘE[3, 0, exp(−(1/2)(4L²/(σ²t)))] − exp[−(1/2)(4L²/(σ²t))] } = CB
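The bound fB+ + fB− ≤ CB can be probed numerically; a sketch (ours, with illustrative parameter values) that evaluates the image sums on a grid and compares against CB, using ΘE(3, 0, r) = 1 + 2 Σ_{k≥1} r^(k²):

```python
import math

# Check fB+(x) + fB-(x) <= CB on [-L, L], with
# CB = (theta3(r) - r) / sqrt(2*pi*sigma2t), r = exp(-2*L^2 / sigma2t).
L, x0, sigma2t = 1.0, 0.3, 0.5   # illustrative values

def f_free(u):
    return math.exp(-0.5 * (u - x0) ** 2 / sigma2t) / math.sqrt(2 * math.pi * sigma2t)

def rebound_terms(x, kmax=200):
    """fB+(x) + fB-(x), truncated at kmax image terms."""
    return sum(f_free((-1) ** k * x + 2 * k * L) + f_free((-1) ** k * x - 2 * k * L)
               for k in range(1, kmax + 1))

r = math.exp(-2.0 * L * L / sigma2t)
theta3 = 1.0 + 2.0 * sum(r ** (k * k) for k in range(1, 200))
CB = (theta3 - r) / math.sqrt(2 * math.pi * sigma2t)

worst = max(rebound_terms(-L + i * 0.01) for i in range(201))
print(worst, CB)  # the grid maximum stays below CB
```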
For each x ∈ [−L, L], fB(x) ≤ fX(x) + CB. Besides, CB does not depend on x. Consequently, we have a maximum for the density fB, which is equal to f(x0) + CB.

Calling PB = (f(x0) + CB) · 2ε, we can conclude that

P(X(t) ∈ (a0 − ε, a0 + ε)) ≤ PB, for any a0.

Analogously, CB is the same for the variables Y(t) and Z(t), and we know that f(x0) = f(y0) = f(z0). Then, the same result holds for the variables Y(t) and Z(t). Given the bound CB, the uniform convergence of the series fB+ and fB− follows directly (by the Weierstrass M-test). An important fact to remark is that PB need not itself be a probability, but in the case of interest we know that it is a real number larger than the desired probability, and then, under certain conditions, we can work with it.
For practical purposes, the error introduced by the CB implementation can be minimized, since the first S terms are available and the tail can be compared with them:

Σ_{k=0}^{S−1} r^(k²) ≤ Σ_{k=0}^{∞} r^(k²) = Σ_{k=0}^{S−1} r^(k²) + Σ_{k=S}^{∞} r^(k²)

And,

Σ_{k=S}^{∞} r^(k²) = Σ_{k=0}^{∞} r^((k+S)²) = Σ_{k=0}^{∞} r^(k² + 2kS + S²) = r^(S²) Σ_{k=0}^{∞} r^(k²) r^(2kS) ≤ r^(S²) Σ_{k=0}^{∞} r^(k²)

Then,

Σ_{k=S}^{∞} r^(k²) ≤ r^(S²) {1/2 + (1/2)ΘE[3, 0, r]}

Controlling the value of S controls the error made by truncating the sum. As we said, CB does not depend on x; thus, the desired probability can be estimated to any degree of accuracy, according to the computational cost necessary for this development.
687
+ necessary to this development.
688
+ Taking into consideration the Brownian Motion Theory, in the time lapse of 1 second, the particle position under
689
+ unbounded conditions follows a N(x0, σ2) distribution. To discretize the problem, if we partitioned the time axis of τ
690
+ seconds in τ intervals of 1 second each one, then:
691
+ P(H(t) ∈ Bϵ(a, b, c)) ≤ P(H(t) ∈ Qϵ(a, b, c))
692
+ where Qϵ(a, b, c) denotes the cube centered in (a, b, c) with side size 2 × ϵ. And, considering the independence
693
+ between X(t), Y (t) and Z(t), with X(t) ∈ (a − ϵ, a + ϵ), Y (t) ∈ (b − ϵ, b + ϵ) and Z(t) ∈ (c − ϵ, c + ϵ),
694
+ P(H(t) ∈ Qϵ(a, b, c)) = P(H ∈ Qϵ) is
695
+ P(H ∈ Qϵ) = P(X(t)) × P(Y (t)) × P(Z(t)) ≤ PB × PB × PB = P 3
696
+ B
697
+ For each second τj for τj ∈ {1 : τ}, P(H(t) ∈ Qϵ(a, b, c)) ≤ P 3
698
+ B. Then, under the Wiener process formulation,
699
+ H(τj) ⊥ H(τk|τj) if j ̸= k, j ≤ k.
700
+ 7
701
+
702
+ M.L. Riddick, Stochastic approaches: modeling the probability of encounters, arXiv.
P(H(τj) ∈ Qε(a, b, c)) ≤ PB³, ∀ τj ∈ {1 : τ}. Calling F := "# of τj ∈ {1 : τ} in which H(τj) ∈ Qε(a, b, c)", we are interested in the event F = 0.

By its nature, F is a Binomial random variable B(τ, PB³). Consequently, the non-collision probability is pNC = P(F = 0) ≤ (1 − PB³)^τ. At this point, we can only conclude that the probability of an encounter between a hydrogen molecule and a Men cluster in a time τ is less than p. We proceed to analyze what happens when the numbers of hydrogen molecules and metallic clusters increase. We emphasize that the H2 molecules have a random movement while the clusters are confined to a fixed region of space. Since p is the probability that a random hydrogen molecule meets the cube Qε in which a Men cluster lies, the most unfavorable case with M clusters is when there is no intersection among the cubes that contain them. In this case:

pA = P(Rτ1 = 0, ..., RτM = 0)
= 1 − P(∪_{i=1}^{M} {Rτi = 1})
≥ 1 − Σ_{i=1}^{M} P(Rτi = 1)
= 1 − M × p

In view of this analysis, we can conclude that the non-collision probability is higher than pNC.

Under regular conditions, when this approach is used, the values of pNC and N lead to severe numerical instability. In this case, the small value of pNC and the large value of N place us in conditions to use the Poisson approximation to the Binomial distribution (with parameter λ = N × p). Then, P(X = 0) ≈ exp(−λ). Even in the cases when the probability itself is still unavailable, the expected number of collisions is given for a time window, and then we can estimate the probability of collisions in a time window T using the relationship between the Poisson and Exponential distributions [26].
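The quality of the Poisson approximation can be illustrated with made-up values of N and p (a sketch of ours, not the paper's data):

```python
import math

# With p small and N large, the number of "colliding" molecules is
# approximately Poisson with lambda = N * p, so the no-collision probability
# (1 - p)^N is close to exp(-lambda).
N, p = 10**6, 2e-7        # illustrative values
lam = N * p               # lambda = 0.2
binom_p0 = (1.0 - p) ** N
poisson_p0 = math.exp(-lam)
print(binom_p0, poisson_p0)  # the two agree to many digits
```

Computing exp(−λ) avoids raising (1 − p) to a huge power, which is exactly the numerical instability mentioned above.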
Next, we present the results of the analysis with different box dimensions (in meters) and numbers of hydrogen molecules (N), for M = 1.9 × 10^20 Cu20 clusters [27], where the Cu20 clusters have been considered as spheres.
3 Results and analysis

3.1 Obtaining non-collision probability values

The situation we consider is approximately a "realistic" situation, with M = 1.9 × 10^20 Cu20 clusters in a cubic box according to the standard dimensions of reaction chambers (0.1^3, 0.2^3 and 0.4^3 m^3), and a variable N-H2-molecule "contamination" (10^1 ≤ N ≤ 10^10). We worked with three temperatures T: 293, 373 and 473 K. The choice of T is arbitrary, conditioned by the possible reaction temperatures [28].

In Fig. 2 we observe the results obtained for the simulations, considering the maximum sum. That is, we take S = 10^6, perform the sum, and add the maximum level for the error. Clearly, a greater non-collision probability, pNC, is observed as the volume increases.
For a detailed study, we proceed as follows: we model the data obtained through a non-linear graphic fitting considering a Boltzmann decrease function, g(x) = A2 + (A1 − A2)/(1 + exp((x − x0)/dx)) (see Fig. 3). In Appendix A.2 we show the statistical results for each parameter in each data fitting.

Under these considerations, we can calculate the critical value (criticality) [29, 30] of hydrogen molecules, that is, "the value of N for which the non-collision probability is greater than 1/2", i.e. the value of the exponent for which 1/2 < pNC.

It should be clarified that, in the strict physical sense, there is no abrupt phase transition to support the term "criticality". As we assumed in the introduction, we consider that there is a chemical reaction if there is an encounter between two molecules, and under this assumption we regard as critical the level of hydrogen present for a chemical reaction to occur. In any case, it can be shown that there is an "abrupt" transition behavior over a well-defined interval in the number of molecules. In Fig. 3 we can observe this behavior.
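Given fitted parameters, the criticality condition g(x) = 1/2 can be inverted in closed form: x_crit = x0 + dx · ln((A1 − A2)/(1/2 − A2) − 1). A sketch (ours, with hypothetical fit parameters, not the values from Appendix A.2):

```python
import math

# Boltzmann decrease function g(x) = A2 + (A1 - A2) / (1 + exp((x - x0)/dx));
# solving g(x) = 1/2 for x gives the critical exponent in closed form.
A1, A2, x0, dx = 0.99, 0.02, 5.0, 0.4   # hypothetical fit parameters

def g(x):
    return A2 + (A1 - A2) / (1.0 + math.exp((x - x0) / dx))

x_crit = x0 + dx * math.log((A1 - A2) / (0.5 - A2) - 1.0)
print(x_crit, g(x_crit))  # g(x_crit) recovers 1/2
```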
Figure 2: Results for the non-collision probability, pNC, vs. ln(N) for L = 0.05 m (V1), L = 0.1 m (V2) and L = 0.2 m (V3), at T = 293 K (blue square), 373 K (black star) and 473 K (red triangle).

Figure 3: Data (blue squares) modeled using a non-linear Boltzmann decrease function (green line).
Table 1: Critical values obtained from the decrease model for each box and each temperature, for mathematical rebounds.

L [m]   293 K   373 K   473 K
0.05    3.25    3.16    3.09
0.10    4.97    4.86    4.72
0.20    5.96    5.96    5.96
Figure 4: Results for the non-collision probability, pNC, vs. ln(N) for L = 0.05 m (V1), at T = 293 K, 373 K and 473 K. Comparison between models: "xxx Trunc" corresponds to the truncated normal model and "xxx K" to the mathematical rebound model.
In Table 1 we can see the critical values obtained from the decrease model for each box and each temperature. For the smallest volumes, V1 and V2, it is observed that the critical value of N depends more strongly on the temperature than in the case of the larger volume (V3). The dependence on the size of the box is also remarkable, as can be seen directly from Fig. 2. In this way, and under these simplified assumptions, we can obtain control of contaminant molecules in relation to the volume and temperature parameters. Linear behavior is evident from the values obtained (Table 1, N vs. temperature). Moreover, as the volume increases the slope increases from negative values to zero.
852
+ By way of conclusion, it can be indicated that considering a Wiener stochastic process, for thermodynamic-statistical
853
+ movements of a gas confined in a box, and considering mathematical rebounds bounded by the physical-geometric
854
+ contour of the problem, the analytical expression could be obtained for the probability density function of encounters
855
+ between two differentiated species of molecules (one of the species fixed in the box -solid or liquid- and the other
856
+ species is a gas whose molecules move stochastically). In addition, the function obtained can be calculated numerically
857
+ or can be bounded. The bounded process allows to reduce the computational cost, and to limit the error from cutting the
858
+ Table 2: Critical values obtained from the decrease model for each box and each temperature, for truncated normal
859
+ model.
860
+ L [m]
861
+ 293 K
862
+ 373 K
863
+ 473 K
864
+ 0.05
865
+ 3.27
866
+ 3.18
867
+ 3.12
868
+ 0.10
869
+ 5.01
870
+ 4.91
871
+ 4.76
872
+ 0.20
873
+ 6.00
874
+ 5.98
875
+ 5.96
876
+ 10
877
+
878
+ 293 Trunc
879
+ ☆373 Trunc
880
+ 473 Trunc
881
+
882
+ 293 K
883
+ 373 K
884
+ 473 K
885
+ 1.0 0
886
+ 0
887
+
888
+ Non-collision probability
889
+ 0.8
890
+ 8
891
+ 0.6 -
892
+ 0.4
893
+ 0.2 +
894
+ 0.0+
895
+ 口口OO口
896
+ 1 2 3 4 5 6 7 8 9101 2 3 4 5 6 7 8 9101 2 3 4 5 6 7 8 910
897
+ Ln(N)M.L. Riddick, Stochastic approaches: modeling the probability of encounters, arXiv.
898
+ Figure 5: Results for the non-collision probability, pNC, vs. ln(N) for L = 0.1m (V1), at T = 293 K, 373 K and 493 K.
899
+ Comparison between models:“xxx Trunc”correspond to the truncated normal model and “xxx K”to the mathematical
900
+ rebound model.
901
+ Figure 6: Results for the non-collision probability, pNC, vs. ln(N) for L = 0.2m (V1), at T = 293 K, 373 K and 493 K.
902
+ Comparison between models:“xxx Trunc”correspond to the truncated normal model and “xxx K”to the mathematical
903
+ rebound model.
904
+ 11
905
+
906
+ 293 Trunc
907
+ ☆373 Trunc
908
+ → 473 Trunc
909
+
910
+ 293
911
+ ★373
912
+ 473
913
+ ★★★
914
+
915
+ Non-collision probability
916
+ 0.8.
917
+ 0.6 -
918
+ 0.4
919
+ 0.2 +
920
+ 0.0 -
921
+ OOOOG
922
+ 1 2 3 4 5 6 7 8 9101 2 3 4 5 6 7 8 9101 2 3 4 5 6 7 8 910
923
+ Ln(N)293 Trunc
924
+ ☆373 Trunc
925
+ → 473 Trunc
926
+
927
+ 293 K
928
+ 373 K
929
+ 473K
930
+ 1.0-
931
+ Non-collision probability
932
+ 0.8
933
+ 0.6
934
+ 0.4
935
+ 0.2
936
+ 0.0-
937
+ 123456789101234567891012345678910
938
+ Ln(N)M.L. Riddick, Stochastic approaches: modeling the probability of encounters, arXiv.
939
+ sum in a finite number. In particular, there is an error control that can be made, and it is possible to refine the process
940
+ according to the precision required.
From the physical-chemical point of view, it is observed that both the number of gas molecules and the dimensions of the box affect the probability of encounter. For this model, temperature is a parameter that has a lower incidence on the values of the probability of encounter. At this point some considerations have to be made. The first is that, in a strict sense, a chemical reaction is more than the encounter of two chemical entities. The second is the exceptional chemical nature of metal clusters, which makes them highly reactive. Despite the simplicity of the model we are proposing, it can inform an experiment design about the collision probability between two chemical entities (and this collision can lead to a chemical reaction).

From the point of view of computation, it is a system that requires less computational cost (time + memory) than the algorithmic systems developed for this type of problem, so it contributes as a test method in the design of experiments. The comparison with an established method (truncated normal model) was favorable. In the method of mathematical rebounds the number of molecules needed for a reaction is less than the number obtained by the truncated normal model. This is an advantage when strict contamination control is needed.

On the other hand, in terms of obtaining the density function, the mathematical results can be generalized to volumes of rectangular prisms with unequal sides. In addition, it remains to calculate the first- and second-order moments of the density function obtained, work that exceeded the purposes of the present communication.

Acknowledgments

This work was supported in part by PICT-2019-0784, PICT-2017-3944, PICT-2017-1220, PICT-2017-3150 (PICT, Agencia Nacional de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación) and PPID-I231 (PPID, Universidad Nacional de La Plata).
960
+ References
961
+ [1] Bruce C. Gates. Supported metal clusters: synthesis, structure, and catalysis. Chemical reviews, 95(3):511–522,
962
+ 1995.
963
+ [2] M Arturo López-Quintela. Synthesis of nanomaterials in microemulsions: formation mechanisms and growth
964
+ control. Current Opinion in Colloid & Interface Science, 8(2):137–144, 2003.
965
+ [3] Puru Jena and A. Welford Castleman Jr. Clusters: A bridge across the disciplines of physics and chemistry.
966
+ Proceedings of the National Academy of Sciences, 103(28):10560–10569, 2006.
967
+ [4] Shahana Huseyinova, Joseé Blanco, Feélix G. Requejo, Joseé M Ramallo-López, M Carmen Blanco, David
968
+ Buceta, and M Arturo Loópez-Quintela. Synthesis of highly stable surfactant-free cu5 clusters in water. The
969
+ Journal of Physical Chemistry C, 120(29):15902–15908, 2016.
970
+ [5] Lichen Liu and Avelino Corma. Confining isolated atoms and clusters in crystalline porous materials for catalysis.
971
+ Nature Reviews Materials, 6(3):244–263, 2021.
972
+ [6] Huixia Luo, Peifeng Yu, Guowei Li, and Kai Yan. Topological quantum materials for energy conversion and
973
+ storage. Nature Reviews Physics, 4(9):611–624, 2022.
974
+ [7] Seunghoon Lee, Joonho Lee, Huanchen Zhai, Yu Tong, Alexander M Dalzell, Ashutosh Kumar, Phillip Helms,
975
+ Johnnie Gray, Zhi-Hao Cui, Wenyuan Liu, et al. Is there evidence for exponential quantum advantage in quantum
976
+ chemistry? arXiv preprint arXiv:2208.02199, 2022.
977
+ [8] Mingli Yang, Koblar A Jackson, Christof Koehler, Thomas Frauenheim, and Julius Jellinek. Structure and shape
978
+ variations in intermediate-size copper clusters. The Journal of chemical physics, 124(2):024308, 2006.
979
+ [9] Manfred T Reetz and Wolfgang Helbig. Size-selective synthesis of nanostructured transition metal clusters.
980
+ Journal of the American Chemical Society, 116(16):7401–7402, 1994.
981
+ [10] John D Aiken III and Richard G Finke. A review of modern transition-metal nanoclusters: their synthesis,
982
+ characterization, and applications in catalysis. Journal of Molecular Catalysis A: Chemical, 145(1-2):1–44, 1999.
983
+ [11] Gareth S Parkinson. Unravelling single atom catalysis: The surface science approach. arXiv preprint
+ arXiv:1706.09473, 2017.
987
+ [12] Michael A Gibson and Jehoshua Bruck. Efficient exact stochastic simulation of chemical systems with many
988
+ species and many channels. The journal of physical chemistry A, 104(9):1876–1889, 2000.
989
+
991
+ M.L. Riddick, Stochastic approaches: modeling the probability of encounters, arXiv.
992
+ [13] David E Brown, Douglas J Moffatt, and Robert A Wolkow. Isolation of an intrinsic precursor to molecular
993
+ chemisorption. Science, 279(5350):542–544, 1998.
994
+ [14] MR Zakin, RO Brickman, DM Cox, and A Kaldor. Dependence of metal cluster reaction kinetics on charge state.
995
+ ii. chemisorption of hydrogen by neutral and positively charged iron clusters. The Journal of chemical physics,
996
+ 88(10):6605–6610, 1988.
997
+ [15] Xiang-Jun Kuang, Xin-Qiang Wang, and Gao-Bin Liu. A density functional study on the adsorption of hydrogen
998
+ molecule onto small copper clusters. Journal of Chemical Sciences, 123(5):743–754, 2011.
999
+ [16] Daniel T Gillespie. Exact stochastic simulation of coupled chemical reactions. The journal of physical chemistry,
1000
+ 81(25):2340–2361, 1977.
1001
+ [17] Zeev Schuss. Theory and applications of stochastic processes: an analytical approach, volume 170. Springer
1002
+ Science & Business Media, 2009.
1003
+ [18] Daniel T Gillespie. A general method for numerically simulating the stochastic time evolution of coupled chemical
1004
+ reactions. Journal of computational physics, 22(4):403–434, 1976.
1005
+ [19] Daniel T Gillespie. Concerning the validity of the stochastic approach to chemical kinetics. Journal of Statistical
1006
+ Physics, 16(3):311–318, 1977.
1007
+ [20] Daniel T Gillespie. Approximate accelerated stochastic simulation of chemically reacting systems. The Journal of
1008
+ chemical physics, 115(4):1716–1733, 2001.
1009
+ [21] Ben Leimkuhler and Charles Matthews. Molecular dynamics. Interdisciplinary applied mathematics, 36, 2015.
1010
+ [22] Ben Leimkuhler and Charles Matthews. Numerical methods for stochastic molecular dynamics. In Molecular
1011
+ Dynamics, pages 261–328. Springer, 2015.
1012
+ [23] Wilhelm Magnus, Fritz Oberhettinger, and Raj Pal Soni. Formulas and theorems for the special functions of
1013
+ mathematical physics, volume 52. Springer Science & Business Media, 2013.
1014
+ [24] James J Heckman. The common structure of statistical models of truncation, sample selection and limited
1015
+ dependent variables and a simple estimator for such models. In Annals of economic and social measurement,
1016
+ volume 5, number 4, pages 475–492. NBER, 1976.
1017
+ [25] Charles M Stein. Estimation of the mean of a multivariate normal distribution. The annals of Statistics, pages
1018
+ 1135–1151, 1981.
1019
+ [26] Jeroen Gerritsen and J Rudi Strickler. Encounter probabilities and community structure in zooplankton: a
1020
+ mathematical model. Journal of the Fisheries Board of Canada, 34(1):73–82, 1977.
1021
+ [27] Leandro Andrini, Germán J Soldano, Marcelo M Mariscal, Félix G Requejo, and Yves Joly. Structure stability of
1022
+ free copper nanoclusters: Fsa-dft cu-building and fdm-xanes study. Journal of Electron Spectroscopy and Related
1023
+ Phenomena, 235:1–7, 2019.
1024
+ [28] Avelino Corma, Patricia Concepción, Mercedes Boronat, María J Sabater, Javier Navas, Miguel José Yacaman,
1025
+ Eduardo Larios, Álvaro Posadas, M Arturo López-Quintela, David Buceta, Ernest Mendoza, Gemma Guilera,
1026
+ and Álvaro Mayoral. Exceptional oxidation activity with size-controlled supported gold clusters of low atomicity.
1027
+ Nature Chemistry, 5(9):775–781, 2013.
1028
+ [29] Per Bak and Maya Paczuski. Complexity, contingency, and criticality. Proceedings of the National Academy of
1029
+ Sciences, 92(15):6689–6696, 1995.
1030
+ [30] Terrie M. Williams. Criticality in stochastic networks. Journal of the Operational Research Society, 43(4):353–357,
1031
+ 1992.
1032
+ Appendix
1033
+ A.1
1034
+ Errors in the Boltzmann model for the probability calculated according to mathematical rebounds.
1035
+ Program used: Origin 9.1
1036
+ In all cases, the number of points is 10 and the degrees of freedom is 6.
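The Boltzmann model fitted here is Origin's four-parameter sigmoid, y(x) = A2 + (A1 − A2)/(1 + exp((x − x0)/dx)), which passes through the midpoint (A1 + A2)/2 at x = x0. A minimal sketch evaluating it (the functional form is the standard Origin Boltzmann fit and is assumed here; the parameter values are taken from the first table below, L=0.05 m, T = 293 K):

```python
import math

def boltzmann(x, A1, A2, x0, dx):
    """Four-parameter Boltzmann sigmoid: A2 + (A1 - A2) / (1 + exp((x - x0) / dx))."""
    return A2 + (A1 - A2) / (1.0 + math.exp((x - x0) / dx))

# Fitted values from the L=0.05 m, T = 293 K table below
A1, A2, x0, dx = 0.991, -0.0060, 4.98, 0.30

# At x = x0 the curve passes through the midpoint (A1 + A2) / 2
print(boltzmann(x0, A1, A2, x0, dx))
```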
1037
+
+ L=0.05 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.991    0.009
+ A2          -0.0060   0.0008
+ x0           4.98     0.02
+ dx           0.30     0.03
+ Reduced Chi-Sqr: 2.66387 × 10^-4
+ Residual Sum of Squares: 0.0016
+ Adj. R-Square: 0.99888
+
+ L=0.05 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.981    0.006
+ A2          -0.005    0.003
+ x0           3.16     0.01
+ dx           0.22     0.02
+ Reduced Chi-Sqr: 7.93743 × 10^-5
+ Residual Sum of Squares: 4.76246 × 10^-4
+ Adj. R-Square: 0.99957
+
+ L=0.05 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.976    0.008
+ A2          -0.0011   0.0009
+ x0           3.10     0.02
+ dx           0.21     0.03
+ Reduced Chi-Sqr: 1.30175 × 10^-4
+ Residual Sum of Squares: 7.8105 × 10^-4
+ Adj. R-Square: 0.99927
+
+ L=0.1 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.991    0.009
+ A2          -0.0060   0.0011
+ x0           4.98     0.02
+ dx           0.30     0.03
+ Reduced Chi-Sqr: 2.66387 × 10^-4
+ Residual Sum of Squares: 0.0016
+ Adj. R-Square: 0.99888
+
+ L=0.1 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.994    0.008
+ A2          -0.006    0.002
+ x0           4.88     0.02
+ dx           0.33     0.02
+ Reduced Chi-Sqr: 1.88888 × 10^-4
+ Residual Sum of Squares: 0.00113
+ Adj. R-Square: 0.9992
+
+ L=0.1 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.996    0.005
+ A2          -0.0043   0.0019
+ x0           4.73     0.01
+ dx           0.33     0.01
+ Reduced Chi-Sqr: 7.76599 × 10^-5
+ Residual Sum of Squares: 4.65959 × 10^-4
+ Adj. R-Square: 0.99967
+
+ L=0.2 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.993    0.003
+ A2          -0.0082   0.0025
+ x0           5.97     0.02
+ dx           0.31     0.03
+ Reduced Chi-Sqr: 2.64593 × 10^-4
+ Residual Sum of Squares: 0.00159
+ Adj. R-Square: 0.9989
+
+ L=0.2 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.993    0.008
+ A2          -0.0082   0.0025
+ x0           5.97     0.02
+ dx           0.31     0.03
+ Reduced Chi-Sqr: 2.64587 × 10^-4
+ Residual Sum of Squares: 0.00159
+ Adj. R-Square: 0.9989
+
+ L=0.2 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.993    0.008
+ A2          -0.0082   0.0025
+ x0           5.97     0.02
+ dx           0.31     0.03
+ Reduced Chi-Sqr: 2.64587 × 10^-4
+ Residual Sum of Squares: 0.00159
+ Adj. R-Square: 0.9989
1226
+ A.2
1227
+ Errors in the Boltzmann model for the probability calculated according to the truncated normal model.
1228
+ L=0.05 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.982    0.006
+ A2          -0.0003   0.0001
+ x0           3.29     0.01
+ dx           0.24     0.01
+ Reduced Chi-Sqr: 6.6611 × 10^-5
+ Residual Sum of Squares: 0.000399
+ Adj. R-Square: 0.99965
+
+ L=0.05 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.982    0.006
+ A2          -0.004    0.003
+ x0           3.19     0.02
+ dx           0.22     0.02
+ Reduced Chi-Sqr: 6.69947 × 10^-5
+ Residual Sum of Squares: 4.0196 × 10^-4
+ Adj. R-Square: 0.99960
+
+ L=0.05 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.982    0.005
+ A2          -0.0004   0.0003
+ x0           3.19     0.01
+ dx           0.22     0.02
+ Reduced Chi-Sqr: 6.69508 × 10^-4
+ Residual Sum of Squares: 4.01705 × 10^-4
+ Adj. R-Square: 0.99964
+
+ L=0.1 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.991    0.003
+ A2          -0.0022   0.0009
+ x0           5.02     0.07
+ dx           0.21     0.03
+ Reduced Chi-Sqr: 5.5392 × 10^-5
+ Residual Sum of Squares: 0.00033
+ Adj. R-Square: 0.99977
+
+ L=0.1 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.992    0.006
+ A2          -0.0047   0.0025
+ x0           4.92     0.01
+ dx           0.29     0.02
+ Reduced Chi-Sqr: 1.20178 × 10^-4
+ Residual Sum of Squares: 0.000721
+ Adj. R-Square: 0.9995
+
+ L=0.1 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.995    0.005
+ A2          -0.0045   0.0025
+ x0           4.77     0.05
+ dx           0.33     0.01
+ Reduced Chi-Sqr: 8.70051 × 10^-5
+ Residual Sum of Squares: 5.2203 × 10^-4
+ Adj. R-Square: 0.99963
+
+ L=0.2 m, T = 293 K
+ Parameter   Value     Standard Error
+ A1           0.992    0.003
+ A2          -0.0078   0.0065
+ x0           6.01     0.02
+ dx           0.29     0.03
+ Reduced Chi-Sqr: 2.75184 × 10^-4
+ Residual Sum of Squares: 0.00165
+ Adj. R-Square: 0.99885
+
+ L=0.2 m, T = 373 K
+ Parameter   Value     Standard Error
+ A1           0.990    0.007
+ A2          -0.0068   0.0075
+ x0           5.99     0.02
+ dx           0.28     0.03
+ Reduced Chi-Sqr: 2.07471 × 10^-4
+ Residual Sum of Squares: 0.00124
+ Adj. R-Square: 0.99913
+
+ L=0.2 m, T = 473 K
+ Parameter   Value     Standard Error
+ A1           0.993    0.008
+ A2          -0.0082   0.0025
+ x0           5.97     0.02
+ dx           0.31     0.03
+ Reduced Chi-Sqr: 2.64587 × 10^-4
+ Residual Sum of Squares: 0.00159
+ Adj. R-Square: 0.9989
1418
+
9tAzT4oBgHgl3EQfg_wg/content/tmp_files/2301.01476v1.pdf.txt ADDED
@@ -0,0 +1,1696 @@
+ Lessons Learned Applying Deep Learning Approaches to
2
+ Forecasting Complex Seasonal Behavior
3
+
4
+ Andrew T. Karl1, James Wisnowski1, Lambros Petropoulos2
5
+
6
+ 1Adsurgo LLC, Pensacola, FL
7
+ 2USAA, San Antonio, TX
8
+
9
+
10
+
11
+ Abstract
12
+ Deep learning methods have gained popularity in recent years through the media and the
13
+ relative ease of implementation through open source packages such as Keras. We
14
+ investigate the applicability of popular recurrent neural networks in forecasting call
15
+ center volumes at a large financial services company. These series are highly complex
16
+ with seasonal patterns - between hours of the day, day of the week, and time of the year -
17
+ in addition to autocorrelation between individual observations. Though we investigate the
18
+ financial services industry, the recommendations for modeling cyclical nonlinear
19
+ behavior generalize across all sectors. We explore the optimization of parameter settings
20
+ and convergence criteria for Elman (simple), Long Short-Term Memory (LSTM), and
21
+ Gated Recurrent Unit (GRU) RNNs from a practical point of view. A designed
22
+ experiment using actual call center data across many different “skills” (incoming call
23
+ streams) compares performance measured by validation error rates of the best observed
24
+ RNN configurations against other modern and classical forecasting techniques. We
25
+ summarize the utility of and considerations required for using deep learning methods in
26
+ forecasting.
27
+ Key Words: ARIMA, Time Series
28
+
29
+
30
+ 1. Introduction
31
+
32
+ Member contact call centers receive fluctuating call volumes depending on the day of the
33
+ week, the time of day, holidays, business conditions, and other factors. It is important for
34
+ call center managers to have accurate predictions of future call volumes in order to manage
35
+ staffing levels efficiently. The call center arrival process has been well documented and
36
+ explored in the literature (Gans, Koole, & Mandelbaum, 2003). In the application presented
37
+ here, there are several different “skills” (or “splits”) to which an incoming call may be
38
+ routed – depending on the capabilities of the call center agents – and an arrival volume
39
+ forecast is required for each skill in the short term for day-ahead or week-ahead predictions.
40
+
41
+ The weekly seasonality found in call arrivals can be modeled effectively through a variety
42
+ of methods to include Winters’ Seasonal Smoothing (Winters, 1960) or Autoregressive
43
+ Integrated Moving Average (Box & Jenkins, 1970). Some accessible references for many
44
+ of these concepts aimed at the practitioner are well documented in the literature (e.g.
45
+ Bisgaard & Kulahci (2007, 2008)) while recommended texts are Bisgaard & Kulahci
46
+ (2011) and Montgomery, Jennings & Kulahci (2015).
47
+
48
+ Aiming to improve on these classic methods, “doubly stochastic” linear mixed models
49
+ (Aldor-Noiman, Feigin, & Mandelbaum, 2009) have effectively modeled additional
50
+
51
+ complexities as outlined in a recent review paper from Ibrahim, Ye, L'Ecuyer, & Shen
52
+ (2016). Similarly, Recurrent Neural Networks (RNNs) have been recommended as deep
53
+ learning approaches to forecast call volume for a wireless network (Bianchi et al., 2017) in
54
+ addition to numerous other applications including ride volumes with Uber (Zhu & Laptev,
55
+ 2017). While the doubly stochastic and RNN approaches to predicting call volumes offer
56
+ greater flexibility in modeling complex arrival behavior by incorporating exogenous
57
+ variables, this flexibility comes at the cost of greater computational and programming
58
+ complexity (as well as greater prediction variance). This paper explores practical aspects
59
+ of managing that complexity for these models, applies the models to actual call volumes
60
+ recorded by a large financial services company, and compares the prediction capability to
61
+ that of the more traditional Winters smoothing and ARIMA models.
62
+
63
+ First, we modify computational aspects of the doubly stochastic approach proposed by
64
+ Aldor-Noiman, Feigin, & Mandelbaum (2009) to improve call center forecasting
65
+ performance. Doubly stochastic implies a two-level randomization where not only are call
66
+ arrivals random variables, but also the call arrival mean parameter. Forecasts are produced
67
+ by taking advantage of the unique correlation structure for each split while accounting for
68
+ trend, seasonality, cyclical behavior, and serial dependence. The doubly stochastic model
69
+ is more complex than ordinary regression as it accounts for both inter- and intra-day
70
+ correlation. We suggest modifications to the originally proposed approach that lead to more
71
+ stable convergence and more flexible behavior when many splits need to be fit.
72
+
73
+ Secondly, we consider how RNNs may be used to model incoming call volume. Whereas
74
+ “traditional” densely connected, feedforward neural networks process each data point
75
+ independently, RNNs process sequences according to temporal ordering and retain
76
+ information from previous points in the sequence. As it processes points within sequences,
77
+ the RNN maintains states that contain information about what it has seen previously in the
78
+ sequence (Chollet & Allaire, 2018). This intra-sequence memory is useful in time series
79
+ applications to autocorrelated data. In the context of call center volumes, these sequences
80
+ could be constructed to correspond to individual days of observations over a fixed number
81
+ of (e.g. 30 minute) periods. Bianchi et al. (2017) consider three different RNN architectures
82
+ to model incoming call volume over a mobile phone network: Elman Recurrent Neural
83
+ Networks (ERNN) (Elman, 1990), Gated Recurrent Units (GRU) (Cho, et al., 2014), and
84
+ Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), listed in order of
85
+ increasing complexity.
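The day-long sequences described above can be sketched as follows (pure-Python illustration with dummy data; the 32-periods-per-day layout follows the example used elsewhere in the text):

```python
# Reshape a flat series of per-period call counts into day-long sequences
# for an RNN, pairing each day with the following day as the target.
PERIODS_PER_DAY = 32

def make_day_sequences(series, periods_per_day=PERIODS_PER_DAY):
    """Return (inputs, targets): day t's periods predict day t+1's periods."""
    n_days = len(series) // periods_per_day
    days = [series[d * periods_per_day:(d + 1) * periods_per_day]
            for d in range(n_days)]
    return days[:-1], days[1:]

series = list(range(5 * PERIODS_PER_DAY))  # five dummy days of counts
X, y = make_day_sequences(series)
print(len(X), len(X[0]))  # 4 sequences of 32 periods each
```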
86
+ These three RNNs, along with the dense neural network, are now available via the R Keras
87
+ package (Allaire & Chollet, 2018). Once code has been written for one of the RNNs, the
88
+ user can switch between the other two by toggling a single option (and, after data
89
+ reformatting, switch to a dense network). This offers the potential – via a designed
90
+ experiment – to produce a pragmatic answer to the question of which type of (R)NN
91
+ provides the best fit to the process at hand. Whereas Bianchi, Maiorino, Kampffmeyer,
92
+ Rizzi, & Jenssen (2017) created their experimental design by randomly generating points
93
+ within the design space and then selected the design that led to the minimum error rate,
94
+ we create a full factorial design (treating all factors as categorical to allow arbitrary shape
95
+ in the otherwise (discrete) continuous factor of number of nodes) and then explore the
96
+ behavior of the error rates across the design space with a profiler for the resulting linear
97
+ model for the error rate as a function of the NN settings. Unlike ARIMA or regression
98
+ (including doubly stochastic) modeling approaches for time series, there is a stochastic
99
+ behavior in the predictions made by neural networks due to the use of randomly initialized
100
+ weights. Unless the seed for the software’s random number generator is fixed, repeated
101
+ fitting of the same neural network will lead to different predictions. The amount of
102
+
103
+ variation in the resulting predictions depends on the complexity of the network and on the
104
+ steps that have been taken to avoid overfitting, including early stopping of the optimizer.
105
+ When selecting a model configuration, we will not only want to minimize the expected
106
+ error rate, but also minimize the variability in the error rates. To this end, we seek to
107
+ minimize the upper 95% prediction interval on the testing error rate. The NN study
108
+ proceeds in two phases where a screening experiment first identifies the most useful
109
+ (R)NN, followed by a more comprehensive performance study against common
110
+ forecasting approaches across many more skills.
111
+
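The selection criterion above can be sketched as the mean error plus a prediction margin. The sketch below uses hypothetical error rates, and a normal quantile as a stand-in for the t quantile:

```python
import math
import statistics

def upper_pi_bound(errors, z=1.96):
    """Approximate upper 95% prediction bound for a new replicate's error:
    mean + z * s * sqrt(1 + 1/n); z approximates the t quantile."""
    n = len(errors)
    return (statistics.mean(errors)
            + z * statistics.stdev(errors) * math.sqrt(1 + 1 / n))

# Hypothetical repeated-fit error rates for two candidate configurations:
config_a = [0.081, 0.084, 0.079, 0.083, 0.082]  # similar mean, low spread
config_b = [0.075, 0.095, 0.070, 0.098, 0.072]  # similar mean, high spread
print(upper_pi_bound(config_a), upper_pi_bound(config_b))
```

The tighter configuration wins despite a near-identical mean error, which is the point of minimizing the upper bound rather than the mean alone.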
112
+ Section 2 describes the doubly stochastic model for call volumes and how modifications to
113
+ the originally proposed computational approach can lead to improved convergence.
114
+ Section 3 details how a full factorial design is used to characterize the performance of RNN
115
+ options as a function of five factors (and their interactions) on the resulting short-term
116
+ forecast error rate. Additionally, Section 3 describes the selection of the model
117
+ configuration that leads to the minimum upper bound on the 95% prediction interval for
118
+ the testing error rate. Due to the number of different model configurations that must be run
119
+ along with the computational complexity of RNNs, the first phase discussed in Section 3
120
+ considers only a limited number of skills and validation days. In Section 4, the best
121
+ performing RNNs are run over a larger validation set and over all call center skills to
122
+ compare the performance to the doubly stochastic mixed model approach, and to ARIMA
123
+ as well as Winters smoothing.
124
+
125
+ 2. Stable Settings for Fitting the Mixed Model
126
+
127
+ There are two distinct influences on call volumes that induce a correlation between the
128
+ observed call counts, violating the independence assumption made by ordinary least
129
+ squares regression models that might be used to model the volumes (Ibrahim & L'Ecuyer,
130
+ 2013). Within a given day, some event may lead to more/fewer calls than expected. For
131
+ example, unexpected behavior in the stock market in the morning may lead to an increased
132
+ number of calls for the rest of the day at a financial services contact center. This is intra-
133
+ day correlation. Likewise, there are systemic processes responsible for inter-day
134
+ correlation. Heuristically, if we noticed that the residuals are very large and positive
135
+ throughout the day today caused by a weather event for example, we might also expect a
136
+ larger-than-average call load tomorrow. Ignoring correlation between subsequent
137
+ observations leads to inaccurate standard errors and prediction intervals. In addition,
138
+ although the estimates from a linear regression may be unbiased in the presence of
139
+ correlated residuals, they will not be efficient (Demidenko, 2013).
140
+
141
+ It is typical for call center regression models to include a day-of-week by period-of-day
142
+ interaction (Ibrahim, Ye, L'Ecuyer, & Shen, 2016). In a call center open five days per
143
+ week with 32 half-hour periods per day, this interaction involves 160 fixed-effect
144
+ parameters. In addition, a call center may require forecasting for holidays. Aldor-
145
+ Noiman, Feigin, & Mandelbaum (2009) exclude holidays when training their model;
146
+ however, we cannot ignore these days because some splits operate on holidays and may
147
+ exhibit different behavior on those days. In order to capture this behavior, we include a
148
+ holiday indicator (holiday_ind) by period-of-day interaction effect in the model.
149
+ However, some training data sets may include only a single holiday, leading to high
150
+ variance in the parameter estimates for this effect (each period observation from that one
151
+ day becomes the new estimate for that period during holidays). To reduce the variability
152
+ of these estimates, we combine groups of 3 periods together on holidays. That is, periods
153
+ {1, 2, 3} are assigned p_group = 1, periods {4, 5, 6} are assigned p_group = 2, etc. The
154
+
155
+ p_group*holiday_ind interaction is included in the fixed effect structure as an additive
156
+ effect.
157
+
158
+ Following Aldor-Noiman et al. (2009), we fit a linear mixed model with correlated errors
159
+ to the transformed call counts
160
+ 𝑌 = 𝑿𝛽 + 𝒁𝑏 + 𝜀
161
+ where
162
+
163
+ 𝑌 is the vector of transformed call counts, 𝑌 = √(count + 0.25)
164
+
165
+ 𝑿 is a matrix containing the levels of the fixed effects for each observation
166
+
167
+ 𝛽 is the vector of fixed effects parameters containing a day-of-week*period-of-day
168
+ interaction and a p_group*holiday-indicator interaction
169
+
170
+ 𝒁 is a binary coefficient matrix for the random day-to-day effects in the model.
171
+ There is one column for each day in the data.
172
+
173
+ 𝑏~𝑁(0, 𝑮 ) is the vector of random day-to-day effects. Each unique day in the data
174
+ set is represented by one random effect in b. G follows a first-order autoregressive
175
+ structure, AR(1).
176
+
177
+ 𝜀~𝑁(0, 𝑹 ) is the vector of error terms (residuals), allowing 𝜀 to potentially follow
178
+ an AR(1) process within days. Thus, R is a block-diagonal matrix, with one AR(1)
179
+ block for each day in the data set. This accounts for the potential correlation in
180
+ residuals from proximal periods within days.
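Both AR(1) structures above imply correlations that decay geometrically with lag, Corr = ρ^|Δt|. A small illustration of the implied correlation matrix (illustrative only, not the SAS implementation):

```python
def ar1_corr(times, rho):
    """Correlation matrix with entries rho**|t_i - t_j| (the sp(pow)/AR(1) pattern)."""
    return [[rho ** abs(ti - tj) for tj in times] for ti in times]

# Four consecutive periods within one day; a real R block would be 32 x 32.
R_block = ar1_corr([1, 2, 3, 4], 0.5)
for row in R_block:
    print(row)
```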
181
+ The full model allows for complex correlation structures. However, for some splits (within
182
+ particular training data sets), there may be only sporadic and sparse occurrences of call
183
+ arrivals. This can lead to slow or failed model convergence in some cases. Aldor-Noiman
184
+ et al. (2009) address this by estimating the doubly stochastic model in two steps: first, the
185
+ inter-day correlation (G) is estimated using the aggregated total call counts from each day.
186
+ These parameters are then held constant in a second call to SAS PROC MIXED while 𝛽
187
+ and 𝑹 are estimated.
188
+
189
+ Indeed, PROC MIXED can experience convergence problems when the solutions lie on
190
+ the boundary of the parameter space, such as when variance components are zero (Karl,
191
+ Yang, & Lohr, 2013). However, after making modifications to the default PROC MIXED
192
+ settings, we were reliably able to achieve convergence of the full model with the joint
193
+ optimization of (𝛽, 𝑮, 𝑹) in a single call to PROC MIXED. In this regard, our approach
194
+ differs from that of Aldor-Noiman et al. (2009): we fit all of the model parameters jointly
195
+ (with a single call to PROC MIXED). This will lead to reduced bias in the estimates for
196
+ the models that do converge.
197
+
198
+ We improved convergence rates by changing the convergence criterion used by SAS
199
+ PROC MIXED. By default, SAS ensures that the sum of squared parameter gradients
200
+ (weighted by the current Hessian of the parameter estimates) is sufficiently small.
201
+ However, in the presence of strong correlations in the doubly stochastic model, the
202
+ parameter estimates may lie near the boundary of the parameter space, meaning the
203
+ gradients may not approach 0 with convergence (Demidenko, 2013). As an alternative, we
204
+ declare convergence when the relative change in the loglikelihood between iterations is
205
+ sufficiently small. Additionally, we employ Fisher scoring during the estimation process.
206
+ Fisher scoring is more stable for models with complex covariance structures and can lead
207
+ to better estimates of the asymptotic covariance (Demidenko, 2013). Finally, since our
208
+
209
+ application only uses the call volume point estimates and not the associated standard errors
210
+ or tests of significance, we specify ddfm=residual to avoid spending substantial time
211
+ calculating appropriate degrees of freedom for the approximate F-tests. If confidence or
212
+ prediction intervals are needed, this value should be set to ddfm=kenwardroger2 in order
213
+ to calculate Satterthwaite approximations for the degrees of freedom and to apply the
214
+ Kenward-Roger correction (Kenward & Roger, 2009) to the standard errors. The code for
215
+ our modified approach appears in Figure 1.
216
+
217
+ Figure 1 Modified SAS code for the Doubly Stochastic Model
218
+ The square root transformation is applied to reduce the right skew in the observed call
219
+ volumes, and to stabilize the variance of the observations since quantities such as call
220
+ volumes tend to follow a Poisson distribution. The approach in Figure 1 employs a normal
221
+ approximation of this process. We experimented with fitting a mixed Poisson regression to
222
+ the untransformed call volumes (via PROC GLIMMIX), but found that the run times
223
+ became unfeasibly long (even when using the default pseudolikelihood approach and
224
+ avoiding integral approximation) with no noticeable improvement in error rates.
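The variance-stabilizing effect of √(count + 0.25) can be checked with a small simulation (standard-library Python; the means and sample size are arbitrary choices for illustration):

```python
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    """Draw one Poisson(lam) variate (Knuth's product-of-uniforms method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Raw Poisson counts have variance equal to the mean, so variance grows with
# volume; sqrt(count + 0.25) holds the variance near a constant 0.25.
for lam in (25, 100, 400):
    counts = [poisson(lam) for _ in range(8000)]
    raw_var = statistics.variance(counts)
    trans_var = statistics.variance([math.sqrt(c + 0.25) for c in counts])
    print(lam, round(raw_var, 1), round(trans_var, 3))
```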
225
+
226
+ 3. Choosing Recurrent Neural Network Configurations with a Designed Experiment
227
+
228
+ Generally, neural networks consist of layers of weights and nonlinear activation functions
229
+ that are used to relate inputs (predictors) to outputs (targets). Outputs from each layer are
230
+ passed sequentially to the next layer as an input vector. The complexity of each layer is
231
+ determined by the length of the output vector (number of nodes) it produces. A loss
232
+ function is used to compare the output of the final layer of the neural network to the
233
+ provided targets (e.g. call volumes), and an optimizer function provides updated values of
234
+ the weights of each node that will decrease the resulting loss. The “depth” of the model is
235
+ controlled by the number of layers that are used. This “depth” is the source of the phrase
236
+ “deep learning”. For example, in image processing applications with convolutional neural
237
+ networks, the different layers can be shown to represent different levels of granularity of
238
+ detail in an image (Chollet & Allaire, 2018). Besides the number of layers and the
239
+ number of nodes per layer, there are a number of choices that must be made regarding the
240
+ properties of the optimizer, the distribution of the random initialization of the parameter
241
+ weights, and the shape of the activation function(s).
242
+
243
+ In a traditional, densely connected network, the individual observations are assumed to be
244
+ independent. A simple example using output from JMP Pro 14.1 helps to illustrate.
245
+ Suppose we want to fit a densely connected neural network to predict the standardized
246
+ call count using only the previous day’s standardized call count at the same period (the
247
+ lag-32 of the call count, since there are 32 periods per day in the example) as a predictor
248
+ with one node in one layer, using a hyperbolic tangent activation function. This network
249
+ is shown in Figure 2, with the resulting weights shown in Figure 3.
250
+
251
+ proc mixed data=training_data scoring=50 maxiter=150 maxfunc=10000 convf=1E-6;
252
+ class day_of_week period day_num split p_group;
253
+ by split;
254
+ /* The fixed effects */
255
+ model transf_call_count=day_of_week*period p_group*holiday_ind/
256
+ noint ddfm=residual outp=pred_call_count_output notest;
257
+ /* The day-level random effects */
258
+ /* Note: day_num_copy is not included in the class statement and is numeric */
259
+ random day_num / type=sp(pow)(day_num_copy);
260
+ /* The period-level correlated residuals */
261
+ run;
262
+ Figure 2 Densely connected neural network with one node in one layer.
263
+
264
+ Figure 3 Fitted weights from the network with one node in one layer.
265
+ Suppose that the lag-32 standardized call count is equal to 1 at a given period, t. Then the
266
+ neural network predicts a standardized call volume of
267
+ −0.6354 + 3.3923 ∗ TanH(0.5 ∗ (0.4046 + 0.5323 ∗ 1)) = 0.847
268
+ for the current period, t, where TanH is the hyperbolic tangent function and the 0.5
269
+ parameter is a fixed value. The nonlinear activation function provides the network with
270
+ the flexibility to model nonlinear relationships between the inputs and the response, as
271
+ well as interactions between the inputs.
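The prediction above can be reproduced directly from the fitted weights in Figure 3 (a Python sketch; the fixed 0.5 scaling inside TanH follows the convention described in the text):

```python
import math

def one_node_prediction(lag32):
    # Weights from Figure 3: one hidden node, then the output layer.
    hidden = math.tanh(0.5 * (0.4046 + 0.5323 * lag32))
    return -0.6354 + 3.3923 * hidden

print(round(one_node_prediction(1.0), 3))  # → 0.847
```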
272
+
273
+ We next consider a slightly more complex network with two layers using 2 nodes in the
274
+ first layer and 1 node in the second layer (Figure 4) with parameter estimates shown in
275
+ Figure 5.
276
+
277
+ Figure 4 A densely connected neural network with 2 layers using two nodes in the first
278
+ layer and one node in the second layer.
279
+
280
+ [Figure 3 table — fitted weights for the one-node network:
+ Parameter                              Estimate
+ H1_1:Lag[Standardize[cnt_call], 32]      0.5323
+ H1_1:Intercept                           0.4046
+ Standardize[cnt_call]_1:H1_1             3.3923
+ Standardize[cnt_call]_2:Intercept       -0.6354]
293
+ Figure 5 Fitted weights from the network with two layers.
294
+ Again assuming that the lag-32 standardized call count is 1, the predicted value is
295
+ −0.82 + 4.9517 ∗ TanH(0.5 ∗ (0.2416 − 0.7382 ∗ TanH(0.5 ∗ (−0.3633 − 0.8785 ∗ 1))
+ + 0.3057 ∗ TanH(0.5 ∗ (−0.0824 + 0.4124 ∗ 1)))) = 0.843
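The two-layer prediction can be checked the same way from the Figure 5 weights (Python sketch):

```python
import math

def two_layer_prediction(lag32):
    # First hidden layer (two nodes); weights from Figure 5.
    h2_1 = math.tanh(0.5 * (-0.3633 - 0.8785 * lag32))
    h2_2 = math.tanh(0.5 * (-0.0824 + 0.4124 * lag32))
    # Second hidden layer (one node) combines the first-layer outputs.
    h1_1 = math.tanh(0.5 * (0.2416 - 0.7382 * h2_1 + 0.3057 * h2_2))
    return -0.8200 + 4.9517 * h1_1

print(round(two_layer_prediction(1.0), 3))  # → 0.843
```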
299
+ In the densely connected network, each observation is processed independently and there
300
+ is no “memory” of what happened in the previously processed observation. In time series
301
+ applications, however, there is a temporal ordering that the data are recorded in, and there
302
+ may be correlation between nearby observations. For example, a spike or drop in call
303
+ volume might persist over several periods. To address this potential, recurrent neural
304
+ networks record information when fitting each observation that is then provided as a
305
+ model input when fitting later observations.
306
+
307
+ In a simple (Elman) RNN layer, the output from each node (the output of the TanH
308
+ functions, referred to as the “state”) is recorded and stored, and used as an input for the
309
+ same node when processing the next observation. Note that there is one state recorded for
310
+ each node in the layer. For the single layer network example (Figure 3), the state was
311
+ calculated as 𝑠𝑡 = TanH(0.5 ∗ (0.4046 + 0.5323 ∗ 1)) = 0.44 when the lagged
312
+ standardized call volume was equal to 1. A simple RNN learns an extra parameter (say,
313
+ u) to act as a coefficient for the stored state, and the activation function TanH(0.5 ∗
314
+ (𝑤0 + 𝑤1 ∗ 𝑋𝑡)) that is used by the dense network would be replaced by 𝑠𝑡 =
315
+ TanH(0.5 ∗ (𝑤0 + 𝑤1 ∗ 𝑋𝑡 + 𝑢 ∗ 𝑠𝑡−1)) in order to fit a simple RNN. The LSTM and
316
+ GRU RNNs also use the recorded state when making predictions for the current time
317
+ period, along with products of additional activation functions that are designed to carry
318
+ state information further in time. Details of these additional structures are explained in
319
+ Section 3 of Bianchi et al. (2018). For our purposes it is sufficient to note that the GRU
+ network is extremely similar to the LSTM, albeit less complex due to the omission of a
+ group of parameters. Chollet & Allaire (2018) remark that Google Translate currently runs
322
+ using an LSTM with seven large layers.
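The Elman recurrence described above can be sketched in a few lines, reusing the one-node weights from Figure 3; the state coefficient u here is a hypothetical value for illustration, not an estimate from the paper:

```python
import math

def elman_states(xs, w0=0.4046, w1=0.5323, u=0.3, s0=0.0):
    # s_t = TanH(0.5 * (w0 + w1*x_t + u*s_{t-1})): the stored state from
    # the previous timestep re-enters the node with coefficient u.
    states, s = [], s0
    for x in xs:
        s = math.tanh(0.5 * (w0 + w1 * x + u * s))
        states.append(s)
    return states

# With u = 0 the first state matches the dense network's hidden output (~0.437).
print(round(elman_states([1.0], u=0.0)[0], 3))  # → 0.437
```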
323
+ It is not clear a priori which of these four neural networks is most appropriate for a
324
+ particular call center. Furthermore, it is possible that each of these networks may have a
325
+ different optimal depth and structure when applied to the call center data. A designed
326
+ experiment is run to identify the optimal model type and structure.
327
+
328
+
329
+ [Figure 5 table — fitted weights for the two-layer network:
+ Parameter                              Estimate
+ H2_1:Lag[Standardize[cnt_call], 32]     -0.8785
+ H2_1:Intercept                          -0.3633
+ H2_2:Lag[Standardize[cnt_call], 32]      0.4124
+ H2_2:Intercept                          -0.0824
+ H1_1:H2_1                               -0.7382
+ H1_1:H2_2                                0.3057
+ H1_1:Intercept                           0.2416
+ Standardize[cnt_call]_1:H1_1             4.9517
+ Standardize[cnt_call]_2:Intercept       -0.8200]
+
+ 3.1 Data for the Experiment
350
+ We analyze call volumes aggregated into 30 minute periods from 3 different large-volume
351
+ skills in an operational call center for the months of March-June 2018. All of the models
352
+ under consideration are used to forecast next-day call volumes using 5 weeks of training
353
+ data. Each day contains 32 30-minute periods during which the call center is operating.
354
+ This application considers the Monday through Friday behavior of the call skills. Due to
355
+ the use of the one-week lagged observations as a predictor in the neural networks, the first
356
+ week of training data is not included in the predictor matrix (since the prior week’s call
357
+ volumes are unknown), meaning each training data set consists of 4*5*32=640
358
+ observations. The designed experiment in this section will evaluate the methods using a
359
+ holdout period of the five one-day-ahead predictions during the last week of June using the
360
+ three largest skills from the call center. Section 4 will then fit a reduced set of models over
361
+ all skills for 60 day-ahead predictions.
362
+
363
+ 3.2 Neural Network Input Factors
364
+ Each network makes use of the one-hot encoding (via a binary indicator matrix) of the day
365
+ of week and the one-hot encoding of period of day. Furthermore, to capture the day- and
366
+ week-long correlations, the networks are also fed the call volumes for the same period in
367
+ the previous day when modeling the current day, as well as the call volumes for the same
368
+ period on the same day of last week. One-period lagged call volumes are not included, as
369
+ this is the purpose of the within-sequence memory of the RNNs. Other inputs include a
370
+ binary indicator for whether the current day is a holiday, a binary indicator for whether
371
+ yesterday was a holiday, a binary indicator for whether or not last week’s observation was
372
+ recorded on a holiday, and day number. The day number is a continuous counter for the
373
+ number of the given day in the data set, which would potentially allow the neural network
374
+ to detect trends across time. All told, these account for 42 vectors of input for the neural
375
+ networks. There is no need to create indicator columns for interactions between day of
376
+ week and period (or any other factors) as the neural network will automatically detect them.
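A minimal sketch of how one such predictor row could be assembled; the exact column layout, including the use of Monday as the baseline level to arrive at 42 columns, is our assumption rather than the paper's documented encoding:

```python
def input_row(day_of_week, period, lag_day, lag_week,
              holiday, holiday_yesterday, holiday_lastweek, day_number):
    # Day-of-week dummies for Tue-Fri (Monday as baseline: 4 columns)
    # plus 32 period-of-day indicator columns.
    dow = [1.0 if i == day_of_week else 0.0 for i in range(1, 5)]
    per = [1.0 if i == period else 0.0 for i in range(32)]
    # Two lagged volumes, three holiday indicators, and the day counter
    # bring the total to the 42 input vectors mentioned in the text.
    return dow + per + [lag_day, lag_week,
                        float(holiday), float(holiday_yesterday),
                        float(holiday_lastweek), float(day_number)]

row = input_row(day_of_week=2, period=7, lag_day=118.0, lag_week=125.0,
                holiday=0, holiday_yesterday=0, holiday_lastweek=0, day_number=36)
print(len(row))  # → 42
```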
377
+
378
+ Bianchi et al.'s (2017) application of hourly call volumes displays strong lag-24 correlation,
379
+ representing a period-of-day effect. They remove this seasonality by differencing the data
380
+ at lag-24. By contrast, we do not difference the call volumes, but instead include period-
381
+ of-day (along with day-of-week) as exogenous variables and allow the neural network to
382
+ detect this seasonality. This approach allows the network to detect the expected interactions
383
+ between period-of-day and day-of-week, as well as any other input factors.
384
+
385
+ While these input vectors are included for all models, there are three final input vectors
386
+ whose (joint) inclusion is treated as an experimental factor: the same-period predictions
387
+ from the mixed model approach (Aldor-Noiman, Feigin, & Mandelbaum, 2009), from a Winters
389
+ smoothing model, and from a seasonal ARIMA(1, 0, 1)(0, 1, 1)160 model. The inclusion
390
+ of the predictions from these models as an input to the neural network is an original
391
+ approach that gives the network the opportunity to form predictions that may be thought
392
+ of as corrections to those from these traditional models, based on potential interactions
393
+ with other included factors. For brevity, we refer to this as the mixed.cheat option, since it
394
+ allows the neural networks to “cheat” by looking at the predictions generated by these other
395
+ three models when forming its own predictions for the same time periods. If none of the
396
+ other 42 input vectors were included with these three, this would represent a supervised
397
+ learning approach to forming a dynamically weighted average of these three model
398
+ predictions in order to create a single “bagged” prediction.
399
+
400
+
401
+ 3.3 Network Configuration Aspects Treated as Experimental Factors
402
+ The designed experiment considered five different factors of structural settings of the
403
+ neural networks: model.type {dense, simple (Elman) RNN, GRU, LSTM}, nlayers {1,
404
+ 2}, nnodes (per layer) {25, 50, 75, 100}, kernel.L2.reg {0, 0.0001} and mixed.cheat
405
+ {FALSE, TRUE}. The L2 regularizer adds kernel.L2.reg times the square of each weight
+ coefficient to the total loss function for the network. Similar to ridge regression, this
+ helps bound the
407
+ magnitude of the model coefficients and could potentially help prevent overfitting the
408
+ training data.
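As a concrete illustration of the penalty, here is a plain-Python sketch of the standard L2 term that a kernel regularizer adds to the data loss (toy values throughout):

```python
def l2_penalty(weights, reg=0.0001):
    # The penalty added to the data loss: reg times the sum of squared
    # kernel weights (ridge-style shrinkage).
    return reg * sum(w * w for w in weights)

data_loss = 0.75                 # e.g. mean absolute error on a batch
weights = [0.5, -1.2, 3.0]       # toy kernel weights
total_loss = data_loss + l2_penalty(weights)
print(round(l2_penalty(weights), 6))  # → 0.001069
```

Large weights are thus discouraged in proportion to their squared magnitude, which bounds the coefficients without forcing them exactly to zero.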
409
+
410
+ We fit a full factorial design (requiring 128 runs) for these five factors, which allows us
411
+ to test for the presence of up to five-way interactions between the five factors. Figure 6
412
+ shows the first 6 runs of the design. The design is replicated over the 3 largest splits and
413
+ across the 5 subsequent one-day ahead forecasts. This produces a total of 1920 runs for
414
+ the entire experiment. The replication allows for behavior to be averaged over different
415
+ days and for a more detailed exploration of how the variance of the error rates depends on
416
+ each factor.
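The 128-run factorial (before replication over splits and days) is simply the Cartesian product of the factor levels; a Python sketch:

```python
from itertools import product

levels = {
    "model.type": ["dense", "layer_simple_rnn", "layer_gru", "layer_lstm"],
    "nlayers": [1, 2],
    "nnodes": [25, 50, 75, 100],
    "kernel.L2.reg": [0.0, 0.0001],
    "mixed.cheat": [False, True],
}

# One dict per run: 4 * 2 * 4 * 2 * 2 = 128 factor combinations.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))          # → 128
print(len(runs) * 3 * 5)  # → 1920 with 3 splits and 5 forecast days
```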
417
+
418
+ Figure 6 First 6 runs of the designed experiment
419
+ 3.4 Static Considerations for Neural Network Configuration
420
+ The input for the classical dense neural network is a 640x42 matrix (640x45 if
421
+ mixed.cheat=TRUE). The dense network does not consider the temporal ordering of the
422
+ 640 observations (outside of the explicit inclusion of the day- and week-lagged
423
+ observations as inputs): the observations are shuffled after each epoch and then processed
424
+ in batches (we used a batch size of 32).
425
+
426
+ By contrast, the RNNs consider the ordering of the observations. The input for the RNNs
427
+ is a 20x32x42 array. This indicates to the RNN that there are 20 batches (days) of 32
428
+ timesteps (periods) with 42 predictors per time step. By default, the batches are treated
429
+ independently and the timesteps within each batch are potentially correlated (via the
430
+ persistence of the states in the RNN). If the batches themselves are presented in a
431
+ temporal order (as is the case in our application), then this can be indicated to the Keras
432
+ model via the STATEFUL=TRUE option and by disabling the shuffling of batches
433
+ during training. This retains the model weights from batch-to-batch (day-to-day) to allow
434
+ for possible long-term behavior. However, we found that using the STATEFUL option
435
+ led to a failure to converge in some skill*day combinations and the resulting validation
436
+ error rates were not significantly different from those generated without the STATEFUL
437
+ option. This seems to indicate that the dependence on prior days’ behavior is already
438
+ captured by the inclusion of the one-day and one-week lagged observations. Due to the
439
+ occasional convergence issues, the results in the sequel are generated with
440
+ STATEFUL=FALSE (and batch shuffling enabled during training). It would have also
441
+ been possible to fit the model with week- or month-long batches by training the RNN on
442
+ a 4x160x42 or a 1x640x42 array. We did not consider the week-long batches, but the
443
+
444
+ [Figure 6 table — first 6 runs of the design:
+ model.type        nlayers  mixed.cheat  nnodes  kernel.L2.reg
+ layer_simple_rnn  2        FALSE        50      0.0001
+ layer_gru         —        FALSE        25      0
+ layer_lstm        1        TRUE         100     0
+ layer_gru         —        TRUE         75      0.0001
+ layer_gru         —        TRUE         75      0
+ layer_simple_rnn  —        FALSE        100     0]
+ month-long batches tended to produce inferior predictions to the day-long batches. This
473
+ could possibly be due to the lack of long term correlations in the data and the fact that the
474
+ smaller batches allow for the shuffling of the ordering that the days are fed through the
475
+ gradient-optimization routine of the neural network, which can improve the model fit by
476
+ preventing the model from overweighting the first observations that are provided to the
477
+ network in each epoch (Chollet & Allaire, 2018).
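The array shapes described above can be sketched with plain Python lists (shapes only; the values here are placeholders):

```python
def to_batches(rows, timesteps):
    # Reshape a flat [n_obs][n_features] matrix into
    # [n_batches][timesteps][n_features] for an RNN.
    assert len(rows) % timesteps == 0
    return [rows[i:i + timesteps] for i in range(0, len(rows), timesteps)]

flat = [[0.0] * 42 for _ in range(640)]    # 640 observations x 42 predictors
days = to_batches(flat, 32)                # 20 x 32 x 42: day-long batches
weeks = to_batches(flat, 160)              # 4 x 160 x 42: week-long batches
months = to_batches(flat, 640)             # 1 x 640 x 42: month-long batch
print(len(days), len(weeks), len(months))  # → 20 4 1
```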
478
+
479
+ Note that the model weights are reset and the model is retrained for each skill. This is in
480
+ contrast to the approach taken by Zhu & Laptev (2017) in which a single network is fit to
481
+ accommodate disparate behavior from different cities. As discussed in Section 5, a single
482
+ multi-output network could potentially be built to model all of the skills at once.
483
+
484
+ While they were not included as experimental factors in this application, we also noticed
485
+ a significant relationship between the quality of the predictions and the optimization
486
+ routine employed. Extensive pilot experimentation led to our use of the AMSGrad variant
487
+ (Reddi, Kale, & Kumar, 2018) of the Adam optimizer (Kingma & Ba, 2014), with a
488
+ learning rate decay of 0.0001. We would recommend including both the optimizer and
489
+ the optional learning rate decay as factors in the designed experiment for parameter
490
+ tuning in future problems.
491
+
492
+ We found it was important to tune the number of epochs for each model fit using
493
+ validation data (the last week in the training set) by first fitting 500 epochs for each
494
+ model, taking a moving average (with a window size of 10 epochs) of the resulting
495
+ WAPEs on the validation data, and then refitting the model (on both the training and the
496
+ validation data in order to predict an additional day which was held out as a test set) with
497
+ the number of epochs that produced the minimum validation WAPE. The moving
498
+ average is important due to the volatile and non-monotonic behavior we observed in the
499
+ individual recorded WAPEs and helps to find a relatively stable region. Consistent with
500
+ previous findings (Bianchi, Maiorino, Kampffmeyer, Rizzi, & Jenssen, 2017), we noticed
501
+ that the RNNs take many more epochs to converge than the dense neural network. Other
502
+ researchers (such as Bianchi et al.) have used more than 500 epochs when fitting RNNs,
503
+ so this upper bound should also be considered as an important factor when building an
504
+ RNN.
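The epoch-selection rule described above (smooth the per-epoch validation WAPEs with a 10-epoch moving average, then refit with the minimizing epoch count) can be sketched as:

```python
def best_epoch(val_wapes, window=10):
    # Smooth the per-epoch validation WAPEs with a trailing moving
    # average, then return the (1-based) epoch at the smoothed minimum.
    smoothed = [sum(val_wapes[i - window + 1:i + 1]) / window
                for i in range(window - 1, len(val_wapes))]
    best = min(range(len(smoothed)), key=smoothed.__getitem__)
    return best + window

# A volatile, non-monotonic validation curve: smoothing keeps a single
# lucky epoch from being selected.
wapes = [0.12, 0.10, 0.11, 0.09, 0.10, 0.08, 0.09, 0.07, 0.08, 0.07,
         0.06, 0.07, 0.05, 0.06, 0.07, 0.08, 0.09, 0.08, 0.09, 0.10]
print(best_epoch(wapes))
```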
505
+
506
+ While we also experimented with recurrent dropout (Gal & Ghahramani, 2015) to
507
+ prevent overfitting, it led to degraded performance in the early iterations of our
508
+ experiment and we removed it from consideration. However, this could simply be due to
509
+ features of this particular application, such as the relatively short training period of five
510
+ weeks: Chollet & Allaire (2018) strongly advocate the use of dropout and recurrent
511
+ dropout.
512
+
513
+ The models experienced improved error rates after switching the kernel initializer for the
514
+ random weights to the He normal initializer (He, Zhang, Ren, & Sun, 2015). We used the
515
+ relu activation function exclusively, although this choice could also impact the quality of
516
+ the resulting model fit. And while our application did not detect any two-factor
517
+ interactions among the five experimental factors, it is possible that some of these
518
+ additional factors could depend on the type of (R)NN being used, meaning there would
519
+ be an interaction between these factors and model.type.
520
+
521
+ A final contributing factor is the batch size. This determines how many observations are
522
+ processed before the gradients are updated: when fitting Keras models on a GPU, the amount
523
+
524
+ of memory available on the GPU can be a limiting factor on the batch size. This choice is
525
+ more constrained in the RNNs, where each batch is a single day/week/month (determined
526
+ by the number of timesteps specified in the input array to the RNN). By contrast, the
527
+ batch size for a dense network can be set between 1 and the number of observations in
528
+ the training data. We noticed significant differences in the error rates from the dense
529
+ model depending on what batch size was used.
530
+
531
+ 3.5 Experimental Response
532
+ While mean squared error (MSE) is frequently used to evaluate and compare predictive
533
+ models, this is a poor metric for the call center application as it will give undue focus to
534
+ the low call volume periods at the beginning and end of each day. Instead, the weighted
535
+ absolute percentage error (WAPE) is recommended for call volume modeling (Ibrahim,
536
+ Ye, L'Ecuyer, & Shen, 2016). This weights the absolute percentage error in each period
537
+ by the number of calls received in that period and is defined by
538
+ WAPE = ( Σᵢ₌₁ⁿ |Yᵢ − Ŷᵢ| ) / ( Σᵢ₌₁ⁿ Yᵢ )
548
+ where 𝑌𝑖 and 𝑌̂𝑖 are the observed and predicted volumes, respectively, for each 30 minute
549
+ period 𝑖 = 1, … , 𝑛. Because WAPE is our metric of interest, the models are compiled to
550
+ use a mean absolute error loss-function, which minimizes the numerator of the WAPE
551
+ (the denominator is static).
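WAPE and its relation to the mean-absolute-error loss can be sketched as (Python, with illustrative call volumes):

```python
def wape(actual, predicted):
    # Weighted absolute percentage error: total absolute error divided
    # by total observed call volume, so low-volume periods are not
    # over-weighted the way plain percentage errors would be.
    num = sum(abs(y - yhat) for y, yhat in zip(actual, predicted))
    return num / sum(actual)

calls = [10, 40, 80, 60, 20]         # observed volumes per period
preds = [12, 38, 85, 55, 21]
print(round(wape(calls, preds), 4))  # → 0.0714
```

Training with a mean absolute error loss minimizes the numerator directly, since the denominator is fixed by the observed data.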
552
+
553
+ For each row of the experimental table (Figure 6), the specified neural network is fit
554
+ (independently) to model five subsequent days of each split. That is, day 1 is predicted
555
+ using the previous 5 weeks leading up to day 1, then the model is reset and day 2 is
556
+ predicted using the 5 weeks leading up to day 2, etc. The vector of five one-day ahead
557
+ predictions is compared with the observed call volumes (that were not visible to the
558
+ model during training), and the resulting WAPE is recorded.
559
+
560
+ 3.6 Analysis of Experimental Results
561
+ Figure 7 gives a typical output of the prediction error across the different formulations of
562
+ the neural networks. GRU often has the lowest forecast error or at least is consistently
563
+ close to the lowest. The other procedures tend to have much more unstable performance
564
+ based on the choice of nodes and layers as well as across the days and splits.
565
+
566
+ Figure 7 Forecast error by model type and settings for forecast day 5 split 3
567
+ Regression analysis is used to determine the statistically significant factors and
568
+ interactions. In order to control a heavy right-skew in the recorded WAPEs, an inverse
+ transformation is applied to use as the response in the regression models. Figure 12
+ displays the original skewness and the transformation to normality after taking the
+ inverse of the WAPEs.
+
+ [Figure 7 table — mean WAPE by model type, nlayers, and nnodes:
+ nlayers  nnodes  NN Classic  RNN GRU  RNN LSTM  RNN Simple
+ 1        25      6.7%        6.2%     8.5%      6.0%
+ 1        50      7.1%        6.2%     6.5%      6.7%
+ 1        75      7.1%        5.8%     6.8%      5.9%
+ 1        100     7.0%        6.4%     6.9%      6.8%
+ 2        25      7.5%        6.6%     8.1%      6.9%
+ 2        50      7.3%        5.8%     6.9%      7.0%
+ 2        75      7.1%        6.0%     6.6%      6.5%
+ 2        100     8.0%        5.9%     7.5%      6.0%]
628
+
629
+ 3.7 Loglinear Variance Regression Model
630
+ The mean structure of 1/WAPE is modeled by including the main effects of the five
631
+ experimental factors (model.type, nlayers, mixed.cheat, nnodes, kernel.L2.reg) and all of
632
+ the interactions. In addition, split, file (which represents the validation day), and split*file
633
+ are included as blocking factors.
634
+
635
+ While an ordinary regression model for the full factorial design would assume the same
636
+ error variance for all responses across the design space, the loglinear variance model
637
+ allows for the error variance itself to be modelled as a function of the input factors. This
638
+ is appealing for this application, where parsimonious networks may be expected to
639
+ demonstrate less variability during repeated fittings.
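The idea can be illustrated with a toy sketch for a single categorical factor, where the fitted mean model reduces to per-level means and the log-variance model to per-level log residual variances (this is an illustration only, not the REML-based fit that JMP performs):

```python
import math
from statistics import mean, pvariance

def loglinear_variance_one_factor(groups):
    # With one categorical factor, the mean model is the per-level mean
    # and the loglinear variance model is the per-level log of the
    # residual variance.
    return {level: {"mean": mean(ys), "log_var": math.log(pvariance(ys))}
            for level, ys in groups.items()}

# Hypothetical 1/WAPE responses: similar means, very different spread.
data = {"layer_gru":  [16.1, 15.8, 16.4, 16.0],
        "layer_lstm": [14.0, 18.5, 12.2, 16.9]}
fit = loglinear_variance_one_factor(data)
print(fit["layer_gru"]["log_var"] < fit["layer_lstm"]["log_var"])  # → True
```

Two configurations with comparable mean performance can thus be separated by the variability of their repeated fits, which is exactly what the variance half of the model captures.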
640
+
641
+ For each factor setting, the loglinear variance model produces a predicted mean of
642
+ 1/WAPE and a predicted standard deviation of 1/WAPE, representing the variability due
643
+ to the random initial weighting of the NN.
644
+ We include model.type, nlayers, mixed.cheat, nnodes, kernel.L2.reg, file, and split as
645
+ factors in the loglinear variance model (using JMP Pro 14.1), which models the log of the
646
+ error variance as a linear combination of these factors (which are all treated as
647
+ categorical). With the exception of kernel.L2.reg, all of these effects are found to be
648
+ significantly associated with the error variance (Figure 8).
649
+
650
+ Figure 8 Factors with a significant impact on WAPE variability
651
+ For the mean model, model.type, nlayers, mixed.cheat, nnodes, day, split, and file*split
652
+ were found to be significantly associated with 1/WAPE (Figure 9). Neither kernel.L2.reg
653
+ nor any of the interaction terms involving the NN architecture were found to be
654
+ significant.
655
+
656
+
657
+
658
+ [Figure 8 table — Variance Effect Likelihood Ratio Tests:
+ Source          Test Type    DF  ChiSquare  Prob>ChiSq
+ model.type      Likelihood   3   114.126    <.0001*
+ nlayers         Likelihood   1   4.6333     0.0314*
+ mixed.cheat     Likelihood   1   53.6884    <.0001*
+ nnodes          Likelihood   3   16.5624    0.0009*
+ kernel.L2.reg   Likelihood   1   0.0042     0.9482
+ file            Likelihood   4   36.8213    <.0001*
+ split           Likelihood   2   56.3123    <.0001*]
699
+ Figure 9 Factors that are significantly associated with the mean WAPE
700
+ Removing the insignificant interaction terms (but allowing kernel.L2.reg to remain in
701
+ both the mean and variance models) produces the profiler shown in Figure 10. Notice the
702
+ large standard deviation associated with the LSTM model. The dependence on file and
703
+ split is not shown, since there are no interactions modeled between these and the other
704
+ experimental factors. While we want to maximize 1/WAPE, we also want to minimize
705
+ the standard deviation of 1/WAPE. That is, we would like to maximize the lower 95%
706
+ prediction interval of 1/WAPE (or minimize the upper 95% PI of WAPE) in order to find
707
+ the model configuration that will give a combination of relatively good results on average
708
+ while being protected against large errors.
709
+
710
+ [Figure 9 table — Fixed Effect Tests (abridged):
+ Source          DFDen   F Ratio    Prob > F
+ model.type      796.5   44.6120    <.0001*
+ nlayers         1258    5.5213     0.0189*
+ mixed.cheat     1258    17.0652    <.0001*
+ nnodes          786.2   4.5600     0.0036*
+ kernel.L2.reg   1258    0.0026     0.9590
+ file            668.1   1258.316   <.0001*
+ split           857.3   1867.911   <.0001*
+ file*split      746.8   244.5502   <.0001*
+ All interactions among the five experimental factors: F ≤ 2.34, Prob > F ≥ 0.12]
916
+ Figure 10 Mean and Standard Deviation of 1/WAPE as a function of the experimental
917
+ factors. Goal is to maximize the top (1/mean forecast error) and minimize the bottom
918
+ (standard deviation)
919
+ Figure 11 plots the upper 95% prediction interval for WAPE against the experimental
920
+ factors. The model configuration that minimizes the upper 95% PI of WAPE is a GRU
921
+ model that is allowed to use the MIXED, ARIMA, and Winters predictions as inputs,
922
+ with 2 layers and 50 nodes per layer and an L2 penalty of 0.0001 on the kernel weights.
923
+ However, the L2 penalty was not found to be significant in either the mean or the
924
+ variance models and the contribution of nlayers is relatively flat.
925
+
926
+ Figure 11 Upper 95% Prediction Interval for WAPE
927
+ To summarize, the designed experiment provided insight to effectively select the
928
+ appropriate levels across several neural network models and parameters. The goal is to
929
+ balance model performance in minimizing forecast error (that is, maximizing 1/forecast
930
+ error) and minimizing the variance of this forecast error. Figure 10 and Figure 11 are
931
+ interpretable across all factors and settings as displayed due to the absence of significant
932
+ interactions between the factors. The top half of Figure 10 displays the reciprocal forecast
933
+ error (the larger the value the better) which indicates preferred settings of GRU or Elman
934
+ (simple) for the model, 1 layer, using the mixed model forecasts, and 25 nodes while the
935
+ selection of L2.kernel makes little difference. The lower half of Figure 10 displays the
936
+ variance (lower is better) where the traditional neural net would be preferred along with 2
937
+ layers, using mixed model forecasts, with 50 nodes while robust to the regularization
+ parameter value. Note that the traditional dense neural network does have consistently
+ worse forecast error across all scenarios and could not be recommended despite having
+ the lowest variance. The prediction interval on forecast error is an alternative view that is
+ preferred by many practitioners where the goal is to minimize the width. Figure 11 (lower
+ is better) clearly shows GRU is the preferred solution using the mixed model forecasts
+ with 50 nodes.
+
+ [Figure 10 profiler — predicted mean of 1/WAPE 28.87 (95% interval 27.90 to 29.84) at
+ the settings layer_gru, nlayers=2, mixed.cheat=TRUE, nnodes=50, kernel.L2.reg=0.0001,
+ with the predicted standard deviation of 1/WAPE shown in the lower panel.]
+ [Figure 11 profiler — upper 95% prediction interval for WAPE, minimized at 0.0406 at
+ the same settings.]
1005
+
1006
+
1007
+ Figure 12 Representative distribution of WAPE and 1/WAPE resulting from the designed
1008
+ experiment
1009
+
1010
+ 4. Comprehensive Performance Study
1011
+
1012
+ Based on the results of the designed experiment for NN error rates, we perform a further
1013
+ study with actual call center data across 36 skills rather than only 3. We consider a
1014
+ consistent 5-week training period advancing across multiple months of data to produce 60
1015
+ one-day ahead predictions. These predictions do not use the actual data for the forecasted
1016
+ day during model training and can be viewed as a validation set for the trained models.
1017
+ This allows us to compare the relative performance of the mixed model and the neural
1018
+ networks. We also include the error rates from the ARIMA and Winters seasonal
1019
+ smoothing models for comparison.
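The evaluation scheme above (a fixed 5-week training window advanced one day at a time to produce 60 one-day-ahead forecasts, with the forecast day never used in training) can be sketched as a rolling-origin loop. This is our own illustration, not code from the paper; model fitting is abstracted behind a placeholder function:

```python
def wape(actual, forecast):
    """Weighted absolute percentage error: total absolute error
    divided by total actual volume (lower is better)."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

def rolling_one_day_ahead(series, fit_and_forecast, train_days=35, n_eval=60):
    """Advance a fixed-length (5-week = 35-day) training window one day
    at a time and score each one-day-ahead forecast; the forecast day is
    never seen during training, so the 60 scores act as a validation set."""
    errors = []
    for start in range(n_eval):
        train = series[start:start + train_days]
        actual = series[start + train_days]        # the held-out day
        forecast = fit_and_forecast(train)         # any model: mixed, ARIMA, RNN, ...
        errors.append(wape([actual], [forecast]))
    return errors
```

The same loop can score every competing model on identical training/validation pairs, which is what makes the per-day comparisons later in this section paired.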
1020
+
1021
+ Though we could have simply chosen the GRU RNN with 50 nodes on each of 2 layers
1022
+ with kernel.L2.reg complexity parameter set to 0.0001 as the only representative RNN
1023
+ based on the designed experiment, we decided to include all 4 neural network methods
1024
+ screened by using these same parameter settings. It is possible with the added skills –
1042
+ with many having much lower call volumes – that another RNN model could work better
1043
+ than GRU.
1044
+
1045
+ 4.1 Results: One-day ahead Predictions Over 60 Separate Validation Days
1046
+ Figure 13 provides the forecast errors averaged across all 60 days for the 12 splits with
1047
+ the highest call volume sorted in descending call volume order. Note that results in this
1048
+ section are generated by the (R)NNs with the mixed.cheat option disabled. The Doubly
1049
+ Stochastic Mixed Model has the lowest average forecast error for all but Split 5540 and is
1050
+ often significantly lower than the competitors. Winters Seasonal Exponential Smoothing
1051
+ also performs quite well given its relative simplicity. The GRU performance confirms the
1052
+ results from the designed experiment as usually the best RNN and always competitive
1053
+ with the best. The highly complex LSTM recurrent neural network has very large forecast
1054
+ errors for many of these splits and cannot be recommended. Note also that forecast error
1055
+ generally increases for all methods as call volume decreases.
1056
+
1057
+ Figure 13 Average WAPE forecast errors across 60 separate validation days for high
1058
+ call-volume splits
1059
+ Figure 14 shows the similar trend of increasing error rates with decreasing call volumes
1060
+ for the medium call volume splits. The GRU recurrent neural network is usually
1061
+ outperforming the other neural network methods and is closer to the error rates of the
1062
+ Doubly Stochastic Mixed Model.
1063
+
1064
+ Figure 14 Average WAPE forecast errors across 60 separate validation days for medium
1065
+ call-volume splits
1066
+ For the low call volume splits displayed in Figure 15, the GRU and Simple recurrent
1067
+ neural networks perform similarly and slightly better than Doubly Stochastic and Winters
1068
+ for most of the splits. The very low call volumes (last 3 splits) do seem to benefit from
1069
+ the recurrent neural network formulation.
1070
+
1071
+ Table (Figure 13, high call-volume splits):
+ Split      Sum Call Vol  ARIMA  Doubly Stoch  NN_Classic  RNN_GRU  RNN_LSTM  RNN_Simple  Winters
+ 5000&5240  929754        9.0%   8.4%          10.2%       11.2%    38.0%     13.9%       8.3%
+ 5400&5570  461256        7.4%   6.6%          8.8%        8.4%     26.1%     14.3%       7.1%
+ 5620       162996        8.6%   7.7%          9.6%        9.1%     28.7%     9.6%        8.4%
+ 5660       137759        10.2%  10.1%         13.2%       12.0%    17.1%     14.0%       10.5%
+ 5020       71930         11.5%  11.2%         14.5%       12.3%    27.3%     14.2%       11.7%
+ 5630       65079         15.4%  14.9%         16.7%       15.5%    26.4%     15.5%       14.1%
+ 5840       63984         13.1%  12.1%         14.6%       13.4%    25.3%     12.9%       13.0%
+ 5200       48728         13.2%  13.4%         15.6%       13.5%    52.4%     13.5%       13.6%
+ 5540       38236         15.9%  15.8%         16.8%       16.3%    28.5%     16.6%       16.0%
+ 5670       34793         14.0%  13.2%         15.6%       14.2%    27.3%     13.9%       14.0%
+ 6500       30534         16.8%  16.4%         18.5%       17.3%    23.0%     18.0%       17.1%
+ 5260       23849         21.9%  20.9%         22.7%       20.3%    32.5%     20.2%       21.2%
+
+ Table (Figure 14, medium call-volume splits):
+ Split  Sum Call Vol  ARIMA  Doubly Stoch  NN_Classic  RNN_GRU  RNN_LSTM  RNN_Simple  Winters
+ 5460   20922         18.2%  18.1%         19.5%       18.1%    26.0%     17.7%       18.2%
+ 5410   19461         19.8%  19.2%         21.4%       20.0%    59.1%     20.8%       19.7%
+ 6350   16874         30.8%  26.8%         29.3%       28.7%    31.1%     29.0%       29.4%
+ 5060   16765         19.3%  19.1%         20.7%       19.6%    24.4%     19.6%       19.6%
+ 5650   9911          23.8%  23.4%         24.6%       24.0%    27.3%     24.3%       24.0%
+ 5030   8102          40.0%  26.0%         28.5%       27.8%    33.2%     27.7%       26.9%
+ 5680   7525          27.3%  27.1%         28.2%       26.7%    31.9%     27.8%       27.7%
+ 5440   7402          28.5%  28.1%         30.3%       28.3%    33.8%     29.7%       29.2%
+ 5070   6446          34.6%  34.5%         34.6%       33.5%    43.6%     35.0%       34.7%
+ 5420   5247          35.0%  34.6%         36.2%       33.9%    38.1%     34.7%       35.0%
+ 5899   4844          36.4%  35.8%         37.0%       35.2%    64.8%     36.6%       36.7%
+ 5100   4019          34.7%  34.0%         36.6%       34.3%    40.1%     35.5%       34.8%
1304
+ Figure 15 Average WAPE forecast errors across 60 separate validation days for low
1305
+ call-volume splits
1306
+ Overall, the best performing procedures were the Doubly Stochastic Mixed Model and
1307
+ the GRU Recurrent Neural Network. Figure 16 shows forecast error by split sorted by
1308
+ call volume. Generally, the mixed model does better for large and medium volume splits
1309
+ (lower is better on the graph) while the RNN
1310
+ model is more effective for the small volume—particularly the very small volume splits.
1311
+
1312
+
1313
+ Figure 16 Forecast error rates ordered by call volume by split for Doubly Stochastic
1314
+ (blue) and GRU (red)
1315
+ Figure 17 presents a different view of this same pattern. For each of the 60 one-day ahead
1316
+ forecasts within each split, the GRU and Doubly Stochastic WAPEs are recorded, along
1317
+ with the number of calls recorded for that split over the training and validation data
1318
+ (sum_all_calls). For each split, Figure 17 plots the percent of the 60 day-ahead forecasts
1319
+ for which GRU “won” (GRU WAPE < Doubly Stochastic WAPE) against the log of the
1320
+ median call volume recorded by that split over the 60 different pairs of training and
1321
+ validation data. There appears to be a linear decrease of the relative performance of GRU
1322
+ against Doubly Stochastic in the log of the call volume.
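The quantity plotted in Figure 17 can be computed per split as follows; this is a minimal sketch, and the function and argument names are ours, not the paper's:

```python
from math import log
from statistics import median

def gru_win_stats(gru_wape, ds_wape, daily_calls):
    """For one split: the fraction of validation days on which the GRU
    WAPE beat the Doubly Stochastic WAPE ("percent won by GRU"), and
    the log of the split's median call volume over those days."""
    wins = sum(g < d for g, d in zip(gru_wape, ds_wape))
    return wins / len(gru_wape), log(median(daily_calls))
```

Plotting the first value against the second over all 36 splits reproduces the layout of Figure 17.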
1323
+
1324
+ Table (Figure 15, low call-volume splits):
+ Split  Sum Call Vol  ARIMA   Doubly Stoch  NN_Classic  RNN_GRU  RNN_LSTM  RNN_Simple  Winters
+ 5710   3742          39.6%   38.8%         40.7%       39.1%    45.7%     40.2%       38.9%
+ 5820   2949          44.1%   43.1%         46.1%       43.2%    50.4%     44.0%       43.4%
+ 5690   2238          56.1%   51.1%         51.6%       51.2%    58.9%     51.8%       52.5%
+ 5220   2089          49.7%   49.5%         50.5%       49.3%    54.0%     50.1%       49.7%
+ 5470   1975          49.2%   48.8%         52.0%       50.5%    57.0%     50.7%       50.2%
+ 6330   1556          60.5%   60.1%         61.0%       60.2%    65.1%     61.6%       60.7%
+ 5040   1398          79.0%   69.0%         69.8%       73.4%    72.2%     71.1%       80.7%
+ 6310   922           98.5%   92.0%         92.2%       88.1%    91.0%     89.5%       92.8%
+ 5720   918           80.2%   77.8%         77.5%       77.6%    85.0%     78.4%       80.1%
+ 6370   485           95.3%   93.6%         96.5%       91.3%    111.3%    94.0%       95.2%
+ 6360   171           150.4%  136.8%        126.5%      107.7%   118.7%    111.7%      142.5%
+ 6340   14            189.7%  161.4%        188.2%      102.7%   108.9%    107.8%      187.7%
+
+ [Figure 16 plot: Forecast Error by Split, Average over 60 Days]
1466
+ Figure 17 Percent of the 60 day-ahead forecasts for which the GRU WAPE was less than
1467
+ the Doubly Stochastic WAPE for each of the 36 splits plotted against the log of the
1468
+ median sum of all calls recorded for each split over each pair of training and validation
1469
+ data.
1470
+ For the large call-volume splits, the extra flexibility of the GRU model does not lead to
1471
+ improvements over the predictions generated by the mixed model approach. Because the
1472
+ mixed model computations are faster and the implementation is less complex, there does
1473
+ not appear to be any benefit to running the neural networks for the high-volume splits for
1474
+ this short-term application. This is consistent with the findings of the Uber traffic volume
1475
+ study when short-term predictions were considered (Zhu & Laptev, 2017), and would
1476
+ possibly change if a longer training period (several months or multiple years) were used
1477
+ for the call center data.
1478
+
1479
+ 4.2 Improving GRU RNN by Using Doubly Stochastic Forecasts as a Covariate
1480
+ Based on pilot studies during the designed experiment, the forecasting performance of the
1481
+ GRU recurrent neural network often improved by integrating the forecasted value for the
1482
+ validation days from the doubly stochastic, ARIMA, and Winters models. This
1483
+ “cheating” by using other models’ forecasts (shown as mixed.cheat) proved to be a
1484
+ significant benefit for the GRU model over these 60 predictions for each skill. The
1485
+ WAPE for the held-out validation data was again the primary measure of performance.
1486
+
1487
+ Figure 18 displays the forecast errors for the high call volume skills averaged over the 60
1488
+ validation days for each of the neural networks when they use the mixed forecasts.
1489
+ Note the side-by-side comparison of the RNN_GRU (no cheat) and GRU_cheat columns
1490
+ where in most cases the “cheating” does result in improved forecasts and in those cases
1491
+ where it is not better, it has only marginally declined. Additionally, the Simple_cheat
1492
+ error rates compare favorably with the GRU_cheat while the LSTM_cheat suffers from
1493
+ significantly poorer performance and instability issues. The doubly stochastic forecast is
1494
+ still quite good and often the best choice. These results are also consistent when looking
1495
+ at the medium and low call volume splits. Therefore, we recommend using the forecasts
1496
+ from a doubly stochastic or Winters model as inputs to recurrent neural networks.
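In practice this recommendation amounts to widening each RNN input row with the external models' forecasts for the target day. A minimal sketch of the feature construction follows; the layout and names are our assumption, not the paper's implementation:

```python
def build_features(history, ds_forecast, winters_forecast, n_lags=7):
    """One input row for the network: the last n_lags observed call
    counts, with the doubly stochastic and Winters forecasts for the
    target day appended as extra covariates (the "mixed.cheat" inputs)."""
    return list(history[-n_lags:]) + [ds_forecast, winters_forecast]
```

Stacking such rows over the training days yields the augmented design matrix; disabling mixed.cheat corresponds to dropping the last two columns.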
1497
+
1498
+ [Figure 17 plot: Percent won by GRU vs. Log[Median(sum all calls)]]
1519
+ Figure 18 Forecast errors for high call volume splits averaged over 60 validation
1520
+ forecast days allowing NN to “cheat” to improve predictions
1521
+ The median of the GRU WAPE (mixed.cheat=FALSE) minus the GRU WAPE
1522
+ (mixed.cheat=TRUE) (within each split/day combination, for a sample size of
1523
+ 36*60=2160) is 0.002 with a p-value of 0.0005 from the Wilcoxon signed rank test with a
1524
+ null hypothesis that the differences were drawn from a population with median equal to
1525
+ 0, indicating that mixed.cheat does tend to improve the performance of the GRU model.
1526
+
1527
+ A similar comparison of paired differences of error rates (all with mixed.cheat=FALSE)
1528
+ confirms that GRU outperforms the other (R)NNs: LSTM - GRU produces a median of
1529
+ 0.0247 with a p-value of 1e-129, NN Classic - GRU produces a median of 0.0690 with a
1530
+ p-value of <1e-185, and Simple RNN - GRU produces a median of 0.0021 with a p-value
1531
+ of 1e-04.
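These comparisons pair the two models' WAPEs within each split/day combination and ask whether the median difference is zero (the paper applies the Wilcoxon signed-rank test to such differences; in Python that would be scipy.stats.wilcoxon). The pairing step itself can be sketched with the standard library only; names here are ours:

```python
from statistics import median

def paired_wape_differences(model_a, model_b):
    """model_a, model_b: dicts keyed by (split, day) -> WAPE.
    Returns the paired differences a - b over the shared keys
    (e.g. LSTM - GRU); a positive median favors model_b."""
    keys = sorted(set(model_a) & set(model_b))
    return [model_a[k] - model_b[k] for k in keys]

# Tiny illustration with made-up numbers:
diffs = paired_wape_differences(
    {("5620", 1): 0.30, ("5620", 2): 0.28},
    {("5620", 1): 0.09, ("5620", 2): 0.10},
)
# median(diffs) is about 0.195 here
```

With all 36 splits and 60 days this yields the sample of 2160 paired differences described above.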
1532
+
1533
+
1534
+ References
1535
+ Aldor-Noiman, S., Feigin, P. D., & Mandelbaum, A. (2009). Workload forecasting for a
1536
+ call center: Methodology and a case study. Annals of Applied Statistics, 3(4),
1537
+ 1403-1447.
1538
1541
+ Allaire, J., & Chollet, F. (2018). keras: R Interface to 'Keras'. Retrieved from CRAN:
1542
+ https://CRAN.R-project.org/package=keras
1543
+ Bianchi, F. M., Maiorino, E., Kampffmeyer, M. C., Rizzi, A. R., & Jenssen, R. (2017).
1544
+ Recurrent Neural Networks for Short-Term Load Forecasting: An Overview and
1545
+ Comparative Analysis. Cham: Springer.
1546
+ Bianchi, F. M., Maiorino, E., Kampffmeyer, M. C., Rizzi, A. R., & Jenssen, R. (2018).
1547
+ An overview and comparative analysis of Recurrent Neural Networks for Short
1548
+ Term Load Forecasting. arXiv. Retrieved from https://arxiv.org/abs/1705.04378
1549
+ Box, G., & Jenkins, G. (1970). Time series analysis: Forecasting and control. San
1550
+ Francisco, CA: Holden-Day.
1551
+
1552
+ Table (Figure 18, high call-volume splits with mixed-forecast "cheat" variants):
+ Split      Sum Call Vol  Doubly Stoch  RNN_GRU  GRU_cheat  LSTM_cheat  Simple_cheat
+ 5000&5240  926493        6.7%          10.2%    7.8%       14.4%       7.8%
+ 5400&5570  459961        6.3%          8.1%     7.4%       9.4%        6.9%
+ 5620       164138        7.9%          8.9%     8.8%       15.2%       8.7%
+ 5660       138389        9.2%          12.0%    9.9%       18.1%       9.8%
+ 5020       71900         11.2%         12.4%    12.4%      27.0%       12.3%
+ 5630       65279         11.5%         13.5%    13.7%      421.5%      13.2%
+ 5840       63687         12.0%         13.2%    13.1%      26.1%       12.7%
+ 5200       48624         13.0%         14.2%    14.4%      28.1%       14.0%
+ 5540       38318         17.1%         16.3%    16.9%      20.8%       17.5%
+ 5670       34741         13.4%         14.7%    14.4%      24.5%       14.2%
+ 6500       30352         16.3%         17.5%    16.8%      35.2%       17.2%
+ 5260       23674         20.7%         21.3%    20.5%      28.0%       20.8%
+ Cho, K., Merrienboer, B. V., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., &
+ Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder
+ for statistical machine translation. arXiv:1406.1078.
1645
+ Chollet, F., & Allaire, J. (2018). Deep Learning with R. Shelter Island: Manning
1646
+ Publications Co.
1647
+ Demidenko, E. (2013). Mixed Models Theory and Applications with R. Hoboken, NJ:
1648
+ John Wiley & Sons, Inc.
1649
+ Elman, J. L. (1990). Finding Structure in Time. Cognitive Science, 14, 179-211.
1650
+ Gal, Y., & Ghahramani, Z. (2015). A Theoretically Grounded Application of Dropout in
1651
+ Recurrent Neural Networks. arXiv. Retrieved from
1652
+ https://arxiv.org/abs/1512.05287
1653
+ Gans, N., Koole, G., & Mandelbaum, A. (2003). Telephone call centers: tutorial, review,
1654
+ and research prospects. Manufacturing and Service Operations Management,
1655
+ 5:79-141.
1656
1659
+ Harvey, A. (1990). Forecasting, Structural Time Series Models and the Kalman Filter.
1660
+ London: Cambridge University Press.
1661
+ He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving Deep into Rectifiers: Surpassing
1662
+ Human-Level Performance on ImageNet Classification. arXiv. Retrieved from
1663
+ https://arxiv.org/abs/1502.01852
1664
+ Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural
1665
+ Computation, 9(8), 1735-1780.
1666
+ Ibrahim, R., & L'Ecuyer, P. (2013). Forecasting call center arrivals: Fixed-effects, mixed-
1667
+ effects, and bivariate models. Manufacturing & Service Operations Management,
1668
+ 72-85.
1669
+ Ibrahim, R., Ye, H., L'Ecuyer, P., & Shen, H. (2016). Modeling and forecasting call
1670
+ center arrivals: A literature survey and a case study. International Journal of
1671
+ Forecasting, 865-874.
1672
+ Karl, A. T., Yang, Y., & Lohr, S. L. (2013). Efficient maximum likelihood estimation of
1673
+ multiple membership linear mixed models, with an application to educational
1674
+ value-added assessments. Computational Statistics & Data Analysis, 59, 13-27.
1675
+ Kenward, M. G., & Roger, J. H. (2009). An Improved Approximation to the Precision of
1676
+ Fixed Effects from Restricted Maximum Likelihood. Computational Statistics
1677
+ and Data Analysis(53), 2583–2595.
1678
+ Kingma, D., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv.
1679
+ Retrieved from https://arxiv.org/abs/1412.6980v8
1680
+
1681
+ Laptev, N., Yosinski, J., Li, L. E., & Smyl, S. (2017). Time-series Extreme Event
1682
+ Forecasting with Neural Networks at Uber. ICML 2017 Time Series Workshop.
1683
+ Sydney. Retrieved from http://roseyu.com/time-series-
1684
+ workshop/submissions/TSW2017_paper_3.pdf
1685
+ Reddi, S., Kale, S., & Kumar, S. (2018). On the Convergence of Adam and Beyond.
1686
+ International Conference on Learning Representations. Retrieved from
1687
+ https://openreview.net/forum?id=ryQu7f-RZ
1688
+ Rushing, H., Karl, A., & Wisnowski, J. (2014). Design and Analysis of Experiments by
1689
+ Douglas Montgomery: A Supplement for Using JMP. Cary: SAS Institute.
1690
+ Winters, P. (1960). Forecasting sales by exponentially weighted moving averages.
1691
+ Management Science, 6(1): 127-137.
1692
+ Zhu, L., & Laptev, N. (2017). Deep and Confident Prediction for Time Series at Uber.
1693
+ arXiv. Retrieved from https://arxiv.org/abs/1709.01907
1694
+
1695
+
1696
+
9tAzT4oBgHgl3EQfg_wg/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9FJT4oBgHgl3EQfry2f/content/2301.11610v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e9df32eda72e881717c517205ec68bc2b6fec8ae65e363422ec9a337c32a28a
3
+ size 570082
A9FJT4oBgHgl3EQfry2f/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db31380987429a4fbd1f77c42eb2d4a3096f7b3d034d81e59c4bc80639e17764
3
+ size 138955
AtFLT4oBgHgl3EQfxTCi/content/tmp_files/2301.12167v1.pdf.txt ADDED
@@ -0,0 +1,2577 @@
1
+ e-print http://www.gm.fh-koeln.de/ciopwebpub/Konen22b.d/TR-Rubiks.pdf
2
+ Towards Learning Rubik’s Cube with
3
+ N-tuple-based Reinforcement Learning
4
+ Wolfgang Konen
5
+ Technical Report,
6
+ Computer Science Institute,
7
+ TH Köln,
8
+ University of Applied Sciences,
9
+ Germany
10
+ wolfgang.konen@th-koeln.de
11
+ Sep 2022,
12
+ last update Jan 2023
13
+ Abstract
14
+ This work describes in detail how to learn and solve the Rubik’s cube game (or
15
+ puzzle) in the General Board Game (GBG) learning and playing framework. We cover
16
+ the cube sizes 2x2x2 and 3x3x3. We describe in detail the cube’s state representation,
17
+ how to transform it with twists, whole-cube rotations and color transformations and
18
+ explain the use of symmetries in Rubik’s cube.
19
+ Next, we discuss different n-tuple
20
+ representations for the cube, how we train the agents by reinforcement learning and
21
+ how we improve the trained agents during evaluation by MCTS wrapping.
22
+ We present results for agents that learn Rubik’s cube from scratch, with and without
23
+ MCTS wrapping, with and without symmetries and show that both, MCTS wrapping
24
+ and symmetries, increase computational costs, but lead at the same time to much
25
+ better results. We can solve the 2x2x2 cube completely, and the 3x3x3 cube in the
26
+ majority of the cases for scrambled cubes up to p = 15 (QTM). We cannot yet reliably
27
+ solve 3x3x3 cubes with more than 15 scrambling twists.
28
+ Although our computational costs are higher with MCTS wrapping and with sym-
29
+ metries than without, they are still considerably lower than in the approaches of McAleer
30
+ et al. (2018, 2019) and Agostinelli et al. (2019) who provide the best Rubik’s cube
31
+ learning agents so far.
32
+ 1
33
+ arXiv:2301.12167v1 [cs.LG] 28 Jan 2023
34
+
35
+ Contents
36
+ 1 Introduction 4
+ 1.1 Motivation 4
+ 1.2 Overview 5
+ 2 Foundations 6
+ 2.1 Conventions and Symbols 6
+ 2.1.1 Color arrangement 6
+ 2.1.2 Twist and Rotation Symbols 6
+ 2.1.3 Twist Types 6
+ 2.2 Facts about Cubes 7
+ 2.2.1 2x2x2 Cube 7
+ 2.2.2 3x3x3 Cube 7
+ 2.3 The Cube State 8
+ 2.4 Transformations 9
+ 2.4.1 Twist Transformations 9
+ 2.4.2 Whole-Cube Rotations (WCR) 11
+ 2.4.3 Color Transformations 13
+ 2.5 Symmetries 15
+ 3 N-Tuple Systems 16
+ 4 N-Tuple Representations for the Cube 18
+ 4.1 CUBESTATE 18
+ 4.2 STICKER 18
+ 4.3 STICKER2 20
+ 4.4 Adjacency Sets 21
+ 5 Learning the Cube 21
+ 5.1 McAleer and Agostinelli 21
+ 5.2 N-Tuple-based TD Learning 24
+ 5.2.1 Temporal Coherence Learning (TCL) 25
+ 5.2.2 MCTS 25
+ 5.2.3 Method Summary 27
+ 6 Results 27
+ 6.1 Experimental setup 27
+ 6.2 Cube Solving with MCTS Wrapper, without Symmetries 28
+ 6.3 Number of Symmetric States 28
+ 6.4 The Benefit of Symmetries 29
+ 6.5 Computational Costs 30
+ 7 Related Work 32
+ 8 Summary and Outlook 33
+ A Calculating sloc from fcol 37
+ A.1 2x2x2 cube 37
+ A.2 3x3x3 cube 38
+ B N-Tuple Representations for the 3x3x3 Cube 38
+ B.1 CUBESTATE 38
+ B.2 STICKER 39
+ B.3 STICKER2 39
+ B.4 Adjacency Sets 40
+ C Hyperparameters 40
187
+
188
+ 1 Introduction
+ 1.1 Motivation
192
+ Game learning and game playing is an interesting test bed for strategic decision making.
193
+ Games usually have large state spaces, and they often require complex pattern recognition
194
+ and strategic planning capabilities to decide which move is the best in a certain situation.
195
+ If algorithms learn a game (or, even better, a variety of different games) just by self-play,
196
+ given no other knowledge than the game rules, it is likely that they perform also well on
197
+ other problems of strategic decision making.
198
+ In recent years, reinforcement learning (RL) and deep neural networks (DNN) achieved
199
+ superhuman capabilities in a number of competitive games (Mnih et al., 2015; Silver et al.,
200
+ 2016). This success has been a product of the combination of reinforcement learning,
201
+ deep learning and Monte Carlo Tree Search (MCTS). However, current deep reinforcement
202
+ learning (DRL) methods struggle in environments with a high number of states and a small
203
+ number of reward states.
204
206
+ Figure 1: (a) Scrambled 3x3x3 Rubik’s Cube. (b) 2x2x2 cube in the middle of a twist.
207
The Rubik's cube puzzle is an example of such an environment: the classical 3x3x3 cube has 4.3 · 10^19 states and only one state (the solved cube) has a reward. A somewhat simpler puzzle is the 2x2x2 cube with 3.6 · 10^6 states and again only one reward state. Both cubes are shown in Fig. 1.
The difficult task of learning from scratch how to solve arbitrarily scrambled cubes (i.e. without being taught by expert knowledge, whether from humans or from computerized solvers) was not achievable with DRL methods for a long time. Recently, the works of McAleer et al. (2018, 2019) and Agostinelli et al. (2019) provided a breakthrough in that direction (see Sec. 5.1 and 7 for details): their approach DAVI (Deep Approximate Value Iteration) learned from scratch to solve arbitrarily scrambled 3x3x3 cubes.
This work investigates whether TD-n-tuple learning with much lower computational demands can solve (or partially solve) Rubik's cube as well.
1.2  Overview

The General Board Game (GBG) learning and playing framework (Konen, 2019; Konen and Bagheri, 2020; Konen, 2022) was developed for education and research in AI. GBG allows applying a new algorithm easily to a variety of games. GBG is open source and available on GitHub.1 The main contribution of this paper is to take the TD-n-tuple approach from GBG (Scheiermann and Konen, 2022) that was also successful on other games (Othello, ConnectFour) and to investigate this algorithm on various cube puzzles. We will show that it can solve the 2x2x2 cube perfectly and the 3x3x3 cube partly, while having drastically reduced computational requirements compared to McAleer et al. (2019). We will show that wrapping the base agent with an MCTS wrapper, as was done by McAleer et al. (2019) and Scheiermann and Konen (2022), is essential to reach this success.
This work is at the same time an in-depth tutorial on how to represent a cube and its transformations within a computer program such that all types of cube operations can be computed efficiently. As another important contribution we will show how symmetries (Sec. 2.5, 6.3 and 6.4) applied to cube puzzles can greatly increase sample efficiency and performance.
The rest of this paper is organized as follows: Sec. 2 lays the foundation for Rubik's cube, its state representation, its transformations and its symmetries. In Sec. 3 we introduce n-tuple systems and how they can be used to derive policies for game-playing agents. Sec. 4 defines and discusses several n-tuple representations for the cube. Sec. 5 presents algorithms for learning the cube: first the DAVI algorithm of McAleer et al. (2019); Agostinelli et al. (2019) and then our n-tuple-based TD learning (with extensions TCL and MCTS). In Sec. 6 we present the results of applying our n-tuple-based TD learning method to the 2x2x2 and the 3x3x3 cube. Sec. 7 discusses related work and Sec. 8 concludes.
1 https://github.com/WolfgangKonen/GBG
2  Foundations

2.1  Conventions and Symbols

We consider in this paper two well-known cube types, namely the 2x2x2 cube (pocket cube) and the 3x3x3 cube (Rubik's cube).
2.1.1  Color arrangement

Each cube consists of smaller cubies: 8 corner cubies for the 2x2x2 cube; 8 corner, 12 edge and 6 center cubies for the 3x3x3 cube. A corner cubie has 3 stickers of different color on its 3 faces. An edge cubie has two stickers, a center cubie has one.
We enumerate the 6 cube faces as (ULF) = (Up, Left, Front) and (DRB) = (Down, Right, Back). We number the 6 colors 0,1,2,3,4,5. Our cube has these six colors: 012 = wbo = (white, blue, orange) in the (ULF)-cubie2 and 345 = ygr = (yellow, green, red) in the opposing (DRB)-cubie.
The solved cube in default position has colors (012345) for the faces (ULFDRB), i.e. the white color is at the Up face, blue at Left, orange at Front and so on. We can cut the cube such that the up- and bottom-face can be folded away, giving the flattened representation shown in Figure 2.
Figure 2: The face colors of the default cube in flattened representation
2.1.2  Twist and Rotation Symbols

Twists of cube faces are denoted by uppercase letters U, L, F, D, R, B. Each of these twists means a 90° counterclockwise rotation.3 If U = U1 is a 90° rotation, then U2 is a 180° rotation and U3 = U−1 is a 270° rotation.
Whole-cube rotations are denoted by lowercase letters u, ℓ, f. (We do not need d, r, b here, because d = u−1, r = ℓ−1 and so on.)
Further symbols like fc[i], sℓ[i] that characterize a cube state will be explained in Sec. 2.3.
2.1.3  Twist Types

Cube puzzles can have different twist types or twist metrics:

• QTM (quarter turn metric): only quarter twists are allowed, e.g. U1 and U−1.

• HTM (half turn metric): quarter and half twists are allowed, e.g. U1, U2, U3.

By allowed we mean what counts as one move. In QTM we can realize U2 via U U as well, but it costs us 2 moves. In HTM, U2 counts as one move.
The twist type influences God's number and the branching factor of the game, see Sec. 2.2.

2 We run through the faces of a cubie in counter-clockwise orientation.
3 The rotation is counterclockwise when looking at the respective face.
2.2  Facts about Cubes

2.2.1  2x2x2 Cube
The number of distinct states for the 2x2x2 pocket cube is (Wikipedia, 2022a)

    8! · 3^7 / 24 = 7! · 3^6 = 3,674,160 ≈ 3.6 · 10^6        (1)

Why this formula? — We have 8 cubies which we can place in 8! ways on the 8 cube positions. Each but the last cubie has the freedom to appear in 3 orientations, which gives the factor 3^7 (the last cubie is then in a fixed orientation; the other two orientations would yield illegal cube states). — Each of these raw states has the (ygr)-cubie in any of the 24 possible positions. Put differently, each truly different state appears in 24 whole-cube rotations. To factor out the whole-cube rotations, we count only the states with the (ygr)-cubie in its default position (DRB) and divide the number of raw states by 24, q.e.d.
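As a quick sanity check, the counting argument behind Eq. (1) can be reproduced in a few lines (a Python sketch for illustration; this is not part of GBG):

```python
from math import factorial

# 8! placements of the 8 cubies times 3^7 free orientations,
# divided by the 24 whole-cube rotations (Eq. 1)
n_raw = factorial(8) * 3**7
n_states = n_raw // 24

assert n_states == factorial(7) * 3**6 == 3_674_160
```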
God's number: What is the minimal number of moves needed to solve any cube position? — For the 2x2x2 pocket cube, it is 11 in HTM (half-turn metric) and 14 in QTM.
Branching factor: 3 · 3 = 9 in HTM and 3 · 2 = 6 in QTM.
2.2.2  3x3x3 Cube

The number of distinct states for the 3x3x3 cube is (Wikipedia, 2022b)

    8! · 3^7 · 12! · 2^11 / 2 = 43,252,003,274,489,856,000 ≈ 4.3 · 10^19        (2)

Why this formula? — We have 8 corner cubies which we can place in 8! ways on the 8 corner positions. Each but the last cubie has the freedom to appear in 3 orientations, which gives the factor 3^7. We have 12 edge cubies which we can place in 12! ways on the edge positions. Each but the last cubie has the freedom to appear in 2 orientations, which gives the factor 2^11. The division by 2 stems from the fact that it is not possible to swap just two corner cubies alone or just two edge cubies alone; the total number of such swaps must be even (factor 2).
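Again, Eq. (2) is easy to verify with exact integer arithmetic (illustrative sketch, not GBG code):

```python
from math import factorial

# 8! * 3^7 for the corner cubies, 12! * 2^11 for the edge cubies,
# divided by 2 for the even-swap (parity) constraint (Eq. 2)
n_states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2

assert n_states == 43_252_003_274_489_856_000
```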
God's number: What is the minimal number of moves needed to solve any cube position? — For the 3x3x3 Rubik's Cube, it is 20 in HTM (half-turn metric) and 26 in QTM. This is a result from Rokicki et al. (2014), see also http://www.cube20.org/qtm/.
Branching factor: 6 · 3 = 18 in HTM and 6 · 2 = 12 in QTM.
Figure 3: Sticker numbering for the 2x2x2 cube
2.3  The Cube State

A cube should be represented by objects in GBG in such a way that

(a) cube states that are equivalent are represented by identical objects,

(b) if two cube states are equivalent, it should be easy to check this by comparing their objects,

(c) cube transformations are easy to carry out on these objects.

Condition (a) means that if two twist sequences lead to the same cube state (e.g. U−1 and UUU), this should result in identical objects. Condition (b) means that the equality should be easy to check, given the objects. That is, a cube should not be represented by its twist sequence.
A cube state is represented in GBG by the abstract class CubeState and has two describing members

    fc[i] = fcol[i]        (3)
    sℓ[i] = sloc[i]        (4)

fc[i] = fcol[i] denotes the face color at sticker location i. The color is one out of 0,1,2,3,4,5 for the colors w,b,o,y,g,r.
sℓ[i] = sloc[i] contains the sticker location of the sticker which is in position i for the solved cube d.
Members fc and sℓ are vectors with 24 (2x2x2 cube) or 48 (3x3x3 cube) elements, where i denotes the ith sticker location.
The stickers are numbered in a certain way which is detailed in Figures 3 and 4 for the flattened representations of the 2x2x2 and 3x3x3 cube, resp.
In principle, one of the two members fc and sℓ would be sufficient to characterize a state, since the fcol-sloc-relation

    fc[sℓ[i]] = d.fc[i]        (5)

holds, where d denotes the default cube. This is because sℓ[i] transports the sticker i of the default cube d to location sℓ[i], i.e. it has the color d.fc[i]. That is, we can easily
Figure 4: Sticker numbering for the 3x3x3 cube. We do not number the center cubies; they stay invariant under twists.
Table 1: The three relevant twists for the 2x2x2 cube

              0  1  2  3   4  5  6  7   8  9 10 11  12 13 14 15  16 17 18 19  20 21 22 23
U twist   T   1  2  3  0  11  8  6  7  18  9 10 17  12 13 14 15  16 22 23 19  20 21  4  5
L twist   T  22  1  2 21   5  6  7  4   3  0 10 11  12 13  8  9  16 17 18 19  20 14 15 23
F twist   T   7  4  2  3  14  5  6 13   9 10 11  8  12 18 19 15  16 17  0  1  20 21 22 23
U−1     T−1   3  0  1  2  22 23  6  7   5  9 10  4  12 13 14 15  16 11  8 19  20 21 17 18
L−1     T−1   9  1  2  8   7  4  5  6  14 15 10 11  12 13 21 22  16 17 18 19  20  3  0 23
F−1     T−1  18 19  2  3   1  5  6  0  11  8  9 10  12  7  4 15  16 17 13 14  20 21 22 23
calculate fc given sℓ. With some more effort, it is also possible to calculate sℓ given fc (see Appendix A). Although one of these members fc and sℓ would be sufficient, we keep both because this allows us to perform assertions and cross-checks during transformations.
Sometimes we need the inverse function s−1ℓ[i]: Which sticker is at location i? It is easy to calculate s−1ℓ given sℓ with the help of the relation

    s−1ℓ[sℓ[i]] = i        (6)

(Note that it is not possible to invert fc, because the face coloring function is not bijective.)
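The relations (5) and (6) translate directly into code. A minimal sketch, assuming fc and sℓ are plain integer lists and d_fc holds the face colors of the default cube (the function names are ours, not GBG's):

```python
def fc_from_sl(sl, d_fc):
    """Eq. (5): sticker i of the default cube moves to location sl[i]
    and carries the default color d_fc[i]."""
    fc = [None] * len(sl)
    for i, loc in enumerate(sl):
        fc[loc] = d_fc[i]
    return fc

def sl_inverse(sl):
    """Eq. (6): sl_inv[sl[i]] = i -- which sticker sits at location i?"""
    sl_inv = [None] * len(sl)
    for i, loc in enumerate(sl):
        sl_inv[loc] = i
    return sl_inv
```

For the default cube, sℓ is the identity and Eq. (5) reduces to fc = d_fc.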
2.4  Transformations

2.4.1  Twist Transformations

Each basic twist is a counterclockwise4 rotation of a face by 90°. Table 1 shows the 2x2x2 transformation functions for three basic twists. Each twist transformation can be coded in two forms:

1. T[i] (forward transformation): Which is the new location for the sticker being at i before the twist?

4 The rotation is counterclockwise when looking at this face.
Figure 5: The default 2x2x2 cube after twist U1
2. T−1[i] (inverse transformation): Which is the (parent) location of the sticker that lands in i after the twist?

Example (read off from column 0 of Table 1): The L-twist transports the sticker at 0 to 22: T[0] = 22. The (parent) sticker being at location 9 before the L-twist comes to location 0 after the twist: T−1[0] = 9. Likewise, for the U-twist we have T[0] = 1 and T−1[0] = 3. We show in Fig. 5 the default cube after twist U1.
How can we apply a twist transformation to a cube state programmatically? — We denote with f′c and s′ℓ the new states of fc and sℓ after the transformation. The following relations allow us to calculate the transformed cube state:

    f′c[i] = fc[T−1[i]]        (7)
    s′ℓ[s−1ℓ[i]] = T[i]        (8)

Eq. (7) says: The new color for sticker location i is the color of the sticker which moves into location i (fc[9] for i = 0 in the case of an L-twist). To explain Eq. (8), we first note that s−1ℓ[i] is the sticker being at i before the transformation. Then, Eq. (8) says: "The new location for the sticker being at i before the transformation is T[i]." For example, the L-twist transports the current sticker at location 0 to the new location T[0] = 22, i.e. s′ℓ[0] = 22.
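Eqs. (7) and (8) can be sketched as follows (a plain-Python illustration; fc, sl, T and T_inv are integer lists, and the names are ours):

```python
def apply_twist(fc, sl, T, T_inv):
    """Return the transformed (fc', sl') for a twist with forward map T
    and inverse map T_inv."""
    n = len(fc)
    fc_new = [fc[T_inv[i]] for i in range(n)]   # Eq. (7)
    sl_inv = [None] * n                         # Eq. (6): sticker at location i
    for i, loc in enumerate(sl):
        sl_inv[loc] = i
    sl_new = list(sl)
    for i in range(n):
        sl_new[sl_inv[i]] = T[i]                # Eq. (8)
    return fc_new, sl_new
```

Note that Eq. (5) remains valid after the update, which can serve as a cross-check during transformations.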
For the 2x2x2 cube, these 3 twists U, L, F are sufficient, because D=U−1, R=L−1, B=F−1. This is because the 2x2x2 cube has no center cubies. For the 3x3x3 cube, we need all 6 twists U, L, F, D, R, B because this cube has center cubies.
In any case, we will show in Sec. 2.4.2 that only one row in Table 1 or Table 2, say T for the U-twist, has to be known or established 'by hand'. All other twists and their inverses can be calculated programmatically with the help of Eqs. (9)–(15), which will be derived in Sec. 2.4.2.
Table 2: The U twist for the 3x3x3 cube

              0  1  2  3   4  5  6  7   8  9 10 11  12 13 14 15  16 17 18 19  20 21 22 23
U twist   T   2  3  4  5   6  7  0  1  22 23 16 11  12 13 14 15  36 17 18 19  20 21 34 35

             24 25 26 27  28 29 30 31  32 33 34 35  36 37 38 39  40 41 42 43  44 45 46 47
U twist   T  24 25 26 27  28 29 30 31  32 33 44 45  46 37 38 39  40 41 42 43   8  9 10 47
Table 3: Two basic whole-cube rotations for the 2x2x2 cube

               0  1  2  3   4  5  6  7   8  9 10 11  12 13 14 15  16 17 18 19  20 21 22 23
u rotation T   1  2  3  0  11  8  9 10  18 19 16 17  15 12 13 14  21 22 23 20   6  7  4  5
f rotation T   7  4  5  6  14 15 12 13   9 10 11  8  17 18 19 16   2  3  0  1  23 20 21 22
u−1      T−1   3  0  1  2  22 23 20 21   5  6  7  4  13 14 15 12  10 11  8  9  19 16 17 18
f−1      T−1  18 19 16 17   1  2  3  0  11  8  9 10   6  7  4  5  15 12 13 14  21 22 23 20
Normalizing the 2x2x2 Cube    As stated above, the 3 twists U, L, F are sufficient for the 2x2x2 cube. Therefore, the (DRB)-cubie will never leave its place, whatever the twist sequence formed by U, L, F is. The (DRB)-cubie has the stickers (12, 16, 20), and we can check in Table 1 that columns (12, 16, 20) are always invariant. If we have an arbitrary initial 2x2x2 cube state, we can normalize it by applying a whole-cube rotation such that the (ygr)-cubie moves to the (DRB)-location.

Normalizing the 3x3x3 Cube    In the case of the 3x3x3 cube, the center cubies are not affected by any twist sequence. Therefore, we normalize a 3x3x3 cube state by initially applying a whole-cube rotation such that the center cubies are in their normal position (i.e. white up, blue left and so on).
2.4.2  Whole-Cube Rotations (WCR)

Each basic whole-cube rotation (WCR) is a counterclockwise rotation of the whole cube around the u-, ℓ- or f-axis by 90°. Table 3 shows two of the 2x2x2 transformation functions for basic whole-cube rotations. Each rotation can be coded in two forms:

1. T[i] (forward transformation): Which is the new location for the sticker being at i before the rotation?

2. T−1[i] (inverse transformation): Which is the (parent) location of the sticker that lands in i after the rotation?

Besides the basic rotation u there are also u2 (180°) and u3 = u−1 (270° = −90°).
All whole-cube rotations can be generated from the two forward rotations u and f: First, we calculate the inverse transformations via

    T−1[T[i]] = i        (9)

where T is a placeholder for u or f. Next, we calculate the missing base rotation ℓ (counterclockwise around the left face) as

    ℓ = f u f−1        (10)

We use here the program-code-oriented notation "first trafo first": Eq. (10) reads as "first f, then u, then f−1".5

5 In program code the relation would read cs.fTr(1).uTr().fTr(3). This is "first trafo first" because each transformation is applied to the cube state object to the left and returns the transformed cube state object.
Table 4: All 24 whole-cube rotations (in first-trafo-first notation)

number  first rotation     ∗ u0    ∗ u1     ∗ u2      ∗ u3
00-03   id (white up)      id      u        u2        u3
04-07   f (green up)       f       fu       fu2       fu3
08-11   f2 (yellow up)     f2      f2u      f2u2      f2u3
12-15   f−1 (blue up)      f−1     f−1u     f−1u2     f−1u3
16-19   ℓ (orange up)      ℓ       ℓu       ℓu2       ℓu3
20-23   ℓ−1 (red up)       ℓ−1     ℓ−1u     ℓ−1u2     ℓ−1u3
Table 5: The 24 inverse whole-cube rotations (in first-trafo-first notation)

number  first rotation     ∗ u0    ∗ u1     ∗ u2      ∗ u3
00-03   id (white up)      id      u3       u2        u1
04-07   f (green up)       f−1     ℓu3      fu2       ℓ−1u
08-11   f2 (yellow up)     f2      f2u      f2u2      f2u3
12-15   f−1 (blue up)      f       ℓ−1u3    f−1u2     ℓu
16-19   ℓ (orange up)      ℓ−1     f−1u3    ℓu2       fu
20-23   ℓ−1 (red up)       ℓ       fu3      ℓ−1u2     f−1u
The other basic whole-cube rotations d, r, b are not needed, because d = u−1, r = ℓ−1 and b = f−1.
The basic whole-cube rotations are rotations of the whole cube around just one axis. But there are also composite whole-cube rotations, which consist of a sequence of basic rotations.
How many different (composite) rotations are there for the cube? — A little thought reveals that there are 24 of them: To be specific, we consider the default cube, where we have 4 rotations with the white face up, 4 with the blue face up, and so on. In total we have 6 · 4 = 24 rotations since there are 6 faces. Table 4 lists all of them, together with the WCR numbering convention used in GBG.
Sometimes we need the inverse whole-cube rotations, which are given in Table 5. In this table we read, for example, from the element with number 5 that the WCR with key 5 (which is fu according to Table 4) has the inverse WCR ℓu3, such that

    fu ℓu3 = id

holds.
For convenience, we list in Table 6 the <Key, InverseKey> relation. For example, the trafo with Key=5 (fu) has the inverse trafo with InverseKey=19 (ℓu3). Note that there are 10 whole-cube rotations which are their own inverse.
Generating all twists from the U twist    With the help of WCRs we can generate the other twists from the U twist only: We simply rotate the face that we want to twist to the up-face,
Table 6: Whole-cube rotations: <Key, InverseKey> relation

key      0  1  2  3   4  5  6  7   8  9 10 11  12 13 14 15  16 17 18 19  20 21 22 23
inv key  0  3  2  1  12 19  6 21   8  9 10 11   4 23 14 17  20 15 18  5  16  7 22 13
apply the U twist and rotate back. This reads in first-trafo-first notation:

    L = f−1 U f        (11)
    F = ℓ U ℓ−1        (12)
    D = f2 U f2        (13)
    R = f U f−1        (14)
    B = ℓ−1 U ℓ        (15)

Thus, given the U twist from Table 1 or Table 2 and the basic WCRs given in Table 3 and Eq. (10), we can calculate all other forward transformations with the help of Eqs. (11)–(15). Then, all inverse transformations are calculable with the help of Eq. (9).
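With the forward maps stored as integer arrays, "first trafo first" is just permutation composition, and Eqs. (9)–(11) can be checked directly against the tables. A sketch with the 2x2x2 arrays transcribed from Tables 1 and 3 (variable names are ours):

```python
# Forward maps T[i] transcribed from Table 1 (U) and Table 3 (u, f)
U = [1,2,3,0, 11,8,6,7, 18,9,10,17, 12,13,14,15, 16,22,23,19, 20,21,4,5]
u = [1,2,3,0, 11,8,9,10, 18,19,16,17, 15,12,13,14, 21,22,23,20, 6,7,4,5]
f = [7,4,5,6, 14,15,12,13, 9,10,11,8, 17,18,19,16, 2,3,0,1, 23,20,21,22]

def compose(A, B):
    """First-trafo-first: the sticker at i goes to A[i], then to B[A[i]]."""
    return [B[a] for a in A]

def invert(T):
    """Eq. (9): T_inv[T[i]] = i."""
    T_inv = [0] * len(T)
    for i, t in enumerate(T):
        T_inv[t] = i
    return T_inv

l = compose(compose(f, u), invert(f))   # Eq. (10): l = f u f^-1
L = compose(compose(invert(f), U), f)   # Eq. (11): L = f^-1 U f

assert L[0] == 22 and invert(L)[0] == 9      # matches the Table 1 example
assert all(L[i] == i for i in (12, 16, 20))  # the (DRB)-cubie stays in place
```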
2.4.3  Color Transformations

Color transformations are special transformations that allow us to discover non-trivial symmetric (equivalent) states.
One way to describe a color transformation is to select a valid color permutation and to paint each sticker with the new color according to this color permutation. This is of course nothing one can do with a real cube without destroying or altering it, but it is a theoretical concept leading to an equivalent state.
Another way of looking at it is to record the twist sequence that leads from the default cube to a certain scrambled cube state. Then we go back to the default cube, first make a whole-cube rotation (leading to a color-transformed default cube) and then apply the recorded twist sequence to the color-transformed default cube.
In any case, the transformed cube will usually not be in its normal position, so we finally apply a normalizing operation to it.
What are valid color permutations? — These are the permutations of the cube colors reachable when applying one of the available 24 WCRs (Table 4) to the default cube. For example, if we apply WCR f (number 04) to the default cube, we get the color permutation shown in Figure 6:

Figure 6: The color transformation according to WCR f (number 04)

that is, g (green) is the new color for each up-sticker that was w (white) before, and so on. The colors o and r remain untouched under this color permutation. [However, other transformations like fu, fu2 and fu3 will change every color.]
Figure 7: The cube of Fig. 5 before color transformation.
Figure 8: The cube of Fig. 7 with the color transformation from Fig. 6: (a) before normalization, (b) after normalization.
How can we apply a color transformation to a cube state programmatically? — We denote with f′c and s′ℓ the new states of fc and sℓ after the transformation. The following relations allow us to calculate the transformed cube state:

    f′c[i] = c[fc[i]]        (16)
    s′ℓ[s−1ℓ[i]] = T[i]        (17)

where c[] is the 6-element color trafo vector (holding the new colors for the current colors 0:w, 1:b, ..., 5:r) and T is the 24- or 48-element vector of the WCR that produces this color transformation. Eq. (16) is simple: If a certain sticker has color 0 (w, white) before the color transformation, then it will get the new color c[0], e.g. 4 (g, green), after the transformation. Eq. (17) looks complicated, but it has a similar meaning as in the twist trafo. Take i = 0 as example: The new place for the sticker being at 0 before the trafo (and coming from s−1ℓ[0]) is T[0]. Therefore, we write the number T[0] into s′ℓ[s−1ℓ[0]].
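Eqs. (16) and (17) mirror the twist update, with an additional recoloring step. A sketch (names ours, not GBG's):

```python
def color_transform(fc, sl, c, T):
    """Apply a color transformation given the 6-element color map c
    and the forward map T of the WCR that produces it."""
    n = len(fc)
    fc_new = [c[fc[i]] for i in range(n)]   # Eq. (16): repaint sticker colors
    sl_inv = [None] * n
    for i, loc in enumerate(sl):
        sl_inv[loc] = i
    sl_new = list(sl)
    for i in range(n):
        sl_new[sl_inv[i]] = T[i]            # Eq. (17): same form as Eq. (8)
    return fc_new, sl_new
```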
A color transformation example is shown in Figs. 7 and 8. Fig. 7 is just a replication of Fig. 5 showing the default cube after a U1 twist. The color transformation number 04 applied to the cube of Fig. 7 is shown in Fig. 8 (a)-(b) in two steps:

(a) The stickers are re-painted and re-numbered (white becomes green, blue becomes white and so on). The structure of the coloring is the same as in Fig. 7. Now the (DRB)-cubie is no longer the (ygr)-cubie; it does not carry the numbers (12,16,20).

(b) We apply the proper WCR that brings the (ygr)-cubie back to the (DRB)-location. Compared to (a), each 4-sticker cube face is just rotated to another face, but not changed internally. We can check that the (DRB)-location again carries the numbers (12,16,20), as in Fig. 7 and as it should for a normalized cube.
2.5  Symmetries

Symmetries are transformations of the game state (and the attached action, if applicable) that lead to equivalent states. That is, if s is a certain state with value V(s), then all states ssym being symmetric to s have the same value V(ssym) = V(s) because they are equivalent. Equivalent means: If s can be solved by a twist sequence of length n, then ssym can be solved by an equivalent twist sequence of the same length n.
In the case of Rubik's cube, all whole-cube rotations (WCRs) are symmetries because they do not change the value of a state. But whole-cube rotations are 'trivial' symmetries because they are usually factored out by the normalization of the cube: After 2x2x2 cube normalization, which brings the (ygr)-cubie into a certain position, or after 3x3x3 cube normalization, which brings the center cubies into certain faces, all WCR-symmetric states are transformed to the same state.
Non-trivial symmetries are all color transformations (Sec. 2.4.3): In general, color transformations transform a state s to a truly different state ssym, even after cube normalization.6 Since there are 24 color transformations in Rubik's cube, there are also 24 non-trivial symmetries (including self).
Symmetries are useful for learning to solve Rubik's cube for two reasons: (a) to accelerate learning and (b) to smooth an otherwise noisy value function.

(a) Accelerated learning: If a state s (or state-action pair) is observed, not only the weights activated by that state are updated, but also the weights of all symmetric states ssym, because they have the same V(ssym) = V(s) and thus the same reward. In this way, a single observed sample is connected with more weight updates (better sample efficiency).

(b) Smoothed value function: By this we mean that the value function V(s) is replaced by

    V(sym)(s) = (1/|Fs|) Σ_{s′∈Fs} V(s′)        (18)

where Fs is the set of states being symmetric to s. If V(s) were the ideal value function, both terms V(s) and V(sym)(s) would be the same.7 But in a real n-tuple network, V(s) is non-ideal due to n-tuple noise (cross-talk from other states that activate the same n-tuple LUT entries). If we average over the symmetric states s′ ∈ Fs, the noise will be dampened.

6 In rare cases – e.g. for the solved cube – the transformed state may be identical to s or to another symmetry state, but this happens seldom for sufficiently scrambled cubes, see Sec. 6.3.
7 because all V(s′) in Eq. (18) are the same for an ideal V
The downside of symmetries is their computational cost: In the case of Rubik's cube, the calculation of color transformations is a costly operation. On the other hand, the number of training episodes necessary to reach a certain performance may be reduced. In the end, the use of symmetries may pay off, because the total training time may be reduced as well. In any case, we will have better sample efficiency, since we learn more from each observed state or state-action pair. Secondly, the smoothing effect introduced with Eq. (18) can lead to better overall performance, because the smoothed value function provides better guidance on the path towards the solved cube.
In order to balance computation time, GBG offers the option to select with nSym the number of symmetries actually used. If we specify for example nSym = 8 in GBG's Rubik's cube implementation, then the state itself and 8 − 1 = 7 random other (non-id) color transformations will be selected. The resulting set Fs of 8 states is then used for weight update and value function computation.
+ 3
1162
+ N-Tuple Systems
1163
+ N-tuple systems coupled with TD were first applied to game learning by Lucas (2008), al-
1164
+ though n-tuples were already introduced by Bledsoe and Browning (1959) for character
1165
+ recognition purposes. The remarkable success of n-tuples in learning to play Othello (Lu-
1166
+ cas, 2008) motivated other authors to benefit from this approach for a number of other
1167
+ games.
1168
+ The main goal of n-tuple systems is to map a highly non-linear function in a low di-
1169
+ mensional space to a high dimensional space where it is easier to separate ‘good’ and
1170
+ ‘bad’ regions. This can be compared to the kernel trick of support-vector machines. An
1171
+ n-tuple is defined as a sequence of n cells of the board. Each cell can have m positional
1172
+ values representing the possible states of that cell.8 Therefore, every n-tuple will have a
1173
+ (possibly large) look-up table indexed in form of an n-digit number in base m. Each entry
1174
+ corresponds to a feature and carries a trainable weight. An n-tuple system is a system
1175
+ consisting of k n-tuples. As an example we show in Fig. 9 an n-tuple system consisting of
1176
+ four 8-tuples.
1177
+ Let Θ be the vector of all weights θi of the n-tuple system.9 The length of this vector
1178
+ may be a large number, e.g. m^n · k, if all k n-tuples have the same length n and each cell
1179
+ has m positional values. Let Φ(s) be a binary vector of the same length representing the
1180
+ feature occurrences in state s (that is, Φi(s) = 1 if in state s the cell of a specific n-tuple as
+ indexed by i has the positional value as indexed by i, and Φi(s) = 0 otherwise). The value function
1182
+ of the n-tuple network given state s is
1183
+ V (s) = σ (Φ(s) · Θ)
1184
+ (19)
1185
+ with transfer function σ which may be a sigmoidal function or simply the identity function.
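The table lookup behind Eq. (19) can be sketched as follows (a minimal illustration with one weight table per n-tuple and σ = identity; all names are ours):

```python
def ntuple_value(board, ntuples, luts, m):
    """V(s) = sigma(Phi(s) . Theta): sum the one activated weight per n-tuple.
    board   : positional value in {0..m-1} for every board cell
    ntuples : list of cell-index tuples, one per n-tuple
    luts    : one weight table of length m**n per n-tuple
    """
    total = 0.0
    for cells, lut in zip(ntuples, luts):
        idx = 0
        for c in cells:          # read the n cells as an n-digit number in base m
            idx = idx * m + board[c]
        total += lut[idx]        # exactly one weight per n-tuple is active
    return total                 # sigma = identity in this sketch
```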
1186
+ 8A typical example is a 2-player board game, where we usually have 3 positional values {0: empty, 1:
1187
+ player1, 2: player2 }. But other, user-defined values are possible as well.
1188
+ 9The index i indexes three qualities: an n-tuple, a cell in this n-tuple and a positional value for this cell.
1189
+ 16
1190
+
1191
+ Figure 9: Example n-tuples: We show 4 random-walk 8-tuples on a 6x7 board. The tuples are
1192
+ selected manually to show that not only snake-like shapes are possible, but also bifurcations
1193
+ or cross shapes. Tuples may or may not be symmetric.
1194
+ An agent using this n-tuple system derives a policy from the value function in Eq. (19)
1195
+ as follows: Given state s and the set A(s) of available actions in state s, it applies with a
1196
+ forward model f every action a ∈ A(s) to state s, yielding the next state s′ = f(s, a). Then
1197
+ it selects the action that maximizes V (s′).
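In code, this greedy afterstate policy is a one-liner (sketch; f is the forward model and V the value function of Eq. (19)):

```python
def greedy_action(s, available_actions, f, V):
    # apply every available action a to state s and pick the one whose
    # afterstate s' = f(s, a) maximizes V(s')
    return max(available_actions, key=lambda a: V(f(s, a)))
```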
1198
+ Each time a new agent is constructed, all n-tuples are either created in fixed, user-
1199
+ defined positions and shapes, or they are formed by random walk. In a random walk, all
1200
+ cells are placed randomly with the constraint that each cell must be adjacent10 to at least
1201
+ one other cell in the n-tuple.
1202
+ Agent training proceeds in the TD-n-tuple algorithm as follows: Let s′ be the actual
1203
+ state generated by the agent and let s be the previous state generated by this agent. TD(0)
1204
+ learning adapts the value function with model parameters Θ through (Sutton and Barto,
1205
+ 1998)
1206
+ Θ ← Θ + αδ∇ΘV (s)
1207
+ (20)
1208
+ Here, α is the learning rate and V is in our case the n-tuple value function of Eq. (19). δ is
1209
+ the usual TD error (Sutton and Barto, 1998) after the agent has acted and generated s′:
1210
+ δ = r + γV (s′) − V (s)
1211
+ (21)
1212
+ where the sum of the first two terms, reward r plus the discounted value γV (s′), is the
1213
+ desirable target for V (s).
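Since V is linear in Θ for the identity transfer function, the gradient ∇ΘV(s) is just Φ(s), and one TD(0) step combining Eqs. (20) and (21) can be sketched as:

```python
def td0_step(theta, phi_s, phi_snext, r, alpha, gamma):
    # linear value function: V(s) = Phi(s) . Theta
    v_s     = sum(t * p for t, p in zip(theta, phi_s))
    v_snext = sum(t * p for t, p in zip(theta, phi_snext))
    delta = r + gamma * v_snext - v_s                  # TD error, Eq. (21)
    # weight update, Eq. (20): grad of linear V is Phi(s)
    return [t + alpha * delta * p for t, p in zip(theta, phi_s)]
```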
1214
+ 10The form of adjacency, e. g. 4- or 8-point neighborhood or any other (might be cell-dependent) form of
1215
+ adjacency, is user-defined.
1216
+ 17
1217
+
1218
+ 4
1231
+ N-Tuple Representations for the Cube
1232
+ In order to apply n-tuples to cubes, we have to define a board in one way or the other on
1233
+ which we can place the n-tuples. This is not as straightforward as in other board games, but
1234
+ we are free to invent abstract boards. Once we have defined a board, we can number the
1235
+ board cells k = 0, . . . , K−1 and translate a cube state into a BoardVector: A BoardVector
1236
+ b is a vector of K non-negative integer numbers bk ∈ {0, . . . , Nk − 1}. Each k represents
1237
+ a board cell and every board cell k has a predefined number Nk of position values.11
1238
+ A BoardVector is useful to calculate the feature occurrence vector Φ(s) in Eq. (19) for
1239
+ a given n-tuple set: If an n-tuple contains board cell k, then look into bk to get the position
1240
+ value for this cell k. Set Φi(s) = 1 for that index i that indexes this n-tuple cell and this
1241
+ position value.
1242
+ In the following we present different options for boards and BoardVectors. We do this
1243
+ mainly for the 2x2x2 cube, because it is somewhat simpler to explain. But the same ideas
1244
+ apply to the 3x3x3 cube as well, they are just a little bit longer. Therefore, we defer the
1245
+ lengthy details of the 3x3x3 cube to Appendix B.
1246
+ 4.1
1247
+ CUBESTATE
1248
+ A natural way to translate the cube state into a board is to use the flattened representation
1249
+ of Fig. 11 as the board and extract from it the 24-element vector b, according to the given
1250
+ numbering. The kth element bk represents a certain cubie face location and gets a number
1251
+ from {0, . . . , 5} according to its current face color fc. The solved cube is for example
1252
+ represented by b = [0000 1111 2222 . . . 5555].
1253
+ This representation CUBESTATE is what the BoardVecType CUBESTATE in our GBG-
1254
+ implementation means: Each board vector is a copy of fcol, the face colors of all cubie
1255
+ faces. fcol is also the vector that uniquely defines each cube state. An upper bound of
1256
+ possible combinations for b is 6^24 = 4.7 · 10^18. If we factor out the (DRB)-cubie, which
1257
+ always stays at its home position, we can reduce this to 21 board cells with 6 positional
1258
+ values, leading to 6^21 = 2.1 · 10^16 weights. Both numbers are of course way larger than
1259
+ the true number of distinct states (Sec. 2.2.1) which is 3.6 · 10^6. This is because most of
1260
+ the combinations are dead weights in the n-tuple LUTs, they will never be activated during
1261
+ game play.
1262
+ The dead weights occur because many combinations are not realizable, e.g. three
1263
+ white faces in one cubie or any of the 6^3 − 8 · 3 = 192 cubie-face-color combinations that
1264
+ are not present in the real cube. The problem is that the dead weights are scattered in a
1265
+ complicated way among the active weights and it is thus not easy to factor them out.
1266
+ 4.2
1267
+ STICKER
1268
+ McAleer et al. (2019) had the interesting idea for the 3x3x3 cube that 20 stickers (cubie
1269
+ faces) are enough. To characterize the full 3x3x3 cube, we need only one (not 2 or 3) sticker
1270
+ 11In GBG package ntuple2 (base for agent TDNTuple3Agt), all Nk have to be the same. In package
+ ntuple4 (base for agent TDNTuple4Agt), numbers Nk may be different for different k.
1273
+ 18
1274
+
1275
+ (a) Top view
1276
+ (b) Bottom view
1277
+ Figure 10: The sticker representation used to reduce dimensionality: Stickers that are used
1278
+ are shown in white, whereas ignored stickers are dark blue (from McAleer et al. (2019)).
1279
1303
+ Figure 11: Tracked stickers for the 2x2x2 cube (white), while ignored stickers are blue.
1304
+ for every of the 20 cubies, as shown in Fig. 10. This is because the location of one sticker
1305
+ uniquely defines the location and orientation of that cubie. We name this representation
1306
+ STICKER in GBG.
1307
+ Translated to the 2x2x2 cube, this means that 8 stickers are enough because we have
1308
+ only 8 cubies. We may for example track the 4 top stickers 0,1,2,3 plus the 4 bottom
1309
+ stickers 12,13,14,15 as shown in Fig. 11 and ignore the 16 other stickers. Since we always
1310
+ normalize the cube such that the (DRB)-cubie with sticker 12 stays in place, we can reduce
1311
+ this even more to 7 stickers (all but sticker 12).
1312
+ How to lay out this representation as a board? – McAleer et al. (2019) create a rect-
1313
+ angular one-hot-encoding board with 7 × 21 = 147 cells (7 rows for the stickers and 21
1314
+ columns for the locations) carrying only 0’s and 1’s. This is fine for the approach of McAleer
1315
+ et al. (2019), where they use this board as input for a DNN, but not so nice for n-tuples.
1316
+ Without constraints, such a board amounts to 2147 = 1.7 · 1044 combinations, which is
1317
+ unpleasantly large (much larger than in CUBESTATE).12
1318
+ STICKER has more dead weights than CUBESTATE, so it seems like a step back. But
1319
+ the point is that the dead weights are better structured: If for example sticker 0 appears at
1320
+ column 1 then this column and the two other columns for the same cubie are automatically
1321
+ 12A possible STICKER BoardVector for the default cube would read b = [1000000 0100000 0010000 . . . ],
1322
+ meaning that location 0 has the first sticker, location 1 has the second sticker, and so on. In any STICKER
1323
+ BoardVector there are only 7 columns carrying exactly one 1, the other carry only 0’s. Every row carries exactly
1324
+ one 1.
1325
+ 19
1326
+
1327
+ Table 7: The correspondence corner location ↔ STICKER2 for the solved cube. The yellow
1328
+ colored cells show the location of the 7 (2x2x2) and 8 (3x3x3) corner stickers that we track.
1329
+ 2x2x2 location:     0  1  2  3 |  4  5  6  7 |  8  9 10 11 | 12 13 14 15 | 16 17 18 19 | 20 21 22 23
+ 3x3x3 location:     0  2  4  6 |  8 10 12 14 | 16 18 20 22 | 24 26 28 30 | 32 34 36 38 | 40 42 44 46
+ STICKER2 corner:    a  b  c  d |  a  d  h  g |  a  g  f  b |  e  f  g  h |  e  c  b  f |  e  h  d  c
+ STICKER2 face ID:   1  1  1  1 |  2  3  2  3 |  3  2  3  2 |  1  1  1  1 |  2  2  3  2 |  3  3  2  3
1395
+ forbidden for all other stickers. Likewise, if sticker 1 is placed in another column, another set
1396
+ of 3 columns is forbidden, and so on. We can use this fact to form a much more compact
1397
+ representation STICKER2.
1398
+ 4.3
1399
+ STICKER2
1400
+ As the analysis in the preceding section has shown, the 21 location columns of STICKER
1401
+ cannot carry the tracked stickers in arbitrary combinations. Each cubie (represented by 3
1402
+ columns in STICKER) carries only exactly one sticker. We can make this fact explicit by
1403
+ choosing another representation for the 21 locations:
1404
+ corner location = (corner cubie, face ID).
1405
+ That is, each location is represented by a pair: corner cubie a,b,c,d,f,g,h (we number the
1406
+ top cubies with letters a,b,c,d and the bottom cubies with letters e,f,g,h and omit e because
1407
+ it corresponds to the (DRB)-cubie) and a face ID. To number the faces with a face ID, we
1408
+ follow the convention that we start at the top (bottom) face with face ID 1 and then move
1409
+ counter-clockwise around the corner cubie to visit the other faces (2,3). Table 7 shows the
1410
+ explicit numbering in this new representation.
1411
+ To represent a state as board vector we use now a much smaller board shown in
1412
+ Table 8: Each cell in the first row has 7 position values (the letters) and each cell in the
1413
+ second row has 3 position values (the face IDs). We show in Table 8 the board vector for
1414
+ the default cube, b = [abcdfgh 1111111]. Representation STICKER2 allows for 7^7 · 3^7 =
+ 1.8 · 10^9 combinations in total, which is much smaller than STICKER and CUBESTATE.
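The combination counts quoted for the three representations are easy to verify (these are upper bounds on weight-table index spaces, not counts of reachable states):

```python
cubestate = 6 ** 21            # CUBESTATE with the (DRB)-cubie factored out
sticker   = 2 ** 147           # one-hot STICKER board, unconstrained
sticker2  = 7 ** 7 * 3 ** 7    # STICKER2: 7 corner letters x 3 face IDs

# sticker2 (~1.8e9) < cubestate (~2.1e16) < sticker (~1.7e44)
print(sticker2, cubestate, sticker)
```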
1416
+ Table 8: STICKER2 board representation for the default 2x2x2 cube. For the BoardVector,
1417
+ cells are numbered row-by-row from 0 to 16.
1418
+ corner:    a  b  c  d  f  g  h    (7 positions each)
+ face ID:   1  1  1  1  1  1  1    (3 positions each)
1436
+ STICKER2 has some dead weights remaining, because the combinations can carry
1437
+ the same letter multiple times, which is not allowed for a real cube state. But this rate of
1438
+ dead weights is tolerable.
1439
+ It turns out that STICKER2 is in all aspects better than CUBESTATE or STICKER.
1440
+ Therefore, we will only report the results for STICKER2 in the following.
1441
+ 20
1442
+
1443
+ 4.4
1444
+ Adjacency Sets
1445
+ To create n-tuples by random walk, we need adjacency sets (sets of neighbors) to be
1446
+ defined for every board cell k.
1447
+ For CUBESTATE, the board is the flattened representation of the 2x2x2 cube (Fig. 3).
1448
+ The adjacency set is defined as the 4-point neighborhood, where two stickers are neigh-
1449
+ bors if they share a common edge on the cube, i.e. are neighbors on the cube.
1450
+ For STICKER2, the board consists of 16 cells shown in Table 8. Here, the adjacency
1451
+ set for cell k contains all other cells different from k.
1452
+ Again, the details of ideas similar to Sec. 4.1–4.4, but now for the 3x3x3 cube, are
1453
+ shown in Appendix B.1–B.4.
1454
+ 5
1455
+ Learning the Cube
1456
+ 5.1
1457
+ McAleer and Agostinelli
1458
+ The works of McAleer et al. (2018, 2019) and Agostinelli et al. (2019) contain up to now
1459
+ the most advanced methods for learning to solve the cube from scratch. Agostinelli et al.
1460
+ (2019) introduce the cost-to-go function for a general Markov decision process
+ J(s) = min_{a∈A(s)} Σ_{s′} P^a(s, s′) [ g^a(s, s′) + γJ(s′) ]
+ (22)
1470
+ where P^a(s, s′) is the probability of transitioning from state s to s′ by taking action a and
+ g^a(s, s′) is the cost for this transition. In the Rubik’s cube case, we have deterministic tran-
1472
+ sitions, that is s′ = f(s, a) is deterministically prescribed by a forward model f. Therefore,
1473
+ the sum reduces to one term and we specialize to γ = 1. Furthermore, we set g^a(s, s′) = 1,
1474
+ because only the length of the solution path counts, so that we get the simpler equation
1475
+ J(s) = min_{a∈A(s)} [ 1 + J(s′) ]   with   s′ = f(s, a).
+ (23)
1483
+ Here, A(s) is the set of available actions in state s. We additionally set J(s∗) = 0 if s∗
1484
+ is the solved cube. To better understand Eq. (23) we look at a few examples: If s1 is a state
1485
+ one twist away from s∗, Eq. (23) will find this twist and set J(s1) = 1. If s2 is a state two
1486
+ twists away from s∗ and all one-twist states have already their correct labels J(s1) = 1,
1487
+ then Eq. (23) will find the twist leading to an s1 state and set J(s2) = 1 + 1 = 2. While
1488
+ iterations proceed, more and more states (being further away from s∗) will be correctly
1489
+ labeled, once their preceding states are correctly labeled. In the end we should ideally
1490
+ have
1491
+ J(sn) = n.
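This labeling process is ordinary value iteration. On a toy deterministic graph standing in for the cube's state space, Eq. (23) can be sketched as:

```python
def value_iteration(states, actions, f, goal, sweeps):
    # tabular Eq. (23): J(s) = min_a [1 + J(f(s, a))], with J(goal) = 0 fixed
    J = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            if s != goal:
                J[s] = min(1.0 + J[f(s, a)] for a in actions)
    return J
```

On a 5-state cycle with moves ±1, J converges to the shortest distance to the goal state.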
1492
+ However, the number of states for Rubik’s cube is too large to store them all in tabular
1493
+ form. Therefore, McAleer et al. (2019) and Agostinelli et al. (2019) approximate J(s) with
1494
+ a deep neural network (DNN). To train such a network in the Rubik’s cube case, they
1495
+ 21
1496
+
1497
+ Algorithm 1 DAVI algorithm (from Agostinelli et al. (2019)). Input: B: batch size, K:
1498
+ maximum number of twists, M: training iterations, C: how often to check for convergence,
1499
+ ϵ: error threshold. Output: Θ, the trained neural network parameters.
1500
+ 1: function DAVI(B, K, M, C, ϵ)
1501
+ 2:
1502
+ Θ ← INITIALIZENETWORKPARAMETERS
1503
+ 3:
1504
+ ΘC ← Θ
1505
+ 4:
1506
+ for m = 1, . . . , M do
1507
+ 5:
1508
+ X ←GENERATESCRAMBLEDSTATES(B, K)
1509
+ ▷ B scrambled cubes
1510
+ 6:
1511
+ for xi ∈ X do
1512
+ 7:
1513
+ yi ← min_{a∈A(s)} [1 + jΘC(f(xi, a))]
1514
+ ▷ cost-to-go function, Eq. (23)
1515
+ 8:
1516
+ (Θ, loss) ← TRAIN(jΘ, X, y)
1517
+ ▷ loss = MSE(jΘ(xi), yi)
1518
+ 9:
1519
+ if (m mod C = 0 & loss < ϵ) then
1520
+ 10:
1521
+ ΘC ← Θ
1522
+ 11:
1523
+ return Θ
1524
+ introduce Deep Approximate Value Iteration (DAVI)13 shown in Algorithm 1. The network
1525
+ output jΘ(s) is trained in line 8 to approximate the (unknown) cost-to-go J(s) for every
1526
+ state s = xi. The main trick of DAVI is, as Agostinelli et al. (2019) write: „For learning to
1527
+ occur, we must train on a state distribution that allows information to propagate from the
1528
+ goal state to all the other states seen during training. Our approach for achieving this is
1529
+ simple: each training state xi is obtained by randomly scrambling the goal state ki times,
1530
+ where ki is uniformly distributed between 1 and K. During training, the cost-to-go function
1531
+ first improves for states that are only one move away from the goal state. The cost-to-go
1532
+ function then improves for states further away as the reward signal is propagated from the
1533
+ goal state to other states through the cost-to-go function.“
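Line 7 of Algorithm 1, the bootstrapped cost-to-go target, can be sketched as follows (jΘC is the frozen network copy; names are ours, and we apply the convention J(s∗) = 0 from Eq. (23)):

```python
def davi_target(x, actions, f, j_theta_c, goal):
    # y_i = min_a [1 + jThetaC(f(x_i, a))]; the solved cube costs 0
    if x == goal:
        return 0.0
    return min(1.0 + j_theta_c(f(x, a)) for a in actions)
```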
1534
+ Agostinelli et al. (2019) use in Algorithm 1 two sets of parameters to train the DNN: the
1535
+ parameters Θ being trained and the parameters ΘC used to obtain improved estimates
1536
+ of the cost-to-go function. If they did not use these two separate sets, performance often
1537
+ „saturated after a certain point and sometimes became unstable. Updating ΘC only after
1538
+ the error falls below a threshold ϵ yields better, more stable, performance.“ (Agostinelli
1539
+ et al., 2019) To train the DNN, they used M = 1 000 000 iterations, each with batch size
1540
+ B = 10 000. Thus, the trained DNN has seen ten billion cubes (10^10) during training, which
+ is still only a small subset of the 4.3 · 10^19 possible cube states.
1542
+ The heuristic function of the trained DNN alone cannot solve 100% of the cube states.
1543
+ Especially for higher twist numbers ki, an additional solver or search algorithm is needed.
1544
+ This is in the case of McAleer et al. (2019) a Monte Carlo Tree Search (MCTS), similar to
1545
+ AlphaZero (Silver et al., 2017), which uses the DNN as the source for prior probabilities.
1546
+ Agostinelli et al. (2019) use instead a variant of A∗-search, which is found to produce
1547
+ solutions with a shorter path in a shorter runtime than MCTS.
1548
+ 13More precisely, McAleer et al. (2019) use Autodidactic Iteration (ADI), a precursor to DAVI, very similar to
1549
+ DAVI, just a bit more complicated to explain. Therefore, we describe here only DAVI.
1550
+ 22
1551
+
1552
+ Algorithm 2 TD-n-tuple algorithm for Rubik’s cube. Input: pmax: maximum number of
1553
+ twists, M: training iterations, Etrain: maximum episode length during training, c: nega-
1554
+ tive cost-to-go, Rpos: positive reward for reaching the solved cube s∗, α: learning rate.
1555
+ jΘ(s): n-tuple network value prediction for state s. Output: Θ, the trained n-tuple network
1556
+ parameters.
1557
+ 1: function TDNTUPLE(pmax, M, Etrain, c, Rpos)
1558
+ 2:
1559
+ Θ ← INITIALIZENETWORKPARAMETERS
1560
+ 3:
1561
+ for m = 1, . . . , M do
1562
+ 4:
1563
+ p ∼ U(1, . . . , pmax)
1564
+ ▷ Draw p uniformly random from {1, 2, . . . , pmax}
1565
+ 5:
1566
+ s ← SCRAMBLESOLVEDCUBE(p)
1567
+ ▷ start state
1568
+ 6:
1569
+ for k = 1, . . . , Etrain do
1570
+ 7:
+ snew ← arg max_{a∈A(s)} V(s′)   with   s′ = f(s, a)   and
+ 8:
+ V(s′) = c + Rpos if s′ = s∗,   V(s′) = c + jΘ(s′) if s′ ̸= s∗
1585
+ 9:
1586
+ Train network jΘ with Eq. (20) to bring V (s) closer to target T = V (snew):
1587
+ V (s) ← V (s) + α(T − V (s))
1588
+ 10:
1589
+ s ← snew
1590
+ 11:
1591
+ if (s = s∗) then
1592
+ 12:
1593
+ break
1594
+ ▷ break out of k-loop
1595
+ 13:
1596
+ return Θ
1597
+ 23
1598
+
1599
+ 5.2
1600
+ N-Tuple-based TD Learning
1601
+ To solve the Rubik’s cube in GBG we use an algorithm that is on the one hand inspired by
1602
+ DAVI, but on the other hand more similar to traditional reinforcement learning schemes like
1603
+ temporal difference (TD) learning. In fact, we want to use in the end the same TD-FARL
1604
+ algorithm (Konen and Bagheri, 2021) that we use for all other GBG games.
1605
+ We show in Algorithm 2 our method, that we will explain in the following, highlighting
1606
+ also the similarities and dissimilarities to DAVI.
1607
+ First of all, instead of minimizing the positive cost-to-go as in DAVI, we maximize in
1608
+ lines 7-8 a value function V (s′) with a negative cost-to-go. This maximization is functionally
1609
+ equivalent, but more similar to the usual TD-learning scheme. The negative cost-to-go, e.g.
1610
+ c = −0.1, plays the role of the positive 1 in Eq. (23).
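The value target of lines 7–8 of Algorithm 2 can be sketched as (names are ours):

```python
def afterstate_value(s_next, goal, c, r_pos, j_theta):
    # negative cost-to-go c plus either the positive reward R_pos
    # (if s' is the solved cube) or the n-tuple network estimate
    return c + (r_pos if s_next == goal else j_theta(s_next))
```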
1611
+ Secondly, we replace the DNN of DAVI by the simpler-to-train n-tuple network jΘ with
1612
+ STICKER2 representation as described in Sec. 3 and 4.
1613
+ That is, each time jΘ(s′) is
1614
+ requested, we first calculate for state s′ the BoardVector in STICKER2 representation,
1615
+ then the occurrence vector Φ(s′) and the value function V (s′) according to Eq. (19).
1616
+ The central equations for V (s′) in Algorithm 2, lines 7-8, work similar to Eq. (23) in
1617
+ DAVI: If s = s1 is a state one twist away from s∗, the local search in arg max V (s′) will find
1618
+ this twist and the training step in line 9 moves V (s) closer to c+Rpos.14 Likewise, neighbors
1619
+ s2 of s1 will find s1 and thus move V (s2) closer to 2c + Rpos. Similar for s3, s4, . . . under
1620
+ the assumption that a ’known’ state is in the neighborhood. We have a clear gradient on
1621
+ the path towards the solved cube s∗. If there are no ’known’ states in the neighborhood
1622
+ of sn, we get for V (sn) what the net maximally estimates for all those neighbors. We pick
1623
+ the neighbor with the highest estimate, wander around randomly until we hit a state with a
1624
+ ’known’ neighbor or until we reach the limit Etrain of too many steps.
1625
+ Note that Algorithm 2 is different from DAVI insofar as it follows the path s → s′ → . . .
1626
+ as prescribed by the current V , which may lead to a state sequence ’wandering in the
1627
+ unknown’ until Etrain is reached. In contrast to that, DAVI generates many start states s0
1628
+ drawn from the distribution of training set states and trains the network just on pairs (s0, T),
1629
+ i.e. they do just one step on the path. We instead follow the full path, because we want
1630
+ the training method for Rubik’s cube to be as similar as possible to the training method for
1631
+ other GBG games.15
1632
+ Algorithm 2 is basically the same algorithm as GBG uses for other games. The only
1633
+ differences are (i) the cube-specific start state selection borrowed from DAVI (a 1-twist start
1634
+ state has the same probability as a 10-twist start state) and (ii) the cube-specific reward in
1635
+ line 8 of Algorithm 2 with its negative cost-to-go c, which is however a common element of
1636
+ many RL rewards.
1637
+ Algorithm 2 currently learns with only one parameter vector Θ. However, it could be
1638
+ extended as in DAVI to two parameter vectors Θ and ΘC. The weight training step in line
1639
+ 14It is relevant that Rpos is a positive number, e.g. 1.0 (and not 0, as it was for DAVI). This is because we
1640
+ start with an initial n-tuple network with all weights set to 0, so the initial response of the network to any state
1641
+ is 0.0. Thus, if Rpos were 0, a one-twist state would see all its neighbors (including s∗) initially as responding
1642
+ 0.0 and would not learn the right transition to s∗. With Rpos = 1.0 it will quickly find s∗.
1643
+ 15We note in passing that we tested the DAVI variant with Etrain = 1 for our TD-n-tuple method as well.
1644
+ However, we found that this method gave much worse results, so we stick with our GBG method here.
1645
+ 24
1646
+
1647
+ 9 is done with the help of Eq. (20) for Θ using the error signal δ of Eq. (21).
1648
+ There are two extra elements, TCL and MCTS, that complete our n-tuple-based TD
1649
+ learning. They are described in the next two subsections.
1650
+ 5.2.1
1651
+ Temporal Coherence Learning (TCL)
1652
+ The TCL algorithm developed by Beal and Smith Beal and Smith (1999) is an extension
1653
+ of TD learning. It replaces the global learning rate α with the weight-individual product
1654
+ ααi for every weight θi. Here, the adjustable learning rate αi is a free parameter set by a
1655
+ pretty simple procedure: For each weight θi, two counters Ni and Ai accumulate the sum
1656
+ of weight changes and the sum of absolute weight changes. If all weight changes have the
1657
+ same sign, then αi = |Ni|/Ai = 1, and the learning rate stays at its upper bound. If weight
1658
+ changes have alternating signs, then the global learning rate is probably too large. In this
1659
+ case, αi = |Ni|/Ai → 0 for t → ∞, and the effective learning rate will be largely reduced
1660
+ for this weight.
1661
+ In our previous work (Bagheri et al., 2015) we extended TCL to αi = g(|Ni|/Ai) where
1662
+ g is a transfer function being either the identity function (standard TCL) or an exponential
1663
+ function g(x) = eβ(x−1). It was shown in Bagheri et al. (2015) that TCL with this exponential
1664
+ transfer function leads to faster learning and higher win rates for the game ConnectFour.
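Both variants of the weight-individual learning rate αi = g(|Ni|/Ai) can be sketched as:

```python
import math

def tcl_rate(n_i, a_i, beta=None):
    """alpha_i = g(|N_i| / A_i): identity g is standard TCL,
    g(x) = exp(beta * (x - 1)) is the exponential variant."""
    if a_i == 0:
        return 1.0                     # no weight changes accumulated yet
    x = abs(n_i) / a_i                 # 1 if all changes share a sign, -> 0 if they alternate
    return x if beta is None else math.exp(beta * (x - 1.0))
```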
1665
+ 5.2.2
1666
+ MCTS
1667
+ We use Monte Carlo Tree Search (MCTS) (Browne et al., 2012) to augment our trained
1668
+ network during testing and evaluation. This is the method also used by McAleer et al.
1669
+ (2019) and by AlphaGo Zero (Silver et al., 2017), but they use it also during training.
1670
+ MCTS builds iteratively a search tree starting with a tree containing only the start state
1671
+ s0 as the root node. Until the iteration budget is exhausted, MCTS does the following: In
1672
+ every iteration we start from the root node and select actions following the tree policy until
1673
+ we reach a yet unexpanded leaf node sℓ. The tree policy is implemented in our MCTS
1674
+ wrapper according to the UCB formula (Silver et al., 2017):
1675
+ anew = arg max_{a∈A(s)} [ W(s, a)/N(s, a) + U(s, a) ]
+ (24)
+ U(s, a) = cpuct · P(s, a) · sqrt( ε + Σ_{b∈A(s)} N(s, b) ) / (1 + N(s, a))
+ (25)
1691
+ Here, W(s, a) is the accumulator for all backpropagated values that arrive along branch
1692
+ a of the node that carries state s. Likewise, N(s, a) is the visit counter and P(s, a) the prior
1693
+ probability. A(s) is the set of actions available in state s. ε is a small positive constant for
1694
+ the special case �
1695
+ b N(s, b) = 0: It guarantees that in this special case the maximum of
1696
+ U(s, a) is given by the maximum of P(s, a). The prior probabilities P(s, a) are obtained
1697
+ 25
1698
+
1699
+ Algorithm 3 TD-n-tuple training algorithm. Input: see Algorithm 2. Output: Θ: trained
1700
+ n-tuple network parameters.
1701
+ 1: function TDNTUPLETRAIN(pmax, M, Etrain, c, Rpos)
1702
+ 2:
1703
+ Θ ← INITIALIZENETWORKPARAMETERS
1704
+ 3:
1705
+ INITIALIZETCLPARAMETERS
1706
+ ▷ Set TCL-accumulators Ni = Ai = 0, αi = 1 ∀i
1707
+ 4:
1708
+ for m = 1, . . . , M do
1709
+ 5:
1710
+ Perform one m-iteration of Algorithm 2 with learning rates ααi instead of α
1711
+ 6:
1712
+ Ni ← Ni + ∆θi and Ai ← Ai + |∆θi|
1713
+ ▷ Update TCL-accumulators
1714
+ 7:
1715
+ ▷ where ∆θi is the last term in Eq. (20)
1716
+ 8:
1717
+ αi ← |Ni|/Ai
1718
+ ∀i with Ai ̸= 0
1719
+ 9:
1720
+ return Θ
1721
+ Algorithm 4 Evaluation algorithm with MCTS solver. Input: trained n-tuple network jΘ,
1722
+ p: number of scrambling twists, B: batch size, Eeval: maximum episode length during
1723
+ evaluation, I: number of MCTS-iterations, cPUCT : relative weight for U(s, a) in Eq. (24),
1724
+ dmax: maximum MCTS tree depth. Output: solved rate.
1725
+ 1: function TDNTUPLEEVAL(jΘ, p, B, Eeval, I, cPUCT , dmax)
1726
+ 2:
1727
+ X ←GENERATESCRAMBLEDCUBES(B, p)
1728
+ ▷ B scrambled cubes
1729
+ 3:
1730
+ Csolved ← 0
1731
+ 4:
1732
+ for xi ∈ X do
1733
+ 5:
1734
+ s ← xi
1735
+ 6:
1736
+ for k = 1, . . . , Eeval do
1737
+ 7:
1738
+ T ← PERFORMMCTSSEARCH(s, I, cPUCT , dmax, jΘ)
1739
+ 8:
1740
+ a ← SELECTMOSTVISITEDACTION
1741
+ 9:
1742
+ s ← f(s, a)
1743
+ 10:
1744
+ if (s = s∗) then
1745
+ 11:
1746
+ Csolved ← Csolved + 1
1747
+ 12:
1748
+ break
1749
+ ▷ break out of k-loop
1750
+ 13:
1751
+ return Csolved/B
1752
+ ▷ percentage solved
1753
+ by sending the trained network’s values of all follow-up states s′ = f(s, a) with a ∈ A(s)
1754
+ through a softmax function (see Sec. 3).16
1755
+ Once an unexpanded leaf node sℓ is reached, the node is expanded by initializing
1756
+ its accumulators: W(s, a) = N(s, a) = 0 and P(s, a) = ps′ where ps′ is the softmax-
1757
+ squashed output jΘ(s′) of our n-tuple network for each state s′ = f(s, a). The value of
1758
+ the node is the network output of the best state jΘ(sbest) = max_{s′} jΘ(s′) and this value is
1759
+ backpropagated up the tree.
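The tree-policy step of Eqs. (24)–(25) can be sketched with per-node statistics stored in dictionaries (names are ours):

```python
import math

def ucb_select(actions, W, N, P, c_puct, eps=1e-8):
    # Eq. (24)/(25): exploitation term W/N plus exploration bonus U
    total = sum(N[a] for a in actions)
    def score(a):
        q = W[a] / N[a] if N[a] > 0 else 0.0
        u = c_puct * P[a] * math.sqrt(eps + total) / (1 + N[a])
        return q + u
    return max(actions, key=score)
```

For an unvisited node (all N = 0), ε makes the selection follow the priors P(s, a), as described above.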
1760
+ More details on our MCTS wrapper can be found in Scheiermann and Konen (2022).
1761
+ 16Note that the prior probabilities and the MCTS iteration are only needed at test time, so that we – different
1762
+ to AlphaZero – do not need MCTS during self-play training.
1763
+ 26
1764
+
1765
+ 5.2.3
1766
+ Method Summary
1767
+ We summarize the different ingredients of our n-tuple-based TD learning method in Algo-
1768
+ rithm 3 (training) and Algorithm 4 (evaluation).
1769
+ In line 5 of Algorithm 3 we perform one m-iteration of Algorithm 2 which does an update
1770
+ step for weight vector Θ, see Eq. (20). All weights of activated n-tuple entries get a weight
1771
+ change ∆θi equal to the last term in Eq. (20) where the global α is replaced by ααi.
1772
+ Line 2 in Algorithm 4 generates a set X of B scrambled cube states. Line 7 builds for
1773
+ each xi ∈ X an MCTS tree (see Sec. 5.2.2) starting from root node xi and line 8 selects
1774
+ the most visited action of the root node. If the goal state s∗ is not found during Eeval k-loop
1775
+ trials, this xi is considered as not being solved.
1776
+ 6
1777
+ Results
1778
+ 6.1
1779
+ Experimental setup
1780
+ We use for all our GBG experiments the same RL method based on n-tuple systems and
1781
+ TCL. Only its hyperparameters are tuned to the specific game, as shown below. We refer
1782
+ to this method/agent as TCL-base whenever it alone is used for game playing. If we wrap
1783
+ such an agent by an MCTS wrapper with a given number of iterations, then we refer to this
1784
+ as TCL-wrap.
1785
+ We investigate two variants of Rubik’s Cube: 2x2x2 and 3x3x3. We trained all TCL
1786
+ agents by presenting them M = 3 000 000 cubes scrambled with p random twists, where
1787
+ p is chosen uniformly at random from {1, . . . , pmax}. Here, pmax = 13 [16] for 2x2x2 and
1788
+ pmax = 9 [13] for 3x3x3, where the first number is for HTM, while the second number
1789
+ in square brackets is for QTM. With these pmax cube twists we cover the complete cube
1790
+ space for 2x2x2, where God’s number (Sec. 2.2) is known to be 11 [14]. But we cover only
1791
+ a small subset in the 3x3x3 case, where God’s number is known to be 20 [26] (Rokicki
1792
+ et al., 2014).17 We train 3 agents for each cube variant { 2x2x2, 3x3x3 } × { HTM, QTM }
1793
+ to assess the variability of training.
1794
+ The hyperparameters of the agent for each cube variant were found by manual fine-
1795
+ tuning. For brevity, we defer the exact explanation and setting of all parameters to Ap-
1796
+ pendix C.
1797
+ We evaluate the trained agents for each p on 200 scrambled cubes that are created by
1798
+ applying the given number p of random scrambling twists to a solved cube. The agent now
1799
+ tries to solve each scrambled cube. A cube is said to be unsolved during evaluation if the
1800
+ agent cannot reach the solved cube in Eeval = 50 steps.18
1801
+ 17We limit ourselves to pmax = 9 [13] in the 3x3x3 HTM [QTM ] case, because our network has not enough
1802
+ capacity to learn all states of the 3x3x3 Rubik’s cube. Experiments with higher twist numbers during training
1803
+ did not improve the solved-rates.
1804
+ 18During training, we use lower maximum episode lengths Etrain (see Appendix C) than Eeval = 50 in
1805
+ order to reduce computation time (in the beginning, many episodes cannot be solved, and 50 would waste a
1806
+ lot of computation time). But Etrain is always at least pmax + 3 in order to ensure that the agent has a fair
1807
+ chance to solve the cube and collect the reward.
1808
+ [Figure 12 plot area: percentage solved vs. scrambling twists, panels HTM and
+ QTM, curves for cubeWidth 2x2x2/3x3x3 and iterMWrap 0/100/800.]
+ Figure 12: Percentage of solved cubes as a function of scrambling twists p for the trained
1856
+ TD-N-tuple agent wrapped by MCTS wrapper with different numbers of iterations. The red
1857
+ curves are TCL-base without wrapper, the other colors show different forms of TCL-wrap.
1858
+ Twist type is HTM (left) and QTM (right). Each point is the average of 3 independently trained
1859
+ agents.
1860
+ 6.2
1861
+ Cube Solving with MCTS Wrapper, without Symmetries
1862
+ The trained TD-N-tuple agents learn to solve the cubes to some extent, as the red curves
1863
+ TCL-base in Fig. 12 show, but they are in many cases (i.e. p > pmax/2) far from being
1864
+ perfect. These are the results from training each agent for 3 million episodes, but the
1865
+ results would not change considerably if 10 million training episodes were used.
+ Scheiermann and Konen (2022) have shown that the performance of TD-N-tuple
+ agents is largely improved if the trained agents are wrapped during test play
+ and evaluation by an MCTS wrapper. This holds for Rubik’s cube as well, as Fig. 12 shows:
1869
+ For the 2x2x2 cube, the non-wrapped agent TCL-base (red curve) is already quite good,
1870
+ but with wrapping it becomes almost perfect. For the 3x3x3 cube, the red curves are not
1871
+ satisfactory: the solved-rates are below 20% for p = 9 [13] in the HTM [QTM] case. But
1872
+ at least MCTS wrapping boosts the solved-rates by a factor of 3 [QTM: from 16% to 48%]
1873
+ or 4.5 [HTM: from 10% to 45%].
1874
+ All these results are without incorporating symmetries.
1875
+ How symmetries affect the
1876
+ solved-rates will be investigated in Sec. 6.4. But before this, we look in the next section at
1877
+ the number of symmetries that effectively exist in a cube state.
1878
+ 6.3
1879
+ Number of Symmetric States
1880
+ Not every cube state has 24 truly different symmetric states (24 = number of color sym-
1881
+ metries). For example in the solved cube, all color-symmetric states are the same (after
1882
+ normalization). Thus, we have here only one truly different symmetric state.
1883
+ However, we show in this section that for the majority of cube states the number of
1884
+ truly different symmetric states is close to 24. Two states are truly different if they are
1885
+ not the same after the normalizing operation. We generate a cube state by applying p
1886
+ random scrambling twists to the default cube. Now we apply all 24 color transformations
1887
+ (Sec. 2.4.3) to it and count the truly different states. The results are shown in Fig. 13 for
1888
+ [Figure 13 plot area: NsymmetricStates vs. scrambling twists, panels
+ twistType HTM and QTM, curves for cubeWidth 2x2x2 and 3x3x3.]
1913
+ Figure 13: Count of truly different symmetric states for cube states generated by p random
1914
+ scrambling twists. Each point is an average over 500 such states.
1915
+ both cube sizes and both twist types. For the 3x3x3 cube, the number of states quickly (for
1916
+ p > 5) approaches the maximum N = 24, while for the 2x2x2 cube it is a bit slower: p > 4
1917
+ or p > 8 is needed to surpass N = 20.
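The counting procedure of this section can be sketched as follows. The 24 color transformations and the normalization are placeholders here; the toy stand-in below uses integers and modular reduction purely to illustrate the two extreme cases (one truly different state vs. 24).

```python
def count_truly_different(state, transforms, normalize):
    """Apply all color transformations to `state` and count how many
    of the normalized results are truly different."""
    return len({normalize(t(state)) for t in transforms})

# Toy stand-in: "states" are integers, the 24 "color transformations" add
# k = 0..23, and "normalization" reduces modulo m. A fully symmetric state
# (m = 1) has one truly different state; a generic state (m = 24) has 24.
transforms = [lambda s, k=k: s + k for k in range(24)]
assert count_truly_different(0, transforms, lambda s: s % 1) == 1
assert count_truly_different(0, transforms, lambda s: s % 24) == 24
```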
1918
+ As a consequence, it makes sense to use 16 or even 24 symmetries when training
1919
+ and evaluating cube agents. Especially for scrambled states with higher p, the 24 color
1920
+ transformations used to construct symmetric states will usually lead to 24 different states.
1921
+ 6.4
1922
+ The Benefit of Symmetries
1923
+ In order to investigate the benefits of symmetries, we first train a TCL agent with dif-
1924
+ ferent numbers of symmetries. As described in Sec. 2.5, we select in each step nSym
1925
+ = 0, 8, 16, 24 symmetric states; which symmetric states are used is chosen at random.
1926
+ Symmetries are used (a) to update the weights for each symmetric state and (b) to build
1927
+ with Eq. (18) a smoothed value function which is used to decide about the next action dur-
1928
+ ing training. For 0, 8, 16, 24 symmetries, we train 3 agents each (3x3x3 cube, STICKER2,
1929
+ QTM). The 3 agents differ due to their differently created random-walk n-tuple sets.
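One possible reading of the smoothed value function mentioned above is an average of the raw value function over nSym randomly selected symmetric states. The sketch below assumes this reading; `v` and the symmetry transforms are placeholders, and the exact form of Eq. (18) may differ in detail.

```python
import random

def smoothed_value(v, state, symmetries, n_sym):
    """Average the raw value function v over n_sym randomly chosen
    symmetric states of `state` (one possible reading of Eq. (18)).
    n_sym = 0 falls back to the raw value."""
    if n_sym == 0:
        return v(state)
    chosen = random.sample(symmetries, n_sym)
    return sum(v(t(state)) for t in chosen) / n_sym

# Toy check: the symmetric "states" of an integer state s are s + k, k = 0..23.
syms = [lambda s, k=k: s + k for k in range(24)]
assert smoothed_value(lambda s: s, 10.0, syms, 24) == 21.5  # mean of 10..33
assert smoothed_value(lambda s: s, 10.0, syms, 0) == 10.0
```

The weight update (a) would additionally apply the TD update once for each of the chosen symmetric states; that part is omitted here.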
1930
+ Fig. 14 shows the learning curves for different nSym = 0, 8, 16, 24. It is found that agents
1931
+ with nSym > 0 learn faster and achieve a higher asymptotic solved rate.
1932
+ Next, we evaluate each of the trained agents by trying to solve for each p ∈ {1, . . . , 15}
1933
+ (scrambling twists) 200 different scrambled cubes. During evaluation, we use again the
1934
+ same nSym as in training to form a smoothed value function. We compare in Fig. 15 different
1935
+ [Figure 14 plot area: percentage solved vs. training episodes, curves for
+ nSym = 24, 16, 8, 0.]
1953
+ Figure 14: Learning curves for different numbers nSym = 0, 8, 16, 24 of symmetries. Shown is
1954
+ the solved rate of (3x3x3, QTM) cubes. The solved rate is the average over all twist numbers
1955
+ p = 1, . . . , 13 with 200 testing cubes for each p and over 3 agents with different random-walk
1956
+ n-tuple sets.
1957
+ symmetry results, both without wrapping (TCL-base, red curves) and with MCTS-wrapped
1958
+ agents using 100 (green) or 800 (blue) iterations. It is clearly visible that MCTS wrapping
1959
+ has a large effect, as it was also the case in Fig 12. But in addition to that, the use of
1960
+ symmetries leads for each agent, wrapped or not, to a substantial increase in solved-rates
1961
+ (a surplus of 10-20%). It is remarkable that even for p = 14 or 15 a solved rate above or
1962
+ near 50% can be reached19 by the combination (nSym=16, 800 MCTS iterations).
1963
+ Surprisingly, it seems that with wrapping it is only important whether we use symme-
1964
+ tries, not how many, since the difference between nSym = 8, 16, 24 is only marginal. For
1965
+ 800 MCTS iterations, the solved rate for nSym = 24 is in most cases even smaller than that
1966
+ for nSym = 8, 16. This is surprising because one would expect that, also with
+ wrapping, a larger nSym leads to a smoother value function and should thus, in theory,
+ produce larger solved rates. – Note that this is not a contradiction to Fig. 14, because the
+ learning curves were obtained without wrapping, and the red TCL-base curves in Fig. 15
+ (again without wrapping) show the same positive trend with increasing nSym.20
1971
+ curves in Fig. 15 show approximately the same average solved rates as the asymptotic
1972
+ values in Fig. 14.
1973
+ 6.5
1974
+ Computational Costs
1975
+ Table 9 shows the computational costs when training and testing with symmetries. All
1976
+ computations were done on a single CPU Intel i7-9850H @ 2.60GHz. If we subtract the
1977
+ 19p is above pmax=13, the maximum twist number used during training.
1978
+ 20i.e. nSym= 24 is for every p clearly better than nSym= 16
1979
+ [Figure 15 plot area: percentage solved vs. scrambling twists (3x3x3, QTM),
+ curves for nSym = 0, 8, 16, 24 and iterMWrap = 0, 100, 800.]
2007
+ Figure 15: With symmetries: Percentage of solved cubes (3x3x3, QTM) as a function of
2008
+ scrambling twists p for TD-N-tuple agents trained and evaluated with different numbers of
2009
+ symmetries nSym and wrapped by MCTS wrappers with different iterations. The red curves
2010
+ are TCL-base (without wrapper), the other colors show different forms of TCL-wrap. The
2011
+ solved rates are the average over 200 testing cubes for each p and over 3 agents with differ-
2012
+ ent random-walk n-tuple sets.
2013
+ computational costs for nSym = 0, computation time increases more or less linearly with
2014
+ iter and roughly linearly with nSym. Computation times for nSym= 24 are approximately
2015
+ 10x larger than those for nSym= 0.
2016
+ Computation times are dependent on the solved rate: If a cube with p = 13 is solved,
2017
+ the episode normally takes 12-15 steps. If the cube is not solved, the episode needs 50
+ steps, i.e. a factor of 3-4 more. Thus, the numbers in Table 9 should be taken only as a
+ rough indication of the trend.
2020
+ Bottom line: Training time through symmetries increases by a factor of 13/0.5 = 26
2021
+ (nSym= 24) and testing time increases through 800 MCTS iterations by a factor of about
2022
+ 3130/8 ≈ 400.
2023
+ Training with symmetries takes between 5.4h and 13h on a normal CPU, depending
2024
+ on the number of symmetries. This is much less than the 44h on a 32-core server with 3
2025
+ GPUs that were used by McAleer et al. (2019). But it also does not reach the same quality
2026
+ as McAleer et al. (2019).
2027
2029
+ Table 9: Computation times with symmetries. All numbers are for 3x3x3 cube, STICKER2
2030
+ and QTM. Training: 3 million self-play episodes, w/o MCTS in the training loop. Testing: 200
2031
+ scrambled cubes with p = 13, agents wrapped by MCTS wrapper with iter iterations.
2032
+ nSym   training [hours]   testing [seconds]
+                           iter=0   iter=100   iter=400   iter=800
+   0        0.5              0.5       48        196        390
+   8        5.4              4.0      241        877       1400
+  16        9.5              7.3      464       1380       2330
+  24       13.0              8.0      550       1760       3130
2066
+ 7
2067
+ Related Work
2068
+ Ernő Rubik invented Rubik’s cube in 1974. Rubik’s cube has gained worldwide popularity,
+ with many human-oriented algorithms being developed to solve the cube from arbitrary
+ scrambled start states. By ’human-oriented’ we mean algorithms that are simple for
+ humans to memorize. They usually find long, suboptimal solutions. For a long time it was
+ an open question what the minimal number of moves (God’s Number) needed to solve
+ any given cube state is. The early work of Thistlethwaite (1981) put an upper bound on
2074
+ this number with his 52-move algorithm. This was one of the first works to systematically
2075
+ use group theory as an aid to solve Rubik’s cube. Later, several authors have gradually
2076
+ reduced the upper bound 52 (Joyner, 2014), until Rokicki et al. (2014) could prove in 2014
2077
+ for the 3x3x3 cube that God’s Number is 20 in HTM and 26 in QTM.
2078
+ Computer algorithms to solve Rubik’s cube rely often on hand-engineered features and
2079
+ group theory. One popular solver for Rubik’s cube is the two-phase algorithm of Kociemba
2080
+ (2015). A variant of A∗ heuristic search was used by Korf (1991), along with a pattern
2081
+ database heuristic, to find the shortest possible solutions.
2082
+ The problem of letting a computer learn to solve Rubik’s cube turned out to be much
2083
+ harder: Irpan (2016) experimented with different neural net baseline architectures (LSTM
+ reportedly gave the best results) and tried to boost them with AdaBoost. However, he
+ achieved solved rates above 50% only for scrambling twists ≤ 7, and the baseline turned
+ out to be better than the boosted variants. Brunetto and Trunda (2017) found somewhat
+ better results with a DNN: they could solve cube states with 18 twists at a rate above
2088
+ 50%. But they did not learn from scratch because they used an optimal solver based
2089
+ on Kociemba (2015) to generate training examples for the DNN. Smith et al. (2016) tried
2090
+ to learn Rubik’s cube by genetic programming. However, their learned solver could only
2091
+ reliably solve cubes with up to 5 scrambling twists.
2092
+ A breakthrough in learning to solve Rubik’s cube came with the works of McAleer et al. (2018,
2093
+ 2019) and Agostinelli et al. (2019): With Autodidactic Iteration (ADI) and Deep Approxi-
2094
+ mate Value Iteration (DAVI) they were able to learn from scratch to solve Rubik’s cube in
2095
+ QTM for arbitrary scrambling twists. Their method has been explained in detail already
2096
+ in Sec. 5.1, so we highlight here only their important findings: McAleer et al. (2019) need
+ to inspect fewer than 4000 cubes with their trained network DeepCube when solving
2098
2100
+ for a particular cube, while the optimal solver of Korf (1991) inspects 122 billion different
2101
+ nodes, so Korf’s method is much slower.
2102
+ Agostinelli et al. (2019) extended the work of McAleer et al. (2019) by replacing the
2103
+ MCTS solver with a batch-weighted A∗ solver which is found to produce shorter solution
2104
+ paths and have shorter run times. At the same time, Agostinelli et al. (2019) applied their
2105
+ agent DeepCubeA successfully to other puzzles like LightsOut, Sokoban, and the 15-, 24-,
2106
+ 35- and 48-puzzle21. DeepCubeA could solve all of them.
2107
+ The deep networks used by McAleer et al. (2019) and Agostinelli et al. (2019) were
2108
+ trained without human knowledge or supervised input from computerized solvers. The
2109
+ network of McAleer et al. (2019) had over 12 million weights and was trained for 44 hours
2110
+ on a 32-core server with 3 GPUs. The network of McAleer et al. (2019) has seen 8 billion
2111
+ cubes during training. – Our approach started from scratch as well. It required much less
2112
+ computational effort (e.g. 5.4h training time on a single standard CPU for nSym=8, see
2113
+ Table 9). It can solve the 2x2x2 cube completely, but the 3x3x3 cube only partly (up to 15
2114
+ scrambling twists). Each trained agent for the 3x3x3 cube has seen 48 million scrambled
2115
+ cubes22 during training.
2116
+ 8
2117
+ Summary and Outlook
2118
+ We have presented new work on how to solve Rubik’s cube with n-tuple systems, reinforce-
2119
+ ment learning and an MCTS solver. The main ideas were already presented in Scheier-
2120
+ mann and Konen (2022) but only for HTM and up to p = 9 twists. Here we extended
2121
+ this work to QTM as well and presented all the details of cube representation and n-tuple
2122
+ learning algorithms necessary to reproduce our Rubik’s cube results. As a new aspect,
2123
+ we added cube symmetries and studied their effect on solution quality. We found that the
2124
+ use of symmetries boosts the solved rates by 10-20%. Based on this, we could increase
2125
+ for QTM the number of scrambling twists where at least 45% of the cubes are solved from
2126
+ p = 13 without symmetries to p = 15 with symmetries.
2127
+ We cannot solve the 3x3x3 cube completely, as McAleer et al. (2019) and Agostinelli
2128
+ et al. (2019) do. But our solution is much less computationally demanding than their
+ approach.
2130
+ Further work might be to look into larger or differently structured n-tuple systems, per-
2131
+ haps utilizing the staging principle that Jaśkowski (2018) used to produce world-record
2132
+ results in the game 2048.
2133
+ 21a set of 15, 24, ... numbers has to be ordered on a 4 × 4, 5 × 5, ... square with one empty field
2134
+ 22 3 · 10^6 × 16 = training episodes × episode length Etrain. This is an upper bound: some episodes may
2135
+ have shorter length, but each unsolved episode has length Etrain.
2136
2138
+ References
2139
+ F. Agostinelli, S. McAleer, A. Shmakov, and P. Baldi. Solving the Rubik’s cube with deep
+ reinforcement learning and search. Nature Machine Intelligence, 1(8):356–363, 2019.
+ 1, 4, 5, 21, 22, 32, 33
+ S. Bagheri, M. Thill, P. Koch, and W. Konen. Online adaptable learning rates for the game
+ Connect-4. IEEE Transactions on Computational Intelligence and AI in Games, 8(1):
+ 33–42, 2015. 25
+ D. F. Beal and M. C. Smith. Temporal coherence and prediction decay in TD learning. In
+ T. Dean, editor, Int. Joint Conf. on Artificial Intelligence (IJCAI), pages 564–569. Morgan
+ Kaufmann, 1999. ISBN 1-55860-613-0. 25
+ W. W. Bledsoe and I. Browning. Pattern recognition and reading by machine. In
+ Proceedings of the Eastern Joint Computer Conference, pages 225–232, 1959. 16
+ C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen,
+ S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree
+ search methods. IEEE Transactions on Computational Intelligence and AI in Games,
+ 4(1):1–43, 2012. 25
+ R. Brunetto and O. Trunda. Deep heuristic-learning in the Rubik’s cube domain: An
+ experimental evaluation. In ITAT, pages 57–64, 2017.
+ URL http://ceur-ws.org/Vol-1885/57.pdf. 32
+ A. Irpan. Exploring boosted neural nets for Rubik’s cube solving. Technical report,
+ University of California, 2016.
+ URL https://www.alexirpan.com/public/research/nips_2016.pdf. 32
+ W. Jaśkowski. Mastering 2048 with delayed temporal coherence learning, multistage
+ weight promotion, redundant encoding, and carousel shaping. IEEE Transactions on
+ Games, 10(1):3–14, 2018. 33
+ D. Joyner. The man who found God’s number. The College Mathematics Journal, 45(4):
+ 258–266, 2014. 32
+ H. Kociemba. The two-phase-algorithm, 2015. URL http://kociemba.org/twophase.htm.
+ Details in http://kociemba.org/math/imptwophase.htm, retrieved Sep-01-2022. 32
+ W. Konen. General board game playing for education and research in generic AI game
+ learning. In D. Perez, S. Mostaghim, and S. Lucas, editors, Conference on Games
+ (London), pages 1–8, 2019. URL https://arxiv.org/pdf/1907.06508. 5
+ W. Konen. The GBG class interface tutorial V2.3: General board game playing and
+ learning. Technical report, TH Köln, 2022.
+ URL http://www.gm.fh-koeln.de/ciopwebpub/Konen22a.d/TR-GBG.pdf. 5, 42
+ W. Konen and S. Bagheri. Reinforcement learning for n-player games: The importance
+ of final adaptation. In 9th International Conference on Bioinspired Optimisation Methods
+ and Their Applications (BIOMA), Nov. 2020.
+ URL http://www.gm.fh-koeln.de/ciopwebpub/Konen20b.d/bioma20-TDNTuple.pdf. 5
+ W. Konen and S. Bagheri. Final adaptation reinforcement learning for n-player games.
+ arXiv preprint arXiv:2111.14375, 2021. 24, 40, 41
+ R. E. Korf. Multi-player alpha-beta pruning. Artificial Intelligence, 48(1):99–111, 1991.
+ 32, 33
+ S. M. Lucas. Learning to play Othello with n-tuple systems. Australian Journal of Intelligent
+ Information Processing, 4:1–20, 2008. 16
+ S. McAleer, F. Agostinelli, A. Shmakov, and P. Baldi. Solving the Rubik’s cube without
+ human knowledge. arXiv preprint arXiv:1805.07470, 2018. 1, 4, 21, 32
+ S. McAleer, F. Agostinelli, A. Shmakov, and P. Baldi. Solving the Rubik’s cube with approx-
+ imate policy iteration. In International Conference on Learning Representations, 2019.
+ URL https://openreview.net/pdf?id=Hyfn2jCcKm. 1, 4, 5, 18, 19, 21, 22, 25, 31, 32, 33, 39
+ V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
+ M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep
+ reinforcement learning. Nature, 518(7540):529–533, 2015. 4
+ T. Rokicki, H. Kociemba, M. Davidson, and J. Dethridge. The diameter of the Rubik’s Cube
+ group is twenty. SIAM Review, 56(4):645–670, 2014. 7, 27, 32
+ J. Scheiermann and W. Konen. AlphaZero-inspired game learning: Faster training by using
+ MCTS only at test time. IEEE Transactions on Games, 2022. doi: 10.1109/TG.2022.3206733.
+ URL https://ieeexplore.ieee.org/document/9893320. 5, 26, 28, 33, 41
+ D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrit-
+ twieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go
+ with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. 4
+ D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert,
+ L. Baker, M. Lai, A. Bolton, et al. Mastering the game of Go without human knowledge.
+ Nature, 550(7676):354–359, 2017. 22, 25
+ R. J. Smith, S. Kelly, and M. I. Heywood. Discovering Rubik’s cube subgroups using coevo-
+ lutionary GP: A five twist experiment. In Proceedings of the Genetic and Evolutionary
+ Computation Conference (GECCO), pages 789–796, 2016. 32
+ R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press,
+ Cambridge, MA, 1998. 17
+ M. Thistlethwaite. Thistlethwaite’s 52-move algorithm, 1981.
+ URL https://www.jaapsch.net/puzzles/thistle.htm. Reconstructed by Jaap Scherphuis,
+ retrieved Sep-01-2022. 32
+ Wikipedia. Pocket Cube, 2022a. URL https://en.wikipedia.org/wiki/Pocket_Cube.
+ Retrieved Aug-17-2022. 7
+ Wikipedia. Rubik’s Cube, 2022b. URL https://en.wikipedia.org/wiki/Rubik’s_Cube.
+ Retrieved Aug-17-2022. 7
2227
2229
+ Appendix
2230
+ A
2231
+ Calculating sloc from fcol
2232
+ Given the face colors fc (Eq. (3)) of a transformed cube, how can we calculate the trans-
2233
+ formed sticker locations sℓ (Eq. (4))?
2234
+ This problem seems ill-posed at first sight, because a certain face color, e.g. white,
2235
+ appears multiple times in fc and it is not possible to tell from the appearance of white alone
2236
+ to which sticker location sℓ it corresponds. But with a little more effort, i.e. by looking at the
2237
+ neighbors of the white sticker, we can solve the problem, as we show in the following.
2238
+ A.1
2239
+ 2x2x2 cube
2240
+ All cubies of the 2x2x2 cube are corner cubies. We track for each cubie exactly one sticker.
2241
+ This can be for example the set
2242
+ B = {0, 1, 2, 3, 12, 13, 14, 15}
2243
+ of 8 stickers, which is the same as the set of tracked stickers shown in Fig. 11.
2244
+ For each s ∈ B:
2245
+ 1. Build the cubie that contains s as the first sticker.23
2246
+ 2. Locate the cubie in fc. That is, find a location in fc with the same color as the 1st
2247
+ cubie face. If found, check if the neighbor to the right24 has the color of the 2nd cubie
2248
+ face. If yes, check if its neighbor to the right has the color of the 3rd cubie face. If
2249
+ yes, we have located the cubie in fc and we return it, i.e. its three sticker locations
2250
+ C = [a, b, c].
2251
+ 3. Having located the cubie, we can infer three elements of sℓ:
2252
+ sℓ[s] = C[0]                (26)
+ sℓ[R[s]] = C[1]             (27)
+ sℓ[R[R[s]]] = C[2]          (28)
+ Here R[s] is the right neighbor of sticker s, and R[R[s]] is the left neighbor.
2265
+ In total, we have located 8 × 3 = 24 stickers, i.e. the whole transformation for sℓ.25
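The locate-and-infer procedure of steps 1-3 can be sketched as follows. This is an illustrative stand-in, not the GBG implementation (see CubeState.locate): the neighbor array `R`, the color triples, and the tiny two-cubie geometry in the demo are made up for demonstration and do not reflect the real 2x2x2 layout.

```python
def locate(cubie_colors, fcol, R):
    """Step 2: find the sticker locations [a, b, c] of the corner cubie
    whose face colors are `cubie_colors` in the face-color array fcol.
    R[i] is the clockwise ('right') neighbor of sticker location i."""
    for a in range(len(fcol)):
        b, c = R[a], R[R[a]]
        if (fcol[a], fcol[b], fcol[c]) == tuple(cubie_colors):
            return [a, b, c]
    return None

def infer_sloc(tracked, cubie_of, fcol, R):
    """Step 3: infer the sticker-location vector sloc from fcol by locating
    each tracked sticker's cubie and filling three entries (Eqs. (26)-(28))."""
    sloc = [None] * len(fcol)
    for s in tracked:
        C = locate(cubie_of(s), fcol, R)
        sloc[s] = C[0]
        sloc[R[s]] = C[1]
        sloc[R[R[s]]] = C[2]
    return sloc

# Tiny demo: two 3-sticker "cubies" {0,1,2} and {3,4,5} that swapped places.
R = [1, 2, 0, 4, 5, 3]                       # clockwise neighbors
cubie_of = {0: (0, 1, 2), 3: (3, 4, 5)}.get  # default colors of each tracked cubie
assert infer_sloc([0, 3], cubie_of, [3, 4, 5, 0, 1, 2], R) == [3, 4, 5, 0, 1, 2]
```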
2266
+ 23We know for example from looking at the default cube in Fig. 11 that sticker s = 0 is part of the 0-8-4-cubie.
2267
+ 24By neighbor to the right we mean the next sticker when we march in clockwise orientation around the
2268
+ actual cubie.
2269
+ 25The relevant GBG source code is in CubeState.locate and CubeState2x2.apply_sloc_slow.
2270
2272
+ A.2
2273
+ 3x3x3 cube
2274
+ The 3x3x3 cube has 8 corner cubies and 12 edge cubies. We track for each cubie exactly
2275
+ one sticker. This can be for the corners the set
2276
+ B = {0, 2, 4, 6, 24, 26, 28, 30}
2277
+ and for the edges the set
2278
+ E = {1, 3, 5, 7, 25, 27, 29, 31, 11, 15, 21, 33}.
2279
+ We do for the corner set B the same as we did for the 2x2x2 cube.
2280
+ For each element s ∈ E of the edge set:
2281
+ 1. Build the edge cubie cE that contains s as the first sticker.
2282
+ 2. Locate the cubie in fc. That is, find an edge location in fc with the same color as the
2283
+ 1st cubie face. If found, check if the other sticker of that cubie has the same color as
2284
+ the other sticker of cE. If yes, we have located the edge cubie in fc and we return it,
2285
+ i.e. its two stickers C = [a, b].
2286
+ 3. Having located the cubie, we can infer two elements of sℓ:
2287
+ sℓ[s] = C[0]         (29)
+ sℓ[O[s]] = C[1]      (30)
2295
+ Here O[s] is the other sticker of the edge cubie that has sticker s as first sticker.
2296
+ In total, we have located
2297
+ 8 × 3 + 12 × 2 = 48
2298
+ stickers, i.e. the whole transformation for sℓ.26
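The edge-locating step can be sketched analogously to the corner case. Again this is an illustrative stand-in, not the GBG code (see CubeState3x3.locate_edge): the other-sticker array `O` and the two-edge toy geometry in the demo are made up for demonstration.

```python
def locate_edge(cubie_colors, fcol, O):
    """Find the two sticker locations [a, b] of the edge cubie whose face
    colors are `cubie_colors` in fcol. O[i] is the other sticker location
    of the edge cubie occupying location i."""
    for a in range(len(fcol)):
        if (fcol[a], fcol[O[a]]) == tuple(cubie_colors):
            return [a, O[a]]
    return None

def infer_sloc_edges(tracked, cubie_of, fcol, O):
    """Infer the edge part of sloc via Eqs. (29)-(30)."""
    sloc = [None] * len(fcol)
    for s in tracked:
        C = locate_edge(cubie_of(s), fcol, O)
        sloc[s], sloc[O[s]] = C[0], C[1]
    return sloc

# Tiny demo: two 2-sticker "edge cubies" {0,1} and {2,3} that swapped places.
O = [1, 0, 3, 2]
cubie_of = {0: (0, 1), 2: (2, 3)}.get
assert infer_sloc_edges([0, 2], cubie_of, [2, 3, 0, 1], O) == [2, 3, 0, 1]
```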
2299
+ B
2300
+ N-Tuple Representations for the 3x3x3 Cube
2301
+ In this appendix we describe the n-tuple representations of the cube, analogously to the
2302
+ 2x2x2 cube (Sec. 4), but now for the 3x3x3 cube.
2303
+ B.1
2304
+ CUBESTATE
2305
+ A natural way to translate the cube state into a board is to use the flattened representation
2306
+ of Fig. 4 as the board and extract from it the 48-element vector b, according to the given
2307
+ numbering. The kth element bk represents a certain cubie face location and gets a number
2308
+ from {0, . . . , 5} according to its current face color fc. The solved cube is for example
2309
+ represented by b = [00000000 11111111 . . . 55555555].
2310
+ This representation CUBESTATE is what the BoardVecType CUBESTATE in our GBG-
2311
+ implementation means: Each board vector is a copy of fcol, the face colors of all cubie
2312
+ faces. An upper bound on the number of possible combinations for b is 6^48 ≈ 2.2 · 10^37.
+ This is much larger than the true number of distinct states (Sec. 2.2.2), which is 4.3 · 10^19.
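As a small sanity check (a plain-Python sketch, not GBG code), the CUBESTATE board vector of the solved cube and the upper bound can be written down directly:

```python
# CUBESTATE board vector of the solved 3x3x3 cube: 48 cubie-face locations
# (the 6 fixed centers are excluded), each holding one of 6 face colors.
b = [color for color in range(6) for _ in range(8)]
assert len(b) == 48
assert b[:8] == [0] * 8 and b[40:] == [5] * 8

# Upper bound on the number of board vectors: 6^48, far above the
# roughly 4.3e19 truly distinct cube states.
assert f"{6 ** 48:.1e}" == "2.2e+37"
```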
2314
+ 26The relevant GBG source code is in CubeState.locate, CubeState3x3.locate_edge and CubeState3x3.apply_sloc_slow.
2315
2317
+ Table 10: The correspondence edge location ↔ STICKER2 for the solved cube. The 12
+ tracked edge stickers are at locations 1, 3, 5, 7, 17, 21, 25, 27, 29, 31, 43, 47.
+ location:  1  3  5  7 |  9 11 13 15 | 17 19 21 23 | 25 27 29 31 | 33 35 37 39 | 41 43 45 47
+ edge:      A  B  C  D |  D  G  K  E |  E  J  F  A |  I  J  K  L |  H  B  F  I |  L  G  C  H
+ face ID:   1  1  1  1 |  2  2  2  2 |  1  1  1  1 |  1  1  1  1 |  2  2  2  2 |  1  1  1  1
2374
+ B.2
2375
+ STICKER
2376
+ McAleer et al. (2019) had the interesting idea for the 3x3x3 cube that 20 stickers (cubie
2377
+ faces) are enough. To characterize the 3x3x3 cube, we need according to McAleer et al.
2378
+ (2019) only one (not 2 or 3) sticker for each of the 20 cubies, as shown in Fig. 10. This
2379
+ is because the location of one sticker uniquely defines the location and orientation of that
2380
+ cubie. We name this representation STICKER in GBG.
2381
+ We track the 4 top corner stickers 0,2,4,6 plus the 4 bottom corner stickers 24,26,28,30
2382
+ plus one sticker for each of the 12 edge cubies as shown in Fig. 10, in total 20 stickers, and
2383
+ ignore the 28 other stickers.
2384
+ How to lay out this representation as a board? – McAleer et al. (2019) create a rect-
2385
+ angular one-hot-encoding board with 20 × 24 = 480 cells (20 rows for the stickers and
2386
+ 24 columns for the locations27) carrying only 0’s and 1’s. This is fine for the approach of
2387
+ McAleer et al. (2019), where they use this board as input for a DNN, but not so nice for
2388
+ n-tuples. Without constraints, such a board amounts to 2^480 ≈ 10^145 combinations, which
2389
+ is unpleasantly large (much larger than in CUBESTATE).28
2390
+ Another possibility to lay out the board: Specify 20 board cells (the stickers) with 24
2391
+ position values each. This amounts to 24^20 ≈ 4.0 · 10^27 combinations.
2392
+ B.3
2393
+ STICKER2
2394
+ Analogously to Sec. 4.3, we represent the 24 corner locations and 24 edge locations as:
2395
+ corner location = (corner cubie, face ID),
2396
+ edge location = (edge cubie, face ID).
2397
+ That is, each corner location is represented by a corner cubie a,b,c,d,e,f,g,h and by a face
2398
+ ID 1,2,3. Table 7 shows the explicit numbering in this new representation. Additionally,
2399
+ each edge location is represented by an edge cubie A,B,C,D,E,F,G,H,I,J,K,L29 and by a
2400
+ face ID 1,2. Convention for face ID numbering of edge cubies: For top- and bottom-layer
2401
+ edge cubies, it is 1 for U and D stickers, 2 else. The face ID for middle-layer edge cubies is
2402
+ 1 for F and B stickers, 2 else. Table 10 shows the explicit numbering in this representation.
2403
+ The corresponding board consists of 8 + 8 + 12 +12 = 40 cells shown in Table 11.
2404
+ The 8 cell pairs in the first two rows code the locations of the tracked corner stickers
2405
+ 27 8 · 3 for the corner stickers and 12 · 2 for the edge stickers
+ 28McAleer et al. (2019) do not need a weight for each of the 2^480 possible states, as the n-tuple network
+ would need. Instead they need only 480 · 4096 ≈ 2 · 10^6 weights to the first hidden layer having 4096 neurons.
2408
+ 294 U-stickers, 4 D-sticker, 4 middle-layer stickers (2F, 2B)
2409
2411
+ 0,2,4,6,24,26,28,30, see Table 7 in Sec. 4.3. The 12 cell pairs in the last two rows code the
2412
+ location of the tracked edge stickers 1,3,5,7,17,21,43,47,25,27,29,31, see Table 10. This
2413
+ n-tuple coding requires tuple cells with varying number of position values and leads to
2414
+ 8^8 · 3^8 · 12^12 · 2^12 ≈ 4.0 · 10^27
2415
+ combinations in representation STICKER2.30
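The combination count can be verified with a few lines of arithmetic (a sketch; the cell structure follows Table 11):

```python
# STICKER2: 8 corner cells with 8 positions, 8 face-ID cells with 3 positions,
# 12 edge cells with 12 positions, 12 face-ID cells with 2 positions.
sticker2 = 8**8 * 3**8 * 12**12 * 2**12
assert sticker2 == 24**20             # identical to STICKER's count (footnote 30)
assert f"{sticker2:.1e}" == "4.0e+27"
```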
2416
+ Table 11: STICKER2 board representation for the default 3x3x3 cube. For the BoardVector,
2417
+ cells are numbered row-by-row from 0 to 39.
2418
+ corner:   a  b  c  d  e  f  g  h                (8 positions each)
+ face ID:  1  1  1  1  1  1  1  1                (3 positions each)
+ edge:     A  B  C  D  E  F  G  H  I  J  K  L    (12 positions each)
+ face ID:  1  1  1  1  1  1  1  1  1  1  1  1    (2 positions each)
2466
+ B.4
2467
+ Adjacency Sets
2468
+ To create n-tuples by random walk, we need to define adjacency sets (sets of neighbors)
2469
+ for every board cell k.
2470
+ For CUBESTATE, the board is the flattened representation of the 3x3x3 cube (Fig. 4).
2471
+ The adjacency set is defined as the 4-point neighborhood, where two stickers are neigh-
2472
+ bors if they are neighbors (share a common edge) on the cube.
2473
+ For STICKER2, the board consists of 40 cells shown in Table 11. Since it matters for
2474
+ the corner stickers mostly where the other corner stickers are and for the edge stickers
2475
+ mostly where the other edge stickers are, it is reasonable to form two adjacency subsets
2476
+ S1 = {00, . . . , 15} and S2 = {16, . . . , 39} and to define the adjacency set
2477
+ Adj(k) = Si \ {k}
2478
+ for each k ∈ Si, i = 1, 2.
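A minimal sketch of these adjacency sets in plain Python, assuming the cell numbering of Table 11 (corner part in cells 0-15, edge part in cells 16-39):

```python
def adj(k):
    """Adjacency set for cell k of the 40-cell STICKER2 board: corner-part
    cells S1 = {0,...,15} are mutual neighbors, edge-part cells
    S2 = {16,...,39} are mutual neighbors, i.e. Adj(k) = Si \\ {k}."""
    si = set(range(0, 16)) if k < 16 else set(range(16, 40))
    return si - {k}

assert len(adj(0)) == 15 and 0 not in adj(0)
assert len(adj(39)) == 23 and adj(39) == set(range(16, 39))
```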
C Hyperparameters

In this appendix we list all parameter settings for the GBG agents used in this paper. Parameters were manually tuned with two goals in mind: (a) to reach high-quality results and (b) to reach stable (robust) performance when conducting multiple training runs with different random seeds. The agents listed further down are the best-so-far agents found (best among all agents that learn from scratch by self-play).

The detailed meaning of the RL parameters is explained in Konen and Bagheri (2021):

^30 This is, by the way, identical to (8·3)^8 · (12·2)^12 = 24^(8+12) = 24^20 = 4.0·10^27, the same number we had above in the second mode of STICKER. But STICKER2 has the advantage that the combinations are spread over more board cells (40) than in STICKER (20). By having more board cells with fewer position values, the n-tuples can better represent the relationships between cube states.
• Algorithms 2, 5 and 7 in Konen and Bagheri (2021) explain parameters α (learning rate), γ (discount factor), ϵ (exploration rate) and output sigmoid σ (either identity or tanh).

• Appendix A.3 explains our eligibility method; its parameters are: eligibility trace factor λ, horizon cut ch, eligibility trace type ET (normal) or RESET (reset on random move). If not otherwise stated, we use in this paper λ = 0 (no eligibility traces). For λ = 0, horizon cut ch and eligibility trace type are irrelevant. If λ > 0, their defaults ch = 0.1 and trace type ET apply.

• Appendix A.5 explains our TCL method (also summarized in Sec. 5.2.1). Parameters of TCL are: TC-Init (initialization constant for counters), TC transfer function (TC-id or TC-EXP), β (exponential factor in case of TC-EXP), TC accumulation type (delta or recommended weight-change).
Another branch of our algorithm is the MCTS wrapper, which can be used to wrap TD-N-tuple agents during evaluation and testing. MCTS wrapping is briefly explained in Sec. 5.2.2. The precise algorithm for MCTS wrapping is explained in detail in (Scheiermann and Konen, 2022, Sec. II-B).^31 Parameters of MCTS are:

• cPUCT: relative weight for the prior probabilities of the wrapped agent in relation to the value that the wrapper estimates
• dmax: maximum depth of the MCTS tree; if −1: no maximum depth
• UseSoftMax: boolean, whether to use SoftMax normalization for the priors or not
• UseLastMCTS: boolean, whether to re-use the MCTS from the previous move within an episode or not

Further parameter explanations:

• Sec. 4 in this document explains n-tuples; parameters are: number of n-tuples, length of n-tuples, and n-tuple creation mode (fixed, random walk, random points).
• Sec. 2.5 in this document explains symmetries. If parameter nSym = 0, do not use symmetries. If nSym > 0, use this number nSym of symmetries. In the Rubik's cube case, nSym is a number between 0 and 24.
• LearnFromRM: whether to learn from random moves or not. (Does not apply here, because we always use ϵ = 0 for Rubik's cube, i.e. we have no random moves.)
• ChooseStart-01: whether to start episodes from different 1-ply start states or always from the default start state. (Does not apply here, because in Rubik's cube we never start from the default cube, but always from the p-twisted cube.)

^31 As (Scheiermann and Konen, 2022, Sec. IV-E) shows, the MCTS wrapper may be used during training as well, but due to the large computation times needed for this, we do not follow that route in this paper.
• Etrain: maximum episode length during training; if −1: no maximum length.
• Eeval: maximum episode length during evaluation and play; if −1: no maximum length.

All agents were trained with no MCTS wrapper inside the training loop. The hyperparameters of the agent for each cube variant were found by manual fine-tuning. See also (Konen, 2022).

In the following, we list the precise settings for all agents used in this paper. If not stated otherwise, these common settings apply to all agents: sigmoid σ = id, LearnFromRM = false, ChooseStart-01 = false. Wrapper settings during test and evaluation: MCTS wrapper with cPUCT = 1.0, dmax = 50, UseSoftMax = true, UseLastMCTS = true.

Common parameters of Algorithm 2 in Sec. 5.2 are: cost-to-go c = −0.1 and positive reward Rpos = 1.0.
The parameters for training without symmetries (nSym = 0) in Sec. 6.2 are:

• 2x2x2 cube, HTM: α = 0.25, γ = 1.0, ϵ = 0.0, λ = 0.0, no output sigmoid. N-tuples: 60 7-tuples created by random walk. TCL activated with transfer function TC-id, TC-Init = 10^−4 and rec-weight-change accumulation. 3,000,000 training episodes. pmax = 13, Etrain = 16, Eeval = 50.
  Agent filename in GBG: 2x2x2_STICKER2_AT/TCL4-p13-ET16-3000k-60-7t-stub.agt.zip

• 2x2x2 cube, QTM: same as 2x2x2 cube, HTM, but with pmax = 16, Etrain = 20.
  Agent filename in GBG: 2x2x2_STICKER2_QT/TCL4-p16-ET20-3000k-60-7t-stub.agt.zip

• 3x3x3 cube, HTM: same as 2x2x2 cube, HTM, but with 120 7-tuples created by random walk, pmax = 9, Etrain = 13.
  Agent filename in GBG: 3x3x3_STICKER2_AT/TCL4-p9-ET13-3000k-120-7t-stub.agt.zip

• 3x3x3 cube, QTM: same as 3x3x3 cube, HTM, but with pmax = 13, Etrain = 16.
  Agent filename in GBG: 3x3x3_STICKER2_QT/TCL4-p13-ET16-3000k-120-7t-stub.agt.zip

The agent files given in the list above are just stubs, i.e. agents that are initialized with the correct parameters but not yet trained. This is because a trained agent can require up to 80 MB of disk space, which is too much for GitHub. Instead, a user of GBG may load such a stub agent, train it (takes between 10-40 minutes) and save it to local disk.
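To make the shared settings easier to scan, the same information can be written as plain Python dictionaries (a hypothetical representation of ours; GBG itself stores these settings in the .agt.zip files):

```python
# Common settings from the text, plus the 2x2x2 HTM agent as an example.
COMMON = {
    "sigma": "id", "LearnFromRM": False, "ChooseStart01": False,
    "wrapper": {"cPUCT": 1.0, "dmax": 50, "UseSoftMax": True, "UseLastMCTS": True},
    "cost_to_go": -0.1, "R_pos": 1.0,
}
AGENT_2x2x2_HTM = {
    **COMMON,
    "alpha": 0.25, "gamma": 1.0, "epsilon": 0.0, "lambda": 0.0,
    "n_tuples": (60, 7),          # 60 7-tuples, created by random walk
    "episodes": 3_000_000, "p_max": 13, "E_train": 16, "E_eval": 50,
}
```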
When evaluating in Sec. 6.2 the trained agents with different MCTS wrappers, we test in each case whether cPUCT = 1.0 or 10 is better. In most cases, cPUCT = 1.0 is better, but for (2x2x2, QTM, 800 iterations) and for (3x3x3, HTM, 100 iterations) cPUCT = 10.0 is the better choice.

The parameters for training with symmetries (nSym = 8, 16, 24) in Sec. 6.4 are:

• 3x3x3 cube, QTM: same as 3x3x3 cube, QTM in Sec. 6.2, but with nSym = 8, 16, 24.
  Agent filenames in GBG: 3x3x3_STICKER2_QT/TCL4-p13-ET16-3000k-120-7t-nsym08-stub.agt.zip, 3x3x3_STICKER2_QT/TCL4-p13-ET16-3000k-120-7t-nsym16-stub.agt.zip, 3x3x3_STICKER2_QT/TCL4-p13-ET16-3000k-120-7t-nsym24-stub.agt.zip.

Again, the agent filenames are just stubs, i.e. agents that are initialized with the correct parameters but not yet trained. As above, a user of GBG may load such a stub agent, train it (which takes in the symmetry case between 5.4 h and 13 h, see Table 9) and save it to local disk.

For further details and experiment shell scripts, see also the associated Papers-with-Code repository https://github.com/WolfgangKonen/PapersWithCodeRubiks.
AtFLT4oBgHgl3EQfxTCi/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
C9E1T4oBgHgl3EQfEANP/content/2301.02884v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e053822fc5562983f6046e8449603f132a61d69287e9c08733130c3ed4fbfbc3
+ size 768320
C9E1T4oBgHgl3EQfEANP/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a2a9417f34f680a36eeeeaa7a2272c4ad135d4be535af6a25fd366e1eeb44ea
+ size 3670061
C9E1T4oBgHgl3EQfEANP/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:398e0920b2a9aa05d1cbd4e1f67dbfcef415c5eaf5b8fd3084b9c087c680bec9
+ size 133254
CdE1T4oBgHgl3EQfpwUV/content/2301.03334v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0fd1533cbc6d0fbc1aec933f856027d34efdd78264eea36f8fef0d39c3d2c73
+ size 1007547
CdE1T4oBgHgl3EQfpwUV/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a49ec1959fe535ae52012573b6f4fbbb4adf4e6514b5c571888927b804b11e3
+ size 70607
CdE5T4oBgHgl3EQfTw8s/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff9ffdd25f41ad6531706015646229b79275d326674aa31e702917668be2d2c
+ size 4063277
CdE5T4oBgHgl3EQfTw8s/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a16fd6645cd5472553e86758a0270e79460ae86febc970e431f5cb7bce3b9d12
+ size 162955
DdE4T4oBgHgl3EQf6A7h/content/tmp_files/2301.05329v1.pdf.txt ADDED
@@ -0,0 +1,1942 @@
VANISHING OF QUARTIC AND SEXTIC TWISTS OF L-FUNCTIONS

JENNIFER BERG, NATHAN C. RYAN, AND MATTHEW P. YOUNG

Abstract. Let E be an elliptic curve over Q. We conjecture asymptotic estimates for the number of vanishings of L(E, 1, χ) as χ varies over all primitive Dirichlet characters of orders 4 and 6. Our conjectures about these families come from conjectures about random unitary matrices as predicted by the philosophy of Katz-Sarnak. We support our conjectures with numerical evidence. Earlier work by David, Fearnley and Kisilevsky formulates analogous conjectures for characters of any odd prime order. In the composite order case, however, we need to justify our use of random matrix theory heuristics by analyzing the equidistribution of the squares of normalized Gauss sums. Along the way we introduce the notion of totally order ℓ characters to quantify how quickly quartic and sextic Gauss sums become equidistributed. Surprisingly, the rate of equidistribution in the full family of quartic (sextic, resp.) characters is much slower than in the sub-family of totally quartic (sextic, resp.) characters. A conceptual explanation for this phenomenon is that the full family of order ℓ twisted elliptic curve L-functions, with ℓ even and composite, is a mixed family with both unitary and orthogonal aspects.
arXiv:2301.05329v1 [math.NT] 12 Jan 2023

1. Introduction

The vanishing of elliptic curve L-functions at the value s = 1 (normalized so that the functional equation relates s and 2 − s) is central to a great deal of modern number theory. For instance, if an L-function associated to an elliptic curve vanishes at s = 1, then the BSD conjecture predicts that the curve will have infinitely many rational points.

Additionally, statistical questions about how often L-functions within a family vanish at the central value have also been of broad interest. For example, it is expected (as first conjectured by Chowla [Cho87]) that, for all primitive Dirichlet characters χ, we have L(χ, 1/2) ≠ 0.

A fruitful way of studying such questions has been to model L-functions using random matrices. For example, in [CKRS00] Conrey, Keating, Rubinstein and Snaith consider the family of twisted L-functions L(f, s, χd) associated to a modular form f of weight k and quadratic characters χd. They show that the random matrix theory model predicts that infinitely many values L(f, s, χd) are zero when the weight of f is 2 or 4, but that only finitely many of the values are zero when the weight is at least 6.

Another example, due to David, Fearnley and Kisilevsky [DFK04, DFK07], instead uses the random matrix model to give conjectural asymptotics for the number of vanishings of elliptic curve L-functions twisted by families of Dirichlet characters of a fixed order. In particular, they predict that for an elliptic curve E, the values L(E, 1, χ) are zero infinitely often if χ has order 3 or 5, but for characters χ with a fixed prime order ℓ ≥ 7, only finitely many values L(E, 1, χ) are zero.

In recent work, inspired by the conjectures of [DFK04, DFK07], Mazur and Rubin [MR21] use statistical properties of modular symbols to heuristically estimate the probability that L(E, 1, χ) vanishes. Their Conjecture 11.1 implies that for an elliptic curve E over Q, there should be only finitely many characters χ of a fixed order ℓ such that L(E, 1, χ) = 0 and ϕ(ℓ) > 4. This further implies the following: Let E be an elliptic curve over Q and let F/Q be an infinite abelian extension such that Gal(F/Q) has only finitely many characters of orders 2, 3 and 5. Then E(F) is finitely generated. Finally, for an elliptic curve E defined over Q, their Proposition 3.2 relates the (order of) vanishing of L(E, 1, χ) to the growth in rank of E over a finite abelian extension F/Q. In particular, if BSD holds for E over both Q and F, then

    rank(E(F)) = rank(E(Q)) + ∑_{χ: Gal(F/Q) → C^×} ord_{s=1} L(E, s, χ).
1.1. Notation and statement of the Main Conjecture. We fix the following notation. See Definition 3.1 for the definition of totally order ℓ characters; roughly speaking, these are order ℓ characters that, when factored, have all their factors also of order ℓ. Set

    Ψℓ = {primitive Dirichlet characters χ of order ℓ}
    Ψℓ^tot = {χ ∈ Ψℓ that are totally order ℓ}
    Ψ′ℓ = {χ ∈ Ψℓ with cond(χ) prime}.

Note that Ψ′ℓ ⊆ Ψℓ^tot ⊆ Ψℓ.

Along the way we will need to estimate the number of characters in each family, and so we define:

    Ψℓ(X) = {χ ∈ Ψℓ : cond(χ) ≤ X}
    Ψℓ^tot(X) = {χ ∈ Ψℓ^tot : cond(χ) ≤ X}
    Ψ′ℓ(X) = {χ ∈ Ψ′ℓ : cond(χ) ≤ X}.

For an elliptic curve E over Q we also define:

    FΨℓ,E = {L(E, s, χ) : χ ∈ Ψℓ}
    FΨℓ,E(X) = {L(E, s, χ) ∈ FΨℓ,E : χ ∈ Ψℓ(X)}.

We also define FΨℓ^tot,E and FΨℓ^tot,E(X) analogously for Ψℓ^tot in place of Ψℓ; we do the same with Ψ′ℓ, as well. Finally, let

    VΨℓ,E(X) = {L(E, s, χ) ∈ FΨℓ,E(X) : L(E, 1, χ) = 0}
    VΨℓ^tot,E(X) = {L(E, s, χ) ∈ FΨℓ^tot,E(X) : L(E, 1, χ) = 0}
    VΨ′ℓ,E(X) = {L(E, s, χ) ∈ FΨ′ℓ,E(X) : L(E, 1, χ) = 0}.
With this notation, we make the following conjecture.

Conjecture 1.1. Let E be an elliptic curve. Then there exist constants bE,4 and bE,6 so that

    |VΨ4,E(X)| ∼ bE,4 X^{1/2} log^{5/4} X   and   |VΨ6,E(X)| ∼ bE,6 X^{1/2} log^{9/4} X

as X → ∞. Moreover, if we restrict only to those twists by totally quartic or totally sextic characters, then there exist constants bE,4^tot and bE,6^tot such that

    |VΨ4^tot,E(X)| ∼ bE,4^tot X^{1/2} log^{1/4} X   and   |VΨ6^tot,E(X)| ∼ bE,6^tot X^{1/2} log^{1/4} X

as X → ∞. Finally, if we restrict only to those twists by characters of prime conductor, then there exist constants b′E,4 and b′E,6 such that

    |VΨ′4,E(X)| ∼ b′E,4 X^{1/2} log^{−3/4} X   and   |VΨ′6,E(X)| ∼ b′E,6 X^{1/2} log^{−3/4} X

as X → ∞.
In particular, we conjecture that families of elliptic curve L-functions twisted by quartic and sextic characters vanish infinitely often at the central value.

To assist the reader in comparing the powers of log X in the above asymptotics, we point out here that for ℓ = 4, |Ψ4(X)| is roughly log X times as large as |Ψ4^tot(X)|, which in turn is roughly log X times as large as |Ψ′4(X)|. For ℓ = 6, |Ψ6(X)|/|Ψ6^tot(X)| ≍ (log X)^2, and |Ψ6^tot(X)|/|Ψ′6(X)| ≍ log X. Hence, in each of the three families with a given value of ℓ, the proportion of vanishing twists has the same order of magnitude. See Proposition 3.6, Lemma 3.7, Proposition 3.8, and Lemma 3.9 below for asymptotics of the underlying families of characters.
1.2. Outline of the paper. There are two main ingredients needed to be able to apply random matrix theory predictions to our families of twists. The first is a discretization for the central values. As described in Section 2.1, this can be done for curves E satisfying certain technical conditions as described in [WW20]. We need this discretization in order to approximate the probability that L(E, 1, χ) vanishes.

The second ingredient is a proper identification of the symmetry type of the family, which is largely governed by the distribution of the sign of the functional equation within the family (see Section 4 of [CFK+05]). This directly leads to an investigation around the equidistribution of squares of Gauss sums of quartic and sextic characters, which has connections to the theory of metaplectic automorphic forms [Pat87]. See Section 3.1 for a thorough discussion.

It is a subtle feature that the families of twists of elliptic curve L-functions by the characters in Ψℓ^tot and Ψ′ℓ have unitary symmetry type, but for composite even values of ℓ, the twists by Ψℓ should be viewed as a mixed family. To elaborate on this point, consider the case ℓ = 4, and first note that a character χ ∈ Ψ4 factors uniquely as a totally quartic character times a quadratic character of relatively prime conductors. The totally quartic family has a unitary symmetry, but the family of twists of an elliptic curve by quadratic characters has orthogonal symmetry. This tension between the totally quartic aspect and the quadratic aspect is what leads to the mixed symmetry type. The situation is analogous to the family L(E, 1 + it, χd): if t = 0 and d varies then one has an orthogonal family, while if d is fixed and t varies, then one has a unitary family. See [SY10] for more discussion on this family.
Another interesting feature of these families is that Ψℓ(X) is larger than Ψℓ^tot(X) by a logarithmic factor. For instance, when ℓ = 4, Ψ4^tot(X) grows linearly in X (see Proposition 3.6 below), and of course Ψ2(X) also grows linearly in X. Similarly to how the average size of the divisor function is log X, this indicates that |Ψ4(X)| grows like X log X (see Lemma 3.7 below).

The rest of the paper is organized as follows. In the next section we give the necessary background and notation for L-functions and their central values and discuss the discretization we use in the paper. In the subsequent section we estimate some sums involving quartic and sextic characters and discuss totally quartic and sextic characters in more detail. In the final section, we motivate the asymptotics in Conjecture 1.1 and provide numerical evidence that supports them.

Acknowledgments. We thank David Farmer and Brian Conrey for helpful conversations. This research was done using services provided by the OSG Consortium [PPK+07, SBH+09], which is supported by the National Science Foundation awards #2030508 and #1836650. This material is based upon work supported by the National Science Foundation under agreement No. DMS-2001306 (M.Y.). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2. L-functions and central values

Let E be an elliptic curve defined over Q of conductor NE. The L-function of E is given by the Euler product

    L(E, s) = ∏_{p ∤ NE} (1 − ap p^{−s} + p^{1−2s})^{−1} · ∏_{p | NE} (1 − ap p^{−s})^{−1} = ∑_{n≥1} an n^{−s}.

The modularity theorem [BCDT01, TW95, Wil95] implies that L(E, s) has an analytic continuation to all of C and satisfies the functional equation

    Λ(E, s) = (√NE / 2π)^s Γ(s) L(E, s) = wE Λ(E, 2 − s),

where the sign of the functional equation wE = ±1 is the eigenvalue of the Fricke involution. Let χ be a primitive character, let cond(χ) be its conductor, and suppose that cond(χ) is coprime to the conductor NE of the curve. The twisted L-function has Dirichlet series

    L(E, s, χ) = ∑_{n≥1} an χ(n) n^{−s}

and the functional equation (cf. [IK04, Prop. 14.20])

    Λ(E, s, χ) = (cond(χ)√NE / 2π)^s Γ(s) L(E, s, χ) = wE χ(NE) (τ(χ)²/cond(χ)) Λ(E, 2 − s, χ̄),   (2.1)

where τ(χ) = ∑_{r ∈ Z/mZ} χ(r) e^{2πir/m} is the Gauss sum and m = cond(χ).
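As a concrete numerical illustration (ours, not from the paper): for the quartic character mod 5 determined by χ(2) = i, one can compute τ(χ) directly and check the classical identity |τ(χ)|² = cond(χ) for primitive χ:

```python
import cmath

# Quartic character mod 5: (Z/5Z)^* is generated by 2, and we set
# chi(2^k mod 5) = i^k, giving a character of exact order 4.
chi = {1: 1, 2: 1j, 4: -1, 3: -1j}  # 2^0=1, 2^1=2, 2^2=4, 2^3=3 (mod 5)
m = 5
tau = sum(chi[r] * cmath.exp(2j * cmath.pi * r / m) for r in range(1, m))
assert abs(abs(tau) ** 2 - m) < 1e-9   # |tau(chi)|^2 = m
sign = tau ** 2 / m                    # the quantity whose argument matters
assert abs(abs(sign) - 1) < 1e-9       # lies on the unit circle
```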
2.1. Discretization. To justify our Conjecture 1.1, we need a condition that allows us to deduce that L(E, 1, χ) = 0, for a given E and χ of order ℓ. In particular, we show that L(E, 1, χ) is discretized (see Lemma 4.2), so that there exists a constant cE,ℓ such that |L(E, 1, χ)| < cE,ℓ/√cond(χ) implies L(E, 1, χ) = 0. In this section we prove the results necessary for the discretization.

Let E be an elliptic curve over Q with conductor NE. Let χ be a nontrivial primitive Dirichlet character of conductor m and order ℓ. Set ϵ = χ(−1) ∈ {±1}, according to whether χ is an even or odd character. Let Ω+(E) and Ω−(E) denote the real and imaginary periods of E, respectively, with Ω+(E) > 0 and Ω−(E) ∈ iR>0. The algebraic L-value is defined by

    Lalg(E, 1, χ) := L(E, 1, χ) · m/(τ(χ) Ωϵ(E)) = ϵ · L(E, 1, χ) τ(χ̄)/Ωϵ(E).   (2.2)

While it has been known for some time that algebraic L-values are algebraic numbers, recent work of Wiersema and Wuthrich [WW20] characterizes conditions on E and χ which guarantee integrality. In particular, under the assumption that the Manin constant c0(E) = 1, if the conductor m is not divisible by any prime of additive reduction for E, then Lalg(E, 1, χ) ∈ Z[ζℓ] is an algebraic integer [WW20, Theorem 2]. For a given curve E, we will avoid the finitely many characters χ for which Lalg(E, 1, χ) fails to be integral.
Proposition 2.1. Let χ be a primitive Dirichlet character of odd order ℓ and conductor m. Then

    Lalg(E, 1, χ) = χ(NE)^{(ℓ+1)/2} nE(χ),                          if wE = 1,
    Lalg(E, 1, χ) = (ζℓ − ζℓ^{−1})^{−1} χ(NE)^{(ℓ+1)/2} nE(χ),     if wE = −1,

for some algebraic integer nE(χ) ∈ Z[ζℓ + ζℓ^{−1}] = Z[ζℓ] ∩ R.

Proposition 2.2. Let χ be a primitive Dirichlet character of even order ℓ and conductor m. Then Lalg(E, 1, χ) = kE nE(χ), where nE(χ) is some algebraic integer in Z[ζℓ + ζℓ^{−1}] = Z[ζℓ] ∩ R and kE is a constant depending only on the curve E. In particular, when wE = 1 we have

    kE = 1 + χ(NE),        if χ(NE) ≠ −1,
    kE = ζℓ^{ℓ/4},         if 4 | ℓ and χ(NE) = −1,
    kE = ζℓ − ζℓ^{−1},     if 4 ∤ ℓ and χ(NE) = −1.
Proof of Prop 2.1 and Prop 2.2. Since E is defined over Q, we have L(E, 1, χ̄) equal to the complex conjugate of L(E, 1, χ). Using the functional equation, we obtain

    Lalg(E, 1, χ) = ϵ · L(E, 1, χ) τ(χ̄)/Ωϵ(E)
                  = ϵ · wE χ(NE) · τ(χ̄) τ(χ)²/(m · Ωϵ(E)) · L(E, 1, χ̄)
                  = wE χ(NE) · τ(χ) L(E, 1, χ̄)/Ωϵ(E)
                  = wE χ(NE) · Lalg(E, 1, χ)‾.

Thus Lalg(E, 1, χ) is a solution to the equation z = wE χ(NE) z̄. Note that if z1, z2 ∈ Z[ζℓ] are two nonzero solutions of this equation, then z1/z̄1 = z2/z̄2, so that z1/z2 = z̄1/z̄2 = (z1/z2)‾, hence z1/z2 ∈ R. Thus Lalg(E, 1, χ) = α z with α ∈ Z[ζℓ] ∩ R = Z[ζℓ + ζℓ^{−1}] and z ∈ Z[ζℓ].

Suppose that wE = 1. When ℓ is odd, we can take z = χ(NE)^{(ℓ+1)/2}. Now suppose that ℓ is even. If χ(NE) ≠ −1, since χ(NE) = ζℓ^r for some 1 ≤ r ≤ ℓ, we may take z = 1 + χ(NE). Indeed, we have wE χ(NE) z̄ = ζℓ^r (1 + ζℓ^{−r}) = ζℓ^r + 1 = z. If 4 | ℓ and χ(NE) = −1 = ζℓ^{ℓ/2}, we take z = ζℓ^{ℓ/4}. Finally, if 4 ∤ ℓ and χ(NE) = −1, take z = ζℓ − ζℓ^{−1} = 2i Im(ζℓ).

When wE = −1 and ℓ is odd, we may take z = (ζℓ − ζℓ^{−1})^{−1} χ(NE)^{(ℓ+1)/2}. When ℓ is even, if χ(NE) = −1 then we may take z = ζℓ + ζℓ^{−1} = 2 Re(ζℓ), and if χ(NE) ≠ −1 then we may take z = 1 − χ(NE). □

Remark 2.3. We note that for ℓ even, |kE| ≤ 2. It is clear that |ζℓ^{ℓ/4}| = 1 and |2i Im(ζℓ)| ≤ 2. Observe that |1 + χ(NE)| ≤ 2, by the triangle inequality.
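The case analysis in the proof can be spot-checked numerically (our own check, with ℓ = 6 and χ(NE) = ζ6 as sample values):

```python
import cmath

def solves(z: complex, w: int, chiN: complex, tol: float = 1e-12) -> bool:
    """Check the defining relation z = w * chi(N_E) * conj(z)."""
    return abs(z - w * chiN * z.conjugate()) < tol

zeta = cmath.exp(2j * cmath.pi / 6)      # zeta_6
assert solves(1 + zeta, +1, zeta)        # w_E = +1, chi(N_E) != -1
assert solves(zeta - 1 / zeta, +1, -1)   # w_E = +1, chi(N_E) = -1, 4 does not divide 6
assert solves(zeta + 1 / zeta, -1, -1)   # w_E = -1, chi(N_E) = -1
assert solves(1 - zeta, -1, zeta)        # w_E = -1, chi(N_E) != -1
```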
Note that since L(E, 1, χ) vanishes if and only if nE(χ) does, we may interpret the integers nE(χ) as a discretization of the special values L(E, 1, χ). This is similar to the case of cubic characters considered in [DFK04], since Q(ζ3)+ = Q, as opposed to characters of prime order ℓ ≥ 5, where further steps were needed to find an appropriate discretization [DFK07].

3. Estimates for Dirichlet characters

In this section we discuss various aspects of Dirichlet characters of order 4 and 6. A necessary condition for a family of L-functions to be modeled by the family of unitary matrices is that the signs must be uniformly distributed on the unit circle. From (2.1), L(E, s, χ) has sign wE χ(NE) τ(χ)²/cond(χ); we will largely focus on the distribution of the square of the Gauss sums, viewing the extra factor χ(NE) as a minor perturbation. To obtain our estimates for the number of vanishings |VΨℓ,E(X)| (respectively, |VΨ′ℓ,E(X)| and |VΨℓ^tot,E(X)|), we must estimate the size of Ψℓ(X) (respectively, Ψ′ℓ(X) and Ψℓ^tot(X)) as well as the size of an associated sum. We also discuss the family of totally quartic and sextic characters to explain some phenomena we observed in our computations.
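The quantity driving the symmetry-type question, arg(τ(χ)²/cond(χ)), is easy to sample numerically. The sketch below (our own illustration, not the paper's computation) builds the quartic character mod p with χ(g) = i from a primitive root g, for a few primes p ≡ 1 (mod 4):

```python
import cmath
import math

def primitive_root(p: int) -> int:
    """Smallest primitive root mod a prime p (brute force, fine for small p)."""
    for g in range(2, p):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * g % p
            seen.add(x)
        if len(seen) == p - 1:
            return g
    raise ValueError("no primitive root found")

def quartic_sign_angle(p: int) -> float:
    """arg(tau(chi)^2 / p) for the quartic character mod p with chi(g) = i."""
    g = primitive_root(p)
    dlog, x = {}, 1
    for k in range(p - 1):       # discrete logs: dlog[g^k mod p] = k
        dlog[x] = k
        x = x * g % p
    I = (1, 1j, -1, -1j)         # the powers of i
    tau = sum(I[dlog[r] % 4] * cmath.exp(2j * math.pi * r / p)
              for r in range(1, p))
    assert abs(abs(tau) ** 2 - p) < 1e-6  # |tau(chi)|^2 = p for primitive chi
    return cmath.phase(tau ** 2 / p)

angles = [quartic_sign_angle(p) for p in (5, 13, 17, 29, 37, 41)]
```

Histogramming such angles over many conductors is, in spirit, how plots like Figure 1 below are produced.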
425
+ 3.1. Distributions of Gauss sums. Patterson [Pat87], building on
426
+ work of Heath-Brown and Patterson [HBP79] on the cubic case, showed
427
+ that the normalized Gauss sum τ(χ)/
428
+
429
+ cond(χ) is uniformly distributed
430
+ on the circle for χ varying in each of Ψtot
431
+
432
+ and Ψ′
433
+ ℓ. This result was
434
+ first announced in [PHH81]; see [BE81] for an excellent summary of
435
+ this and other work related to the distributions of Gauss sums. Patter-
436
+ son’s method moreover shows that the argument of τ(χ)χ(k) is equidis-
437
+ tributed for any fixed nonzero integer k, and hence so is the argument
438
+ of τ(χ)2χ(k).
439
+ For the case of quartic and sextic characters with arbitrary conductors,
+ there do not appear to be any results in the literature that imply their
+ Gauss sums are uniformly distributed. In Figure 1 we see the
+ distributions of Gauss sums of characters of orders 3 through 9 of arbitrary
+ conductor up to 200000. We included characters of orders 4 and 6 since
+ those examples are the focus of the paper; we included characters of
+ orders 3, 5, and 7 as consistency checks (in [DFK04, DFK07] the authors
+ rely on them being uniformly distributed); and we included composite
+ orders 8 and 9 to see if something similar happens in those cases as
+ happens in the quartic case. In all cases but the quartic case, we see
+ that the angles of the signs appear to be uniformly
450
+
451
+ VANISHING OF QUARTIC AND SEXTIC TWISTS OF L-FUNCTIONS
452
453
+ Figure 1. Each histogram represents the distribution
+ of the argument of τ(χ)^2/cond(χ) for characters of
+ orders 3 through 9, from top left to bottom right. Each
+ histogram is made by calculating the Gauss sums of
+ characters in Ψ_ℓ of each conductor up to 200000.
458
+ distributed. The quartic distribution has two obvious peaks that we
459
+ discuss below, in Remark 3.17.
460
+ The images in Figure 1 suggest that the family of matrices that best
461
+ models the vanishing of L(E, 1, χ) is unitary in every case except possi-
462
+ bly the case of quartic characters. Nevertheless, in Section 3.4 we show
463
+ that the squares of the quartic Gauss sums are indeed equidistributed,
464
+ despite what the data suggest. Indeed, we prove that the squares of
465
+ the sextic and quartic Gauss sums are equidistributed, allowing us to
466
+ apply the heuristics from random matrix theory as in Section 4.
467
+ 3.2. Totally quartic and sextic characters. Much of the back-
468
+ ground material in this section can be found with proofs in [IR90,
469
+ Ch. 9].
470
+ Definition 3.1. Let χ be a primitive Dirichlet character of conductor
+ q and order ℓ. For a prime p, let v_p be the p-adic valuation, so that
+ q = ∏_p p^{v_p(q)}. We correspondingly factor χ = ∏_p χ^{(p)}, where χ^{(p)} has
+ conductor p^{v_p(q)}. We say that χ is totally order ℓ if each χ^{(p)} has exact
476
+
+ JENNIFER BERG, NATHAN C. RYAN, AND MATTHEW P. YOUNG
537
+ order ℓ.
538
+ By convention we also consider the trivial character to be
539
+ totally order ℓ for every ℓ.
540
+ 3.2.1. Quartic characters. The construction of quartic characters uses
+ the arithmetic in Z[i]. The ring Z[i] has class number 1, unit group
+ {±1, ±i}, and discriminant −4. We say α ∈ Z[i] with (α, 2) = 1 is
+ primary if α ≡ 1 (mod (1+i)^3). Any odd element in Z[i] has a unique
+ primary associate, which comes from the fact that the unit group in
+ the ring Z[i]/(1+i)^3 may be identified with {±1, ±i}. An odd prime p
+ splits as p = ππ̄ if and only if p ≡ 1 (mod 4). Given π with N(π) = p,
+ define the quartic residue symbol [α/π] for α ∈ Z[i] with (α, π) = 1
+ by [α/π] ∈ {±1, ±i} and [α/π] ≡ α^{(p−1)/4} (mod π). The map χ_π(α) = [α/π]
+ from (Z[i]/(π))^× to {±1, ±i} is a character of order 4.
557
+ If α ∈ Z, then [α/π]^2 ≡ α^{(p−1)/2} ≡ (α/p) (mod π). Therefore, χ_π^2(α) = (α/p),
+ showing in particular that the restriction of the quartic residue symbol to Z
+ defines a primitive quartic Dirichlet character of conductor p.
568
+ Lemma 3.2. Every primitive totally quartic character of odd conductor
+ is of the form χ_β, where β = π_1 · · · π_k is a product of distinct primary
+ primes, (β, 2β̄) = 1, and where
+ (3.1)
+ χ_β(α) = [α/β] = ∏_{i=1}^{k} [α/π_i].
+ The totally quartic primitive characters of even conductor are of the
+ form χ_2 χ_β, where χ_2 is one of four quartic characters of conductor 2^4,
+ and χ_β is totally quartic of odd conductor.
587
+ Proof. We begin by classifying the quartic characters of odd prime-
+ power conductor. If p ≡ 3 (mod 4), there is no quartic character of
+ conductor p^a, since φ(p^a) = p^{a−1}(p − 1) ̸≡ 0 (mod 4). Since φ(p) =
+ p − 1, if p ≡ 1 (mod 4), there are two distinct quartic characters of
+ conductor p, namely χ_π and χ_π̄, where p = ππ̄. There are no primitive
+ quartic characters modulo p^j for j ≥ 2. To see this, suppose χ is
+ a character of conductor p^j, and note that χ(1 + p^{j−1}) ̸= 1, while
+ χ(1 + p^{j−1})^p = χ(1 + p^j) = 1, so χ(1 + p^{j−1}) is a nontrivial pth root of
+ unity. Since p is odd, χ(1 + p^{j−1}) is not a 4th root of unity, so χ cannot
+ be quartic and primitive.
598
+ By the above classification, a primitive totally quartic character χ of
+ odd conductor must factor over distinct primes p_i ≡ 1 (mod 4), and
+ the p-part of χ must be χ_π or χ_π̄, where ππ̄ = p. We may assume that
+ π and π̄ are primary primes. Hence χ factors as ∏_i χ_{π_i}. The property
+ that β := π_1 · · · π_k is squarefree is equivalent to the condition that the
+ π_i are distinct. Moreover, the property (β, β̄) = 1 is equivalent to the
+ condition that π_i π̄_i = p_i ≡ 1 (mod 4) for all i. Hence, every totally quartic
+ character of odd conductor arises uniquely in the form (3.1).
610
+ Next we treat p = 2. There are four primitive quartic characters of
+ conductor 2^4, since (Z/(2^4))^× ≃ Z/(2) × Z/(4). We claim there are no
+ primitive quartic characters of conductor 2^j with j ̸= 4. For j ≤ 3 or
+ j = 5 this is a simple finite computation. For j ≥ 6, one can show this
+ as follows. First, χ(1 + 2^{j−1}) = −1, since χ(1 + 2^{j−1})^2 = χ(1 + 2^j) = 1,
+ and primitivity shows χ(1 + 2^{j−1}) ̸= 1. By a similar idea, χ(1 + 2^{j−2})^2 =
+ χ(1 + 2^{j−1}) = −1, so χ(1 + 2^{j−2}) = ±i. We finish the claim by noting
+ χ(1 + 2^{j−3})^2 = χ(1 + 2^{j−2}) = ±i, so χ(1 + 2^{j−3}) is a square root of
+ ±i, and hence χ is not quartic. With the claim established, we easily
+ obtain the final sentence of the lemma.
620
+
621
+ Example 3.3. The first totally quartic primitive character of compos-
622
+ ite conductor has conductor 65. While there are 8 quartic primitive
623
+ characters of conductor 65, the LMFDB labels of the totally quartic
624
+ ones are 65.18, 65.47, 65.8, and 65.57.
625
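The counts in this example are easy to verify by brute force. A character mod 65 = 5 · 13 corresponds, by the Chinese remainder theorem, to a pair of characters mod 5 and mod 13; it is primitive of conductor 65 exactly when both local components are nontrivial, and it is totally quartic when both local components have exact order 4. A minimal sketch (plain Python; the helper names are our own):

```python
from math import gcd, lcm

def local_char_orders(p):
    # (Z/p)^x is cyclic of order p - 1; its characters are indexed by
    # a = 0, ..., p-2, and the character indexed by a has order (p-1)/gcd(p-1, a).
    return [(p - 1) // gcd(p - 1, a) for a in range(p - 1)]

def quartic_counts(p1, p2):
    # count (primitive quartic, totally quartic) characters of conductor p1*p2
    quartic = totally = 0
    for a in local_char_orders(p1):
        for b in local_char_orders(p2):
            if a == 1 or b == 1:
                continue  # a trivial component drops the conductor below p1*p2
            if lcm(a, b) == 4:
                quartic += 1
                if a == 4 and b == 4:
                    totally += 1
    return quartic, totally

print(quartic_counts(5, 13))  # (8, 4): 8 primitive quartic, 4 totally quartic
```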
+ 3.2.2. Sextic characters. The construction of sextic characters uses the
+ arithmetic in the Eisenstein integers Z[ω], where ω = e^{2πi/3}. The ring
+ Z[ω] has class number 1, unit group {±1, ±ω, ±ω^2}, and discriminant
+ −3. We say α ∈ Z[ω] with (α, 3) = 1 is primary¹ if α ≡ 1 (mod 3).
+ Warning: our usage of primary is consistent with [HBP79], but conflicts
+ with the definition of [IR90]. However, it is easy to translate, since α is
+ primary in our sense if and only if −α is primary in the sense of [IR90].
+ Any element in Z[ω] coprime to 3 has a unique primary associate, which
+ comes from the fact that the unit group in the ring Z[ω]/(3) may be
+ identified with {±1, ±ω, ±ω^2}. An unramified prime p ∈ Z splits as
+ p = ππ̄ if and only if p ≡ 1 (mod 3). Given π with N(π) = p, define
+ the cubic residue symbol (α/π)_3 for α ∈ Z[ω] by (α/π)_3 ∈ {1, ω, ω^2} and
+ (α/π)_3 ≡ α^{(p−1)/3} (mod π). The map χ_π(α) = (α/π)_3 from (Z[ω]/(π))^× to
+ {1, ω, ω^2} is a character of order 3. The restriction of χ_π to Z induces a
+ primitive cubic Dirichlet character of conductor p. Note that χ_π = χ_{−π}.
647
+ Motivated by the fact that a sextic character factors as a cubic times
648
+ a quadratic, we next discuss the classification of cubic characters.
649
+ ¹We remark that the usage of primary is context-dependent, and that since we
+ do not mix quartic and sextic characters, we hope there will not be any ambiguity.
651
+
652
+ 12
653
+ JENNIFER BERG, NATHAN C. RYAN, AND MATTHEW P. YOUNG
654
+ Lemma 3.4. Every primitive cubic Dirichlet character of conductor
+ coprime to 3 is of the form χ_β, where β = π_1 · · · π_k is a product of
+ distinct primary primes, (β, 3β̄) = 1, and where
+ (3.2)
+ χ_β(α) = (α/β)_3 = ∏_{i=1}^{k} (α/π_i)_3.
+ The cubic primitive characters of conductor divisible by 3 are of the
+ form χ_3 χ_β, where χ_3 is one of two cubic characters of conductor 3^2,
+ and χ_β is cubic of conductor coprime to 3.
673
+ Proof. The classification of such characters with conductor coprime to
+ 3 is given by [BY10, Lemma 2.1], so it only remains to treat cubic
+ characters of conductor 3^j. The primitive character of conductor 3 is
+ not cubic. Next, the group (Z/(9))^× is cyclic of order 6, generated by
+ 2. There are two cubic characters, determined by χ(2) = ω^{±1}. Next
+ we argue that there is no primitive cubic character of conductor 3^j
+ with j ≥ 3. For this, we first observe that χ(1 + 3^{j−1}) = ω^{±1}, since
+ primitivity implies χ(1 + 3^{j−1}) ̸= 1, and χ(1 + 3^{j−1})^3 = χ(1 + 3^j) = 1.
+ Next we have χ(1 + 3^{j−2})^3 = χ(1 + 3^{j−1}) = ω^{±1}, so χ(1 + 3^{j−2}) is a
+ cube root of ω^{±1}. Therefore, χ cannot be cubic.
683
+
684
+ 3.3. Counting characters. To start, we count all the quartic and sex-
685
+ tic characters of conductor up to some bound and in each family. Such
686
+ counts were found for arbitrary order in [FMS10] by Finch, Martin and
687
+ Sebah, but since we are interested in only quartic and sextic charac-
688
+ ters, in which case the proofs simplify, we prove the results we need.
689
+ Moreover, we need other variants for which we cannot simply quote
690
+ [FMS10], so we will develop a bit of machinery that will be helpful for
691
+ these other questions as well.
692
+ We begin with a lemma based on the Perron formula.
693
+ Lemma 3.5. Suppose that a(n) is a multiplicative function such that
+ |a(n)| ≤ d_k(n), the k-fold divisor function, for some k ≥ 0. Let Z(s) =
+ ∑_{n≥1} a(n) n^{−s}, for Re(s) > 1. Suppose that for some integer j ≥ 0,
+ (s−1)^j Z(s) has an analytic continuation to a region of the form
+ {σ + it : σ > 1 − c/log(2+|t|)}, for some c > 0. In addition, suppose that Z(s) is
+ bounded polynomially in log(2 + |t|) in this region. Then
+ (3.3)
+ ∑_{n≤X} a(n) = X P_{j−1}(log X) + O(X (log X)^{−100}),
+ for P_{j−1} some polynomial of degree ≤ j − 1 (interpreted as 0 if j = 0).
707
+
+ The basic idea is standard, yet we were unable to find a suitable refer-
711
+ ence.
712
+ Proof sketch. One begins by a use of the quantitative Perron formula,
+ for which a convenient reference is [MV07, Thm. 5.2]. This implies
+ (3.4)
+ ∑_{n≤X} a(n) = (1/2πi) ∫_{σ_0−iT}^{σ_0+iT} Z(s) X^s ds/s + R,
+ where R is a remainder term, and we take σ_0 = 1 + c/log X. Using [MV07,
+ Cor. 5.3] and standard bounds on mean values of d_k(n), one can show
+ R ≪ (X/T) Poly(log X). Next one shifts the contour of integration to the
+ line 1 − (c/2)/log T. The pole (if it exists) of Z(s) leads to a main term of the
+ form X P_{j−1}(log X), as desired. The integral over the new line is bounded
+ by
+ (3.5)
+ Poly(log T) X^{1−(c/2)/log T}.
+ Choosing log T = (log X)^{1/2} gives an acceptable error term.
738
+
739
+ 3.3.1. Quartic characters. Let Ψ_4^{tot,odd}(X) ⊆ Ψ_4^{tot}(X) denote the
+ subset of characters with odd conductor.
744
+ Proposition 3.6. For some constants K_4^{tot}, K_4^{tot,odd} > 0, we have
+ (3.6)
+ |Ψ_4^{tot}(X)| ∼ K_4^{tot} X,  and  |Ψ_4^{tot,odd}(X)| ∼ K_4^{tot,odd} X.
+ Moreover,
+ (3.7)
+ |Ψ′_4(X)| ∼ X/log X.
764
+ Proof. By Lemma 3.2,
+ (3.8)
+ |Ψ_4^{tot,odd}(X)| = ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree, N(β)≤X} 1,
+ and
+ (3.9)
+ |Ψ_4^{tot}(X)| = |Ψ_4^{tot,odd}(X)| + 4 |Ψ_4^{tot,odd}(2^{−4}X)|.
783
+
784
+ 14
785
+ JENNIFER BERG, NATHAN C. RYAN, AND MATTHEW P. YOUNG
786
+ To show (3.6), it suffices to prove the asymptotic formula for |Ψ_4^{tot,odd}(X)|.
+ In view of Lemma 3.5, it will suffice to understand the Dirichlet series
+ (3.10)
+ Z_4(s) = ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree} 1/N(β)^s
+        = ∏_{π≠π̄, (π,2)=1} (1 + N(π)^{−s}) = ∏_{p≡1 (mod 4)} (1 + p^{−s})^2.
805
+ Let χ_4 be the primitive character modulo 4, so that ζ(s) L(s, χ_4) =
+ ζ_{Q[i]}(s). Then
+ (3.11)
+ Z_4(s) = ζ_{Q[i]}(s) ∏_p (1 − p^{−s})(1 − χ_4(p) p^{−s}) ∏_{p≡1 (mod 4)} (1 + p^{−s})^2,
+ which can be simplified as
+ (3.12)
+ Z_4(s) = ζ_{Q[i]}(s) ζ^{−1}(2s) (1 + 2^{−s})^{−1} ∏_{p≡1 (mod 4)} (1 − p^{−2s}).
+ Therefore, Z_4(s) has a simple pole at s = 1, and its residue is a positive
+ constant. Moreover, the standard analytic properties of ζ_{Q[i]}(s) let us
+ apply Lemma 3.5, giving the result.
823
+ The asymptotic on Ψ′_4(X) follows from the prime number theorem in
+ arithmetic progressions, since there are two quartic characters of prime
+ conductor p ≡ 1 (mod 4), and none with p ≡ 3 (mod 4).
827
+
828
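The sum (3.8) can be evaluated directly for small X: each odd conductor q = p_1 · · · p_k (distinct primes p_i ≡ 1 (mod 4), squarefree) supports exactly 2^k totally quartic characters, one choice of χ_{π_i} or χ_{π̄_i} at each prime. A minimal counting sketch exhibiting the linear growth of Proposition 3.6 (the function name is our own):

```python
def count_tot_quartic_odd(X):
    # |Psi_4^{tot,odd}(X)|: sum of 2^k over odd squarefree q <= X whose
    # prime factors are all congruent to 1 mod 4, where k = omega(q).
    def factor(n):
        f, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                f[d] = f.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            f[n] = f.get(n, 0) + 1
        return f

    total = 0
    for q in range(3, X + 1, 2):
        f = factor(q)
        if all(e == 1 and p % 4 == 1 for p, e in f.items()):
            total += 2 ** len(f)
    return total

print(count_tot_quartic_odd(100))   # 30: 11 primes contribute 2 each, 65 and 85 contribute 4 each
```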
+ Lemma 3.7. We have
+ (3.13)
+ |Ψ_4(X)| = K_4 X log X + O(X),
+ for some K_4 > 0.
832
+ Proof. Every primitive quartic character factors uniquely as χ_4 χ_2 with
+ χ_4 totally quartic of conductor q_4 > 1 and χ_2 quadratic of conductor
+ q_2, with (q_4, q_2) = 1. It is convenient to drop the condition q_4 > 1,
+ thereby including the quadratic characters; this is allowable since the
+ number of quadratic characters is O(X), which is acceptable for the
+ claimed error term.
+ The Dirichlet series for |Ψ_4(X)|, modified to include the quadratic
+ characters, is
+ (3.14)
+ Z_4^{all}(s) = ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree} 1/N(β)^s ∑_{q_2 ∈ Z_{≥1}, (q_2, 2N(β))=1} 1/q_2^s.
856
+
+ A calculation with Euler products shows Z_4^{all}(s) = ζ_{Q[i]}(s) ζ(s) A(s),
+ where A(s) is given by an absolutely convergent Euler product for
+ Re(s) > 1/2. Since Z_4^{all}(s) has a double pole at s = 1, this shows the
+ claim, using Lemma 3.5.
865
+
866
+ 3.3.2. Sextic characters. Next we turn to the sextic case. The proof of
+ the following proposition is similar to the proof of Proposition 3.6 and
+ so we omit it here.
+ Proposition 3.8. For some K_6^{tot} > 0, we have
+ (3.15)
+ |Ψ_6^{tot}(X)| ∼ K_6^{tot} X,  and  |Ψ′_6(X)| ∼ X/log X.
881
+ A primitive totally sextic character factors uniquely as a primitive cubic
882
+ character (with odd conductor, since 2 ̸≡ 1 (mod 3)), times the Jacobi
883
+ symbol of the same modulus as the cubic character.
884
+ In general, a
885
+ primitive sextic character factors uniquely as χ6χ3χ2 of modulus q6q3q2,
886
+ pairwise coprime, with χ6 totally sextic of conductor q6, χ3 cubic of
887
+ conductor q3, and χ2 quadratic of conductor q2.
888
+ Lemma 3.9. We have |Ψ_6(X)| = K_6 X (log X)^2 + O(X log X), for some
+ K_6 > 0.
890
+ Proof. Write χ = χ_6 χ_3 χ_2 as above. Note that membership in Ψ_6(X)
+ requires q_6 > 1, which is an unpleasant condition when working with
+ Euler products. However, the number of χ = χ_3 χ_2, i.e., with χ_6 = 1,
+ is O(X log X), so we may drop the condition q_6 > 1 when estimating
+ |Ψ_6(X)|.
+ For simplicity, we count the characters with q_2 odd and (q_6 q_3, 3) =
+ 1; the general case follows similar lines. The Dirichlet series for this
+ counting function is
+ Z_6^{all}(s) = ∑_{0≠(β_6)⊆Z[ω], (β_6,3β̄_6)=1, β_6 squarefree} 1/N(β_6)^s
+             ∑_{0≠(β_3)⊆Z[ω], (β_3,3β̄_3)=1, β_3 squarefree, (N(β_3),N(β_6))=1} 1/N(β_3)^s
+             ∑_{q_2 ∈ Z_{≥1}, (q_2, 2N(β_3 β_6))=1} 1/q_2^s.
+ A calculation with Euler products shows Z_6^{all}(s) = ζ_{Q[ω]}(s)^2 ζ(s) A(s),
+ where A(s) is given by an absolutely convergent Euler product for
+ Re(s) > 1/2. Since Z_6^{all}(s) has a triple pole at s = 1, this shows the
+ claim, using Lemma 3.5.
926
+
927
+
+ 3.4. Equidistribution of Gauss sums. We first focus on the quartic
931
+ case, and then turn to the sextic case.
932
+ 3.4.1. Quartic characters. The following standard formula can be found
933
+ as [IK04, (3.16)].
934
+ Lemma 3.10. Suppose that χ = χ_1 χ_2 has conductor q = q_1 q_2, with
+ (q_1, q_2) = 1, and χ_i of conductor q_i. Then
+ (3.16)
+ τ(χ_1 χ_2) = χ_2(q_1) χ_1(q_2) τ(χ_1) τ(χ_2).
938
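The twisted multiplicativity formula is easy to check numerically. A minimal sketch (characters built from discrete logarithms; the moduli 5 and 13 and the generator 2 are our choices of example):

```python
import cmath, math

def char_mod(p, g, order):
    # chi(g^j) = e(j/order): a character mod p of the given order
    # (assumes p prime, g a primitive root mod p, and order | p - 1)
    dlog, x = {}, 1
    for j in range(p - 1):
        dlog[x] = j
        x = x * g % p
    z = cmath.exp(2j * math.pi / order)
    return lambda a: z ** (dlog[a % p] % order) if math.gcd(a, p) == 1 else 0

def gauss_sum(chi, q):
    return sum(chi(a) * cmath.exp(2j * math.pi * a / q) for a in range(1, q))

q1, q2 = 5, 13
chi1 = char_mod(q1, 2, 4)        # quartic character mod 5 (2 is a primitive root)
chi2 = char_mod(q2, 2, 4)        # quartic character mod 13
chi = lambda a: chi1(a) * chi2(a)   # the product character mod 65, via CRT
lhs = gauss_sum(chi, q1 * q2)
rhs = chi2(q1) * chi1(q2) * gauss_sum(chi1, q1) * gauss_sum(chi2, q2)
print(abs(lhs - rhs))   # ≈ 0, verifying (3.16)
```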
+ Corollary 3.11. Let notation be as in Lemma 3.10. Suppose that χ is
+ totally quartic and q is odd. Then
+ (3.17)
+ τ(χ_1 χ_2)^2 = τ(χ_1)^2 τ(χ_2)^2.
+ Proof. By Lemma 3.10, we will obtain the formula provided χ_2(q_1)^2 χ_1(q_2)^2 =
+ 1. Note that χ_i^2 is the Jacobi symbol, so χ_2(q_1)^2 χ_1(q_2)^2 = (q_1/q_2)(q_2/q_1) = 1
+ by quadratic reciprocity, using that q_1 ≡ q_2 ≡ 1 (mod 4).
952
+
953
+ Lemma 3.12. Suppose π ∈ Z[i] is a primary prime, with N(π) = p ≡
+ 1 (mod 4). Let χ_π(x) = [x/π] be the quartic residue symbol. Then
+ (3.18)
+ τ(χ_π)^2 = −χ_π(−1) √p π.
+ More generally, if β is primary and squarefree with (β, 2β̄) = 1, then
+ (3.19)
+ τ(χ_β)^2 = μ(β) χ_β(−1) √N(β) β.
+ Proof. The formula for χ_π follows from [IR90, Thm. 1 (Chapter 8),
+ Prop. 9.9.4]. The formula for general β follows from Corollary 3.11
+ and Lemma 3.2.
966
+
967
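Formula (3.18) can be spot-checked numerically. A sketch for p = 13 with the primary prime π = 3 + 2i (our choice of example; one checks 3 + 2i ≡ 1 (mod (1+i)^3)):

```python
import cmath, math

p, pi = 13, (3, 2)      # pi = 3 + 2i is primary, with N(pi) = 13

def divisible(zx, zy, wx, wy):
    # exact test for w | z in Z[i]: z * conj(w) must be divisible by N(w)
    n = wx * wx + wy * wy
    return (zx * wx + zy * wy) % n == 0 and (zy * wx - zx * wy) % n == 0

units = {(1, 0): 1, (0, 1): 1j, (-1, 0): -1, (0, -1): -1j}

def chi(a):
    # quartic residue symbol [a/pi]: the unique unit u with a^((p-1)/4) ≡ u (mod pi)
    r = pow(a, (p - 1) // 4, p)
    return next(u for (ux, uy), u in units.items() if divisible(r - ux, -uy, *pi))

tau = sum(chi(a) * cmath.exp(2j * math.pi * a / p) for a in range(1, p))
predicted = -chi(-1) * math.sqrt(p) * complex(*pi)   # right side of (3.18)
print(abs(tau ** 2 - predicted))    # ≈ 0
```

For this example one finds τ(χ_π)^2 = √13 (3 + 2i), in agreement with the lemma since χ_π(−1) = −1 when (p − 1)/4 is odd.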
+ Lemma 3.13. Suppose that χ = χ_2 χ_4 is a primitive quartic character
+ with odd conductor q, with χ_2 quadratic of conductor q_2, χ_4 totally
+ quartic of conductor q_4, and with q_2 q_4 = q. Write χ_4 = χ_β as in
+ Lemma 3.2. Then
+ (3.20)
+ τ(χ)^2 = (−q_4/q_2) q_2 τ(χ_β)^2,
+ where (·/·) denotes the Jacobi symbol.
+ Proof. By Lemma 3.10, we have τ(χ)^2 = χ_2(q_4)^2 χ_4(q_2)^2 τ(χ_2)^2 τ(χ_4)^2.
+ To simplify this, note χ_2(q_4)^2 = 1, χ_4(q_2)^2 = (q_2/q_4) = (q_4/q_2), and
+ τ(χ_2)^2 = ε_{q_2}^2 q_2 = (−1/q_2) q_2.
986
+
+ Our next goal is to express τ(χ_β)^2 in terms of a Hecke Grossencharacter.
+ Define
+ (3.21)
+ λ_∞(α) = α/|α|,  α ∈ Z[i], α ̸= 0.
+ Next define a particular character λ_{1+i} : R^× → S^1, where R = Z[i]/(1+i)^3,
+ by
+ (3.22)
+ λ_{1+i}(i^k) = i^{−k},  k ∈ {0, 1, 2, 3}.
+ This indeed defines a character since R^× ≃ Z/4Z, generated by i. For
+ α ∈ Z[i], (α, 1 + i) = 1, define
+ (3.23)
+ λ((α)) = λ_{1+i}(α) λ_∞(α).
+ For this to be well-defined, we need that the right hand side of (3.23)
+ is constant on units in Z[i]. This is easily seen, since λ_∞(i^k) = i^k =
+ λ_{1+i}(i^k)^{−1}. Therefore, λ defines a Hecke Grossencharacter, as in [IK04,
+ Section 3.8]. Moreover, we note that
+ (3.24)
+ τ(χ_β)^2/N(β) = μ(β) (2/N(β)) λ((β)),
+ since this agrees with (3.19) for β primary, and is constant on units.
1017
+ According to [IK04, Theorem 3.8], the Dirichlet series
+ (3.25)
+ L(s, λ^k) = ∑_{0≠(β)⊆Z[i]} λ((β))^k / N(β)^s,  (k ∈ Z),
+ defines an L-function having analytic continuation to s ∈ C with no
+ poles except for k = 0. The same statement holds when twisting λ^k by
+ a finite-order character.
1028
+ For k ∈ Z, define the Dirichlet series
+ (3.26)
+ Z(k, s) = ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree} (τ(χ_β)^2/N(β))^k / N(β)^s,  Re(s) > 1.
1039
+ Proposition 3.14. Let δ_k = −1 for k odd, and δ_k = +1 for k even.
+ We have
+ (3.27)
+ Z(k, s) = A(k, s) L(s, (λ · χ_2)^k)^{δ_k},
+ where
+ χ_2(β) = (2/N(β)),
+ and where A(k, s) is given by an Euler product absolutely convergent
+ for Re(s) > 1/2.
1051
+
+ In particular, the zero-free region (as in [IK04, Theorem 5.35]) implies
+ that Z(k, s) is analytic in a region of the type postulated in Lemma
+ 3.5. Moreover, the proof of [MV07, Theorem 11.4] shows that Z(k, s)
+ is bounded polynomially in log(2 + |t|) in this region.
1058
+ Proof. The formula (3.24) shows that Z(k, s) has an Euler product of
+ the form
+ (3.28)
+ Z(k, s) = ∏_{(π)≠(π̄)} (1 + (−1)^k χ_2^k(π) λ^k((π)) / N(π)^s).
+ This is an Euler product over the split primes in Z[i]. We extend this
+ to include the primes p ≡ 3 (mod 4) as well, with N(π) = p^2. It is
+ convenient to define χ_2(1 + i) = 0, so we can freely extend the product
+ to include the ramified prime 1 + i. In all, we get
+ (3.29)
+ Z(k, s) = ( ∏_p (1 − χ_2^k(p) λ^k(p) / N(p)^s) )^{−δ_k} ∏_p (1 + O(p^{−2s})).
+ Note the product over p is L(s, (λ · χ_2)^k)^{δ_k}, as claimed.
1084
+
1085
+ According to Weyl's equidistribution criterion [IK04, Ch. 21.1], a
+ sequence of real numbers θ_n, 1 ≤ n ≤ N, is equidistributed modulo 1 if
+ and only if ∑_{n≤N} e(kθ_n) = o(N) for each integer k ̸= 0. We apply
+ this with e(θ_n) = τ(χ)^2/q, whence e(kθ_n) = (τ(χ)^2/q)^k. Due to the
+ twisted multiplicativity formula (3.16), the congruence class in which
+ 2k lies modulo ℓ may have a simplifying effect on τ(χ)^{2k}. For instance,
+ when ℓ = 4, then k even leads to a simpler formula than k odd. This
+ motivates treating these cases separately. As a minor simplification,
+ below we focus on the sub-family of characters of odd conductor. The
+ even conductor case is only a bit different.
1096
+ Corollary 3.15. The Gauss sums τ(χ)^2/q, for χ totally quartic of odd
+ conductor q, equidistribute on the unit circle.
+ Proof. The complex numbers τ(χ)^2/q lie on the unit circle. Weyl's
+ equidistribution criterion says that these normalized squared Gauss
+ sums equidistribute on the unit circle provided
+ (3.30)
+ ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree, N(β)≤X} (τ(χ_β)^2/N(β))^k = o(X),
1109
+
+ Figure 2. This histogram represents the distribution
+ of the argument of τ(χ)^2/cond(χ) for totally quartic
+ characters. The histogram is made by calculating the
+ Gauss sums of characters of prime and composite
+ conductor up to 300000.
1117
+ for each nonzero integer k. In turn, this bound is implied by Propo-
1118
+ sition 3.14, using the zero-free region for the Hecke Grossencharacter
1119
+ L-functions in [IK04, Theorem 5.35].
1120
+
1121
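The quantities entering (3.30) can be computed numerically. A minimal sketch for a single character (the prime p = 13 and primitive root 2 are our choices of example), using the fact that |τ(χ)|^2 = q for primitive χ:

```python
import cmath, math

def quartic_character(p, g):
    # chi(g^j) = i^j defines a totally quartic character mod p
    # (assumes p ≡ 1 mod 4 and g a primitive root mod p)
    dlog, x = {}, 1
    for j in range(p - 1):
        dlog[x] = j
        x = x * g % p
    return lambda a: 1j ** (dlog[a % p] % 4)

def gauss_sum(chi, q):
    return sum(chi(a) * cmath.exp(2j * math.pi * a / q) for a in range(1, q))

p = 13
tau = gauss_sum(quartic_character(p, 2), p)   # 2 is a primitive root mod 13
print(abs(tau) ** 2)                # ≈ 13, since |tau(chi)|^2 = p for primitive chi
print(cmath.phase(tau ** 2 / p))    # the angle whose powers enter the Weyl sums
```

Averaging (τ(χ)^2/q)^k over many conductors gives an empirical version of the Weyl sums underlying the histograms in Figures 1 and 2.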
+ To contrast this, we will show that the normalized Gauss sums τ(χ)2/q,
1122
+ with χ ranging over all quartic characters, equidistribute slowly. More
1123
+ precisely, we have the following result.
1124
+ Proposition 3.16. Let k ∈ 2Z, k ̸= 0. There exists c_k ∈ C such that
+ (3.31)
+ ∑_{q≤X, (q,2)=1} ∑_{χ: χ^4=1, cond(χ)=q} (τ(χ)^2/q)^k = c_k X + o(X).
1133
+ Remark 3.17. Recall from Lemma 3.7 that the total number of such
1134
+ characters grows like X log X, so Proposition 3.16 shows that the rate
1135
+ of equidistribution is only O((log X)−1) here. In contrast, in the family
1136
+ of totally quartic characters, the GRH would imply a rate of equidis-
1137
+ tribution of the form O(X−1/2+ε). This difference in rates of equidis-
1138
+ tribution is supported by Figure 2 in which we see that the arguments
1139
+
+ of squares of the Gauss sums of totally quartic characters quickly
+ converge to being uniformly distributed, as compared to the Gauss sums
1151
+ of all quartic characters.
1152
+ In addition, one can derive a similar result when restricting to χ ∈
1153
+ Ψ4(X), simply by subtracting off the contribution from the quadratic
1154
+ characters alone.
1155
+ Proof. As in Lemma 3.13, write χ = χ_2 χ_4, with χ_2 quadratic and χ_4
+ totally quartic. Then τ(χ)^4/(q_2 q_4)^2 = τ(χ_4)^4/q_4^2. The analog of Z(k, s),
+ using k even to simplify, is
+ (3.32)
+ Z^{all}(k, s) = ∑_{0≠(β)⊆Z[i], (β,2β̄)=1, β squarefree} (τ(χ_β)^{2k}/N(β)^k) / N(β)^s ∑_{q_2 ∈ Z_{≥1}, (q_2, 2N(β))=1} 1/q_2^s.
+ Referring to the calculation in Proposition 3.14, we obtain
+ (3.33)
+ Z^{all}(k, s) = ζ(s) L(s, λ^k) A(s),
+ where A(s) is an Euler product absolutely convergent for Re(s) > 1/2.
+ Since this generating function has a simple pole at s = 1, we deduce
+ Proposition 3.16.
1180
+
1181
+ As mentioned above, in order to deduce equidistribution by Weyl's
+ equidistribution criterion, we also need to consider odd values of k in
+ (3.31). This is more technical than the case of even k, so we content
+ ourselves with a conjecture.
+ Conjecture 3.18. For each odd k, there exists δ > 0 such that
+ (3.34)
+ ∑_{q≤X, (q,2)=1} ∑_{χ: χ^4=1, cond(χ)=q} (τ(χ)^2/q)^k ≪_{k,δ} X^{1−δ}.
1194
+ Remark 3.19. By Lemma 3.13 and (3.24), this problem reduces to
+ understanding sums of the rough shape
+ ∑_{β, q_2 : q_2 N(β) ≤ X} (−N(β)/q_2) μ(β) (2/N(β)) λ((β))^k,
+ where we have omitted many of the conditions on β and q_2. In the
+ range where q_2 is very small, the GRH gives cancellation in the sum
+ over β. Conversely, in the range where N(β) is very small, the GRH
1211
+
+ gives cancellation in the sum over q2. This discussion indicates that
1215
+ Conjecture 3.18 follows from GRH, with any δ < 1/4.
1216
+ Unconditionally, one can deduce some cancellation using the zero-free
1217
+ region for the β-sum (with q2 very small), and a subconvexity bound
1218
+ for the q2-sum (with N(β) very small). In the range where both q2
1219
+ and N(β) have some size, then Heath-Brown’s quadratic large sieve
1220
+ [HB95] gives some cancellation.
1221
+ Since we logically do not need an
1222
+ unconditional proof of equidistribution, we omit the details for brevity.
1223
+ Remark 3.20. Conjecture 3.18 and Proposition 3.16 together imply
1224
+ that the squares of the quartic Gauss sums do equidistribute in the full
1225
+ family Ψ4(X).
1226
+ 3.4.2. Sextic characters. Now we turn to the sextic Gauss sums.
1227
+ Lemma 3.21. Suppose that χ is totally sextic of conductor q, and say
+ χ = χ_2 χ_3 with χ_2 quadratic and χ_3 cubic, each of conductor q. Suppose
+ χ_3 = χ_β, as in Lemma 3.4. Then
+ (3.35)
+ τ(χ) = μ(q) χ_3(2) τ(χ_2) τ(χ_3) β̄ q^{−1}.
1232
+ Proof. By [IK04, (3.18)], τ(χ_2) τ(χ_3) = J(χ_2, χ_3) τ(χ), where J(χ_2, χ_3)
+ is the Jacobi sum. It is easy to show using the Chinese remainder
+ theorem that if χ_2 = ∏_p χ_2^{(p)} and χ_3 = ∏_p χ_3^{(p)}, then
+ (3.36)
+ J(χ_2, χ_3) = ∏_p J(χ_2^{(p)}, χ_3^{(p)}).
+ The Jacobi sum for characters of prime conductor can be evaluated
+ explicitly using the following facts. By [Lem00, Prop. 4.30],
+ (3.37)
+ J(χ_2^{(p)}, χ_3^{(p)}) = χ_3^{(p)}(2^2) J(χ_3^{(p)}, χ_3^{(p)}).
+ Suppose that χ_3^{(p)} = χ_π, where ππ̄ = p, and π is primary. Then [IR90,
+ Ch. 9, Lem. 1] implies J(χ_π, χ_π) = −π. (Warning: they state the
+ value π instead of −π, but recall their definition of primary is opposite
+ our convention. Also recall that χ_π = χ_{−π}.) Gathering the formulas,
+ we obtain
+ (3.38)
+ τ(χ_2) τ(χ_3) = τ(χ) χ_3(2)^2 ∏_{π_i | β} (−π_i) = τ(χ) χ_3(2)^2 μ(q) β.
+ Rearranging this and using β β̄ = q completes the proof.
1269
+
1270
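The Jacobi sum evaluation J(χ_π, χ_π) = −π used above can be verified numerically. A sketch for p = 7, where the primary prime −2 − 3ω (our worked choice, with N(π) = 7) gives −π = 2 + 3ω; the reduction ω ≡ 4 (mod π) is used to evaluate the cubic residue character on integers:

```python
import cmath, math

omega = cmath.exp(2j * math.pi / 3)
p, w = 7, 4          # 4 is a primitive cube root of 1 mod 7, matching omega mod pi

def chi(a):
    # cubic residue character chi_pi(a) = omega^k, where a^((p-1)/3) ≡ w^k (mod p)
    r = pow(a, (p - 1) // 3, p)
    return {1: 1, w: omega, w * w % p: omega ** 2}[r]

# Jacobi sum J(chi, chi) = sum over a mod p (a != 0, 1) of chi(a) chi(1 - a)
J = sum(chi(a) * chi(1 - a) for a in range(2, p))
print(abs(J - (2 + 3 * omega)))    # ≈ 0, confirming J(chi_pi, chi_pi) = -pi
```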
+
+ Corollary 3.22. Let conditions be as in Lemma 3.21. Then
+ (3.39)
+ τ(χ)^2/q = χ_3(4) (−1/q) τ(χ_β)^2 β̄^2 / q^2.
1281
+ Patterson [Pat78] showed that τ(χ_β)/√q is uniformly distributed on
+ the unit circle, as χ_β ranges over primitive cubic characters. The same
+ method gives equidistribution after multiplication by a Hecke
+ Grossencharacter, and so similarly to the quartic case above, we deduce:
1285
+ Corollary 3.23 (Patterson). The Gauss sums τ(χ)2/q, for χ totally
1286
+ sextic of conductor q, equidistribute on the unit circle.
1287
+ In light of Corollary 3.22, Proposition 3.16, and Conjecture 3.18, it
+ seems reasonable to conjecture that the points τ(χ)^2/q are
+ equidistributed on the unit circle, as χ varies over all sextic characters. To
+ see a limitation in the rate of equidistribution, it is convenient to
+ consider τ(χ)^6/q^3, which is multiplicative for χ sextic. For q ≡ 1 (mod 4),
+ and χ = χ_2 quadratic, we have τ(χ_2)^2/q = 1, so the quadratic part is
+ constant. For χ cubic and q ≡ 1 (mod 4),
+ (3.40)
+ τ(χ_β)^6/q^3 = μ(β) τ(χ_β)^3 β̄^3 / q^3 = q^{−1} β̄^2,
+ which is nearly a Hecke Grossencharacter. A similar formula holds for
+ χ totally sextic, namely
+ (3.41)
+ τ(χ)^6/q^3 = q^{−4} β̄^8.
1303
+ Therefore, carrying out the same steps as in Proposition 3.16 shows
+ that
+ (3.42)
+ ∑_{q≤X, q≡1 (mod 4)} ∑_{χ∈Ψ_6, cond(χ)=q} (τ(χ)^6/q^3)^k = C_k X + o(X).
1315
+ This is less of an obstruction than in the quartic case, since here the
+ rate of equidistribution is O((log X)^{−2}) instead of O((log X)^{−1}), due to
+ the fact that |Ψ_6(X)| is approximately log X times as large as |Ψ_4(X)|.
1318
+ Similarly to the discussion of the quartic case in Remarks 3.19 and
1319
+ 3.20, we make the following conjecture without further explanation.
1320
+ Conjecture 3.24. The Gauss sums τ(χ)2/q, for χ ranging in Ψ6(X),
1321
+ equidistribute on the unit circle.
1322
+
+ 3.5. Estimates for quartic and sextic characters. In order to apply
+ the random matrix theory conjectures, we need variants on Proposition
+ 3.6, Lemma 3.7, Proposition 3.8, and Lemma 3.9, as follows.
+ Lemma 3.25. For primitive Dirichlet characters χ of order ℓ, we have
+ for ℓ = 4 and ℓ = 6 that
+ (3.43)
+ ∑_{χ∈Ψ_ℓ(X)} 1/√cond(χ) ∼ 2K_ℓ √X (log X)^{d(ℓ)−2},
+ where d(ℓ) is the number of divisors of ℓ, and
+ (3.44)
+ ∑_{χ∈Ψ_ℓ^{tot}(X)} 1/√cond(χ) ∼ 2K_ℓ^{tot} √X,  ∑_{χ∈Ψ′_ℓ(X)} 1/√cond(χ) ∼ 2√X / log X.
1362
+ Proof. These estimates follow from a straightforward application of
+ partial summation, or from a minor modification of Lemma 3.5, since
+ the generating Dirichlet series for each of these sums has its pole at
+ s = 1/2 instead of at s = 1.
1366
+
1367
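The partial-summation step behind these estimates can be illustrated in the simplest case: if a counting function grows linearly, A(X) ∼ cX, then the sum weighted by 1/√n grows like 2c√X. A minimal sketch with a(n) = 1 (so c = 1):

```python
from math import sqrt

# Partial summation heuristic: sum_{n<=X} 1/sqrt(n) ~ 2*sqrt(X), analogous to
# how the conductor sums in Lemma 3.25 pick up a factor 2*sqrt(X) from
# the linear (or X log-power) growth of the character counts.
X = 10 ** 6
s = sum(1 / sqrt(n) for n in range(1, X + 1))
print(s, 2 * sqrt(X))   # the two values differ by a bounded constant (≈ -1.46)
```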
+ 4. Random matrix theory: Conjectural asymptotic
1368
+ behavior
1369
+ This section closely follows the exposition of §3 of [DFK04] and §4 of
1370
+ [DFK07].
1371
+ Let U(N) be the group of unitary N×N matrices with complex coefficients,
+ which forms a probability space with respect to the Haar measure.
1373
+ For a family of L-functions with symmetry type U(N), Katz and Sar-
1374
+ nak conjectured that the statistics of the low-lying zeros should agree
1375
+ with those of the eigenangles of random matrices in U(N) [KS99]. Let
1376
+ PA(λ) = det(A − λI) be the characteristic polynomial of A. Keating
1377
+ and Snaith [KS00] suggest that the distribution of the values of the L-
1378
+ functions at the critical point is related to the value distribution of the
1379
+ characteristic polynomials |PA(1)| with respect to the Haar measure on
1380
+ U(N).
1381
For any s ∈ C we consider the moments

MU(s, N) := ∫_{U(N)} |PA(1)|^s dHaar
JENNIFER BERG, NATHAN C. RYAN, AND MATTHEW P. YOUNG
for the distribution of |PA(1)| in U(N) with respect to the Haar measure. In [KS00], Keating and Snaith proved that

(4.1)  MU(s, N) = ∏_{j=1}^{N} Γ(j) Γ(j + s) / Γ(j + s/2)²,

so that MU(s, N) is analytic for Re(s) > −1 and has meromorphic continuation to the whole complex plane. The probability density of |PA(1)| is given by the Mellin transform

pU(x, N) = (1/2πi) ∫_{Re(s)=c} MU(s, N) x^{−s−1} ds,

for some c > −1.
In the applications to the vanishing of twisted L-functions we consider in this paper, we are only interested in small values of x, where the value of pU(x, N) is determined by the first pole of MU(s, N) at s = −1. More precisely, for x ≤ N^{−1/2}, one can show that

pU(x, N) ∼ G(1/2)² N^{1/4}  as N → ∞,

where G(z) is the Barnes G-function with special value [Bar99]

G(1/2) = exp( (3/2) ζ′(−1) − (1/4) log π + (1/24) log 2 ).
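As a quick numerical sanity check (an illustration added here, using the known numerical value ζ′(−1) ≈ −0.1654211437, which is not quoted in the text), this closed form can be evaluated directly:

```python
import math

# known numerical value of the derivative of the Riemann zeta function at -1
zeta_prime_minus_one = -0.1654211437

# G(1/2) = exp((3/2) zeta'(-1) - (1/4) log(pi) + (1/24) log 2)
G_half = math.exp(1.5 * zeta_prime_minus_one
                  - 0.25 * math.log(math.pi)
                  + math.log(2) / 24)
# G(1/2) ≈ 0.603244, the standard value of the Barnes G-function at 1/2
```
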
We will now consider the moments for the special values of twists of L-functions. We then define, for any s ∈ C, the following sum of evaluations at s = 1 of L-functions twisted by primitive order ℓ characters of conductor less than X:

(4.2)  ME(s, X) = (1/#FΨℓ,E(X)) ∑_{L(E,s,χ) ∈ FΨℓ,E(X)} |L(E, 1, χ)|^s.
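In data-driven terms, (4.2) is simply an average of powers of the sampled central values over the family; a minimal sketch (with a hypothetical list of values |L(E, 1, χ)|, not data from the paper) is:

```python
def empirical_moment(L_values, s):
    # M_E(s, X): average of |L(E,1,chi)|^s over the collected family of twists
    return sum(abs(L) ** s for L in L_values) / len(L_values)

# hypothetical sampled central values
samples = [0.0, 0.7, 1.3, 2.4]
first_moment = empirical_moment(samples, 1)  # (0.0 + 0.7 + 1.3 + 2.4)/4 = 1.1
```
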
Then, since the families of twists of order ℓ are expected to have unitary symmetry, we have

Conjecture 4.1 (Keating and Snaith Conjecture for twists of order ℓ). With the notation as above,

ME(s, X) ∼ aE(s/2) MU(s, N)  as N = 2 log X → ∞,

where aE(s/2) is an arithmetic factor depending only on the curve E.
From Conjecture 4.1, the probability density for the distribution of the special values |L(E, 1, χ)| for characters of order ℓ is

(4.3)  pE(x, X) = (1/2πi) ∫_{Re(s)=c} ME(s, X) x^{−s−1} ds
(4.4)           ∼ (1/2πi) ∫_{Re(s)=c} aE(s/2) MU(s, N) x^{−s−1} ds

as N = 2 log X → ∞. As above, when x ≤ N^{−1/2}, the value of pE(x, X) is determined by the residue of MU(s, N) at s = −1; thus it follows from (4.4) that for x ≤ (2 log X)^{−1/2},

(4.5)  pE(x, X) ∼ 2^{1/4} aE(−1/2) G(1/2)² log^{1/4}(X)  as X → ∞.
We now use the probability density of the random matrix model with the properties of the integers nE(χ) to obtain conjectures for the vanishing of the L-values |L(E, 1, χ)|. When χ is either quartic or sextic, the discretization nE(χ) is a rational integer since Z[ζℓ] ∩ R = Z when ℓ = 4 or 6.
Lemma 4.2. Let χ be a primitive Dirichlet character of order ℓ = 4 or 6. Then

|L(E, 1, χ)| = (cE,ℓ / √(cond(χ))) |nE(χ)|,

where cE,ℓ is a nonzero constant which depends only on the curve E and ℓ.
Proof. By rearranging equation (2.2) we obtain

|L(E, 1, χ)| = | Ωϵ(E) τ(χ) kE nE(χ) / cond(χ) | = |Ωϵ(E) kE nE(χ)| / √(cond(χ)) = cE,ℓ |nE(χ)| / √(cond(χ)),

using |τ(χ)| = √(cond(χ)) for primitive χ, where the nonzero constant kE is that of Proposition 2.2. □
We write

(4.6)  Prob{|L(E, 1, χ)| = 0} = Prob{|L(E, 1, χ)| < B(cond(χ))},

for some function B(cond(χ)) of the character. By Lemma 4.2 we may take B(cond(χ)) = cE,ℓ / √(cond(χ)). Note that since cE,ℓ ̸= 0, if

|nE(χ)| cE,ℓ / √(cond(χ)) < cE,ℓ / √(cond(χ)),

then |nE(χ)| < 1, and hence nE(χ) must vanish since |nE(χ)| ∈ Z≥0.
Using (4.5), we have

Prob{|L(E, 1, χ)| = 0} = ∫_0^{B(cond(χ))} 2^{1/4} aE(−1/2) G(1/2)² log^{1/4}(X) dx
                       = 2^{1/4} aE(−1/2) G(1/2)² log^{1/4}(X) B(cond(χ)).

Summing the probabilities gives

|VΨℓ,E(X)| = 2^{1/4} cE,ℓ aE(−1/2) G(1/2)² log^{1/4}(X) ∑_{cond(χ)≤X} 1/√(cond(χ)).
Thus, by the analysis in §3.3, we have

|VΨ4,E(X)| ∼ 2^{5/4} cE,4 K4 aE(−1/2) G(1/2)² log^{1/4}(X) √X log X ∼ bE,4 X^{1/2} log^{5/4} X

and

|VΨ6,E(X)| ∼ 2^{5/4} cE,6 K6 aE(−1/2) G(1/2)² log^{1/4}(X) √X (log X)² ∼ bE,6 X^{1/2} log^{9/4} X

as X → ∞.
Moreover, if we restrict to those characters that are totally quartic or sextic, we get the following estimates:

|VΨ4^tot,E(X)| ∼ 2^{5/4} cE,4 K4^tot aE(−1/2) G(1/2)² log^{1/4}(X) √X ∼ bE,4^tot X^{1/2} log^{1/4} X

and

|VΨ6^tot,E(X)| ∼ 2^{5/4} cE,6 K6^tot aE(−1/2) G(1/2)² log^{1/4}(X) √X ∼ bE,6^tot X^{1/2} log^{1/4} X

as X → ∞.
Finally, if we restrict only to those twists by characters of prime conductor, we conclude

|VΨ′4,E(X)| ∼ 2^{5/4} cE,4 aE(−1/2) G(1/2)² log^{1/4}(X) √X / log X ∼ b′E,4 X^{1/2} log^{−3/4} X

and

|VΨ′6,E(X)| ∼ 2^{5/4} cE,6 aE(−1/2) G(1/2)² log^{1/4}(X) √X / log X ∼ b′E,6 X^{1/2} log^{−3/4} X

as X → ∞.
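For orientation (an illustration added here, not a computation from the paper), the relative sizes of these predicted counts can be compared numerically; the curve-dependent constants bE,· are unknown in this sketch and set to 1:

```python
import math

def predicted_count(X, log_exponent, b=1.0):
    # heuristic count ~ b * X^(1/2) * (log X)^e, with b a curve-dependent constant
    return b * math.sqrt(X) * math.log(X) ** log_exponent

X = 7.0e5
all_quartic = predicted_count(X, 5 / 4)   # all quartic twists
all_sextic  = predicted_count(X, 9 / 4)   # all sextic twists
totally_qs  = predicted_count(X, 1 / 4)   # totally quartic/sextic twists
prime_cond  = predicted_count(X, -3 / 4)  # prime-conductor twists

# since log X > 1, the counts are ordered by the exponent of log X
assert prime_cond < totally_qs < all_quartic < all_sextic
```
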
4.1. Computations. Here we provide numerical evidence for Conjecture 1.1. The computations of the Conrey labels for the characters were done in SageMath [Sag21] and the computations of the L-functions were done in PARI/GP [PAR22]. The L-function computations were done in a distributed way on the Open Science Grid. For each curve, we generated a PARI/GP script to calculate a twisted L-function for each primitive character of order 4 and 6, and then combined the results into one file at the end. The combined wall time of all the computations was more than 50 years. The code and data are available at [BR23].
In Figure 3 we plot the points

(X, X^{1/2} log^{5/4} X / |VΨ4,11.a.1(X)|),  (X, X^{1/2} log^{−3/4} X / |VΨ′4,11.a.1(X)|),  (X, X^{1/2} log^{1/4} X / |VΨ4^tot,11.a.1(X)|),

which provide a comparison between the predicted and computed vanishings of L(E, 1, χ) for quartic characters and the curve 11.a.1. In Figure 4 we plot the analogous points for the same curve but for sextic twists. In Figure 5 we plot the points

(X, X^{1/2} log^{−3/4} X / |VΨ′4,37.a.1(X)|),  (X, X^{1/2} log^{−3/4} X / |VΨ′6,37.a.1(X)|).
Even though we are most interested in the families of all quartic and sextic twists, we include the families of twists of prime conductor because there are far fewer such characters and so we can calculate the number of vanishings up to a much larger X. We include the families of twists by totally quartic and sextic characters to highlight the transition between the family of prime conductors and the family of all conductors.
References

[Bar99] E. W. Barnes. The theory of the G-function. Quart. J. Math., 31:264–314, 1899.
[BCDT01] Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor. On the modularity of elliptic curves over Q: wild 3-adic exercises. Journal of the American Mathematical Society, pages 843–939, 2001.
[BE81] Bruce C. Berndt and Ronald J. Evans. The determination of Gauss sums. Bulletin of the American Mathematical Society, 5(2):107–129, 1981.
(a) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by quartic characters of conductor ≤ 700000. (b) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by quartic characters of prime conductor ≤ 2000000. (c) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by totally quartic characters of conductor ≤ 700000.

Figure 3. Verification of Conjecture 1.1 for quartic twists of 11.a.1.
(a) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by sextic characters of conductor ≤ 300000. (b) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by sextic characters of prime conductor ≤ 2000000. (c) The ratio of predicted vanishings to empirical vanishings of twists of the curve 11.a.1 by totally sextic characters of conductor ≤ 300000.

Figure 4. Verification of Conjecture 1.1 for sextic twists of 11.a.1.
[BR23] Jen Berg and Nathan C. Ryan. Code and data for quartic and sextic twists of elliptic curve L-functions. http://eg.bucknell.edu/~ncr006/quartic-sextic-twists-website/, 2023.
[BY10] Stephan Baier and Matthew P. Young. Mean values with cubic characters. J. Number Theory, 130(4):879–903, 2010.
[CFK+05] J. Brian Conrey, David W. Farmer, Jon P. Keating, Michael O. Rubinstein, and Nina C. Snaith. Integral moments of L-functions. Proceedings of the London Mathematical Society, 91(1):33–104, 2005.
[Cho87] Sarvadaman Chowla. The Riemann hypothesis and Hilbert's tenth problem, volume 4. CRC Press, 1987.
(a) The ratio of predicted vanishings to empirical vanishings of twists of the curve 37.a.1 by quartic characters of prime conductor ≤ 2000000. (b) The ratio of predicted vanishings to empirical vanishings of twists of the curve 37.a.1 by sextic characters of prime conductor ≤ 2000000.

Figure 5. Verification of parts of Conjecture 1.1 for twists of 37.a.1.
[CKRS00] J. B. Conrey, J. P. Keating, M. O. Rubinstein, and N. C. Snaith. On the frequency of vanishing of quadratic twists of modular L-functions. In Proceedings of the Millennial Conference on Number Theory, Urbana, Illinois, 21–26 May, 2000. A K Peters, 2000.
[DFK04] Chantal David, Jack Fearnley, and Hershy Kisilevsky. On the vanishing of twisted L-functions of elliptic curves. Experiment. Math., 13(2):185–198, 2004.
[DFK07] Chantal David, Jack Fearnley, and Hershy Kisilevsky. Vanishing of L-functions of elliptic curves over number fields. In Ranks of elliptic curves and random matrix theory, volume 341 of London Math. Soc. Lecture Note Ser., pages 247–259. Cambridge Univ. Press, Cambridge, 2007.
[FMS10] Steven Finch, Greg Martin, and Pascal Sebah. Roots of unity and nullity modulo n. Proceedings of the American Mathematical Society, 138(8):2729–2743, 2010.
[HB95] D. R. Heath-Brown. A mean value estimate for real character sums. Acta Arith., 72(3):235–275, 1995.
[HBP79] D. R. Heath-Brown and S. J. Patterson. The distribution of Kummer sums at prime arguments. J. Reine Angew. Math., 310:111–130, 1979.
[IK04] Henryk Iwaniec and Emmanuel Kowalski. Analytic number theory, volume 53 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2004.
[IR90] Kenneth Ireland and Michael Rosen. A classical introduction to modern number theory, volume 84 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1990.
[KS99] Nicholas Katz and Peter Sarnak. Zeroes of zeta functions and symmetry. Bulletin of the American Mathematical Society, 36(1):1–26, 1999.
[KS00] Jon P. Keating and Nina C. Snaith. Random matrix theory and L-functions at s = 1/2. Communications in Mathematical Physics, 214(1):91–100, 2000.
[Lem00] Franz Lemmermeyer. Reciprocity laws. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2000. From Euler to Eisenstein.
[MR21] Barry Mazur and Karl Rubin. Arithmetic conjectures suggested by the statistical behavior of modular symbols. Experimental Mathematics, pages 1–16, 2021.
[MV07] Hugh L. Montgomery and Robert C. Vaughan. Multiplicative number theory. I. Classical theory, volume 97 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2007.
[PAR22] PARI Group, Univ. Bordeaux. PARI/GP version 2.13.4, 2022. Available from http://pari.math.u-bordeaux.fr/.
[Pat78] S. J. Patterson. On the distribution of Kummer sums. J. Reine Angew. Math., 303/304:126–143, 1978.
[Pat87] Samuel J. Patterson. The distribution of general Gauss sums and similar arithmetic functions at prime arguments. Proceedings of the London Mathematical Society, 3(2):193–215, 1987.
[PHH81] S. J. Patterson, H. Halberstam, and C. Hooley. The distribution of general Gauss sums at prime arguments. Progress in Analytic Number Theory, 2:171–182, 1981.
[PPK+07] Ruth Pordes, Don Petravick, Bill Kramer, Doug Olson, Miron Livny, Alain Roy, Paul Avery, Kent Blackburn, Torre Wenaus, Frank Würthwein, Ian Foster, Rob Gardner, Mike Wilde, Alan Blatecky, John McGee, and Rob Quick. The open science grid. In J. Phys. Conf. Ser., volume 78, page 012057, 2007.
[Sag21] Sage Developers. SageMath, the Sage Mathematics Software System (Version 9.4), 2021. https://www.sagemath.org.
[SBH+09] Igor Sfiligoi, Daniel C. Bradley, Burt Holzman, Parag Mhashilkar, Sanjay Padhi, and Frank Würthwein. The pilot way to grid resources using glideinWMS. In 2009 WRI World Congress on Computer Science and Information Engineering, volume 2, pages 428–432, 2009.
[SY10] K. Soundararajan and Matthew P. Young. The second moment of quadratic twists of modular L-functions. J. Eur. Math. Soc. (JEMS), 12(5):1097–1116, 2010.
[TW95] Richard Taylor and Andrew Wiles. Ring-theoretic properties of certain Hecke algebras. Annals of Mathematics, 141(3):553–572, 1995.
[Wil95] Andrew Wiles. Modular elliptic curves and Fermat's last theorem. Annals of Mathematics, 141(3):443–551, 1995.
[WW20] Hanneke Wiersema and Christian Wuthrich. Integrality of twisted L-values of elliptic curves, 2020.

Email address: jsb047@bucknell.edu
Email address: nathan.ryan@bucknell.edu

Department of Mathematics, Bucknell University, Lewisburg, PA 17837

Email address: myoung@math.tamu.edu
Department of Mathematics, Texas A&M University, College Station, TX 77843-3368
+
DdE4T4oBgHgl3EQf6A7h/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
EdE2T4oBgHgl3EQfSgfT/content/2301.03794v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a499219f2d11a4075e042bb3576ce0a73c54ac2066f036f68fea3a3e389d22c0
3
+ size 1109781
EdE2T4oBgHgl3EQfSgfT/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f4c6fb29f10acb99336299d640eefc7efcd53edef802951b395b8c7e64abd55
3
+ size 2752557
EdE2T4oBgHgl3EQfSgfT/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:455747f96e95402cfc231e6745242eac9faa91ceb51af6bb4c30d1eb166df6a3
3
+ size 98078
EdFRT4oBgHgl3EQfBDfd/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b1fc3d061b8e034cbf32cefc740daa592490d940378ab5aca42d58272e80700
3
+ size 8388653
GdE1T4oBgHgl3EQf_Aah/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fa13dfac1e9ac35e79483c534e67d363f84ea462e8a0ea6bf33f1e6aa48907d6
3
+ size 7077933
HNE4T4oBgHgl3EQfHwxv/content/tmp_files/2301.04906v1.pdf.txt ADDED
@@ -0,0 +1,1679 @@
Practical challenges in data-driven interpolation: dealing with noise, enforcing stability, and computing realizations

Quirin Aumann∗  Ion Victor Gosea†

∗Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany.
Email: aumann@mpi-magdeburg.mpg.de, ORCID: 0000-0001-7942-5703
†Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany.
Email: gosea@mpi-magdeburg.mpg.de, ORCID: 0000-0003-3580-4116
Abstract: In this contribution, we propose a detailed study of interpolation-based data-driven methods that are of relevance in the model reduction and also in the systems and control communities. The data are given by samples of the transfer function of the underlying (unknown) model, i.e., we analyze frequency-response data. We also propose novel approaches that combine some of the main attributes of the established methods, for addressing particular issues. These include placing poles and hence enforcing stability of reduced-order models, robustness to noisy or perturbed data, and switching between different rational function representations. We mention here the classical state-space format and also various barycentric representations of the fitted rational interpolants. We show that the newly-developed approaches yield, in some cases, superior numerical results when compared to the established methods. The numerical results include a thorough analysis of various aspects related to approximation errors, choice of interpolation points, or placing dominant poles, which are tested on some benchmark models and data-sets.

Keywords: Data-driven methods, rational approximation, interpolatory methods, least squares fit, Loewner framework, frequency response data, pole placement, noisy measurements, Loewner and Cauchy matrices.

Novelty statement: This note shows that by combining the features of established data-driven rational approximation methods based on interpolation (and/or least squares fit), one can devise methods that offer additional important advantages. These include stability enforcement by placing poles in an elegant and numerically stable manner, together with robustness to noisy data.
1. Introduction

Approximation of large-scale dynamical systems is pivotal for serving the scopes of efficient simulation and designing control laws in real-time. The technique for reducing the complexity of a system is known as model order reduction (MOR) [1, 5, 13, 14]. There exist a number of methodologies for reducing large-scale models, and each method is tailored to some specific applications (mostly, but not restricted

Preprint (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg). 2023-01-13
arXiv:2301.04906v1 [math.NA] 12 Jan 2023

Q. Aumann, I. V. Gosea: Data-driven interpolation: challenges and solutions
to mechanical and electrical engineering) and to achieving certain goals (stability, passivity, or structure preservation), on top of the complexity reduction part. Data-driven MOR approaches are of particular importance when access to high-fidelity models is not explicitly granted. This means that a state-space formulation with access to internal variables is not available, yet input/output data are. Such methods circumvent the need to access an exact description of the original model and are applicable whenever classical projection-based MOR is not. Here, we mention system and control methodologies that are based on interpolation or least-squares fit of data (i.e., frequency response measurements), such as vector fitting [31], the Loewner framework [41], or the AAA algorithm [42]. Methods that use time-domain data are also of interest, including the ones that require input-output data together with those which use snapshot data (access to the state evolution), such as the classical ones in [36, 58, 59], followed by [8], [47] or [54].

We focus on interpolation-based or so-called moment matching (MM) methods, which have emerged, were developed, and were improved continuously in the last decades. The backbone of such methods is represented by rational Krylov-type approaches together with the Sylvester matrix equation interpretation [1, 11, 20]. Apart from being computationally efficient and easy to implement, MM approaches have another advantage: they do not (necessarily) require access to a full-state realization of the original dynamical system. Hence, they can be viewed as data-driven methods. Here, data are given by the moments of the system, i.e., samples of the underlying transfer function of the system (and of its derivatives) evaluated in a particular frequency range; for more details, we refer the readers to [5, 8, 41] and to Chapter 3 in [13]. The notion of a moment with respect to systems and control theory is related to the unique solution of a Sylvester matrix equation [20].

The purpose of this note is twofold; first, we intend to review and to connect three important system-theoretical model reduction methods based on interpolation that were introduced in the last 15 years:

• The Loewner framework (LF) by Mayo and Antoulas from 2007 in [41];
• The Astolfi framework (AF) from 2010 in [8];
• The Adaptive Antoulas Anderson (AAA) algorithm by Nakatsukasa, Sète and Trefethen from 2018 in [42].

Together, these three approaches were cited multiple times in various research publications, being arguably quite popular methods. However, until now, not too many connections between them were provided, neither in the automatic control, nor in the model reduction, nor in the numerical analysis communities. Together with the vector fitting algorithm (VF) in [31] (which is not based on interpolation, and is hence a purely optimization-based approach relying on least-squares fitting), these methods represent arguably the most prolific rational approximation schemes developed in the system and control community. However, VF is not the object of this study since it is not based on interpolation.

The other scope of this note is to propose a new method that is based on the three methods enumerated above, and that addresses some of the shortcomings and challenges associated with them. Basically, the idea is to combine the attributes of each method, by following the steps below.

• We make use of the order-revealing property of the LF (encoded by the rank of augmented Loewner matrices); additionally, the selection of interpolation points is done via a Loewner-CUR technique proposed in [38].
• We utilize the elegant state-space parameterization of the LTI system proposed by the AF (after imposing k interpolation conditions); this is the backbone of the methods (we also show the connection between state-space forms and barycentric forms).
• We use either the fitting step from AAA (that chooses free parameters to fit the un-interpolated data in a least-squares sense) or we impose pole placing (dominant poles are selected from those of the Loewner model); in both cases, a linear system of equations needs to be solved.
In what follows, we consider a multiple-input multiple-output (MIMO) linear time-invariant (LTI) system ΣL of dimension n described by the following system of differential equations:

(1)  ΣL : { ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t) },

with x(t) ∈ Rn as the state variable, u(t) ∈ Rm as the control inputs, and y(t) ∈ Rp as the observed outputs. Here, we have that A ∈ Cn×n, B ∈ Cn×m and C ∈ Cp×n. The transfer (matrix) function of the LTI system is given by H(s) ∈ Cp×m, with s ∈ C, as

(2)  H(s) = C(sIn − A)−1B.
It is to be noted that, for m = p = 1, the system becomes single-input single-output (SISO). We will sometimes switch between MIMO and SISO formats while presenting the methods covered in this note, since the latter allows an easier exposition of some of the results shown here.
Let si ∈ C \ σ(A), where σ(A) denotes the spectrum of matrix A ∈ Cn×n, i.e., the set of its eigenvalues. The j-moment of system ΣL at si is given by

ηj(si) = ((−1)^j / j!) (d^j/ds^j) H(s) |_{s=si},

for any integer j ⩾ 1. The 0-moment is obtained by sampling the transfer function H(s) in (2) at si, i.e., η0 = H(si). In this contribution, we restrict the analysis to matching 0-moments, i.e., samples of the transfer function H(s), and not of its derivatives. However, all methodologies shown here can be expected to cope with this as well. Moreover, in practice, inferring 0-moments from time-domain data is usually a more straightforward task; this is performed by exciting the system with harmonic inputs, and by applying spectral transformations to the outputs. Additionally, the inference of derivative values (of the transfer function) is typically susceptible to perturbations and is more challenging to attain from a numerical point of view.
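Numerically, collecting 0-moments amounts to evaluating H(si) = C(siI − A)−1B on a grid of sampling points; a minimal sketch (with a toy state-space model, not one of the benchmarks used later) is:

```python
import numpy as np

def transfer_function(A, B, C, s):
    # H(s) = C (sI - A)^{-1} B, evaluated by solving a linear system
    # instead of forming the inverse explicitly
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B)

# toy SISO example: two stable poles at -1 and -2
A = np.diag([-1.0, -2.0])
B = np.ones((2, 1))
C = np.ones((1, 2))

# 0-moments on a small frequency grid s = i*omega
samples = [transfer_function(A, B, C, 1j * w) for w in (0.0, 1.0, 10.0)]
# at s = 0: H(0) = 1/1 + 1/2 = 1.5
```
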
The paper is structured in the following way: after the introduction sets up the stage, we propose a survey of three established interpolation-based methods in Section 2. Then, the proposed methodologies are developed in Section 3, with emphasis on the one-step approach that combines optimal selection of interpolation points (chosen using CUR-DEIM) with LS fit on the rest of the data, and also the pole placement method in barycentric form that enforces dominant poles from the Loewner data-driven model. Then, Section 4 illustrates the numerical aspects of applying the methods discussed/proposed in the previous two sections to a variety of test cases (various models and data sets). Finally, Section 5 presents the conclusions and the outlook into future research.
2. A survey of established methods

In this section we discuss three established data-driven methods for rational approximation (AF, LF, and AAA, as mentioned in the previous section). The data are samples of the transfer function corresponding to the underlying dynamical system, measured on a particular frequency grid. In what follows, we mention some state-of-the-art methodologies used to measure such data, i.e., frequency response data. Typically, such measurements are produced in practice from experiments conducted in scientific laboratories using carefully calibrated machines, called spectrum analyzers (SAs). In this category we mention swept-tuned spectrum analyzers, scalar network analyzers (SNAs), and vector network analyzers (VNAs).

The SNA is an instrument that measures microwave signals by converting them to a DC voltage using a diode detector. In a VNA, information regarding both the magnitude and the phase of a microwave signal is extracted. While there are different ways to perform such measurements, the method employed by commercial VNA products (such as the Anritsu series described in [18]) is to down-convert the signal to a lower intermediate frequency in a process called harmonic sampling. This signal can then be measured directly by a tuned receiver. Compared to the SNA, the VNA is a more powerful analyzer tool. The major difference is that the VNA can also measure the phase, and not only the amplitude. With this property, the so-called scattering parameters (or S-parameters) can be processed. These can be used for identifying forward and reverse transmission and reflection characteristics. More details can be found in [18].

Preprint (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg). 2023-01-13
Q. Aumann, I. V. Gosea: Data-driven interpolation: challenges and solutions

The harmonic balance method (HBM) [43] is an established methodology in the field of electromagnetics. The HBM is used in many (if not most) commercial radio-frequency (RF) simulation tools. This is due to the fact that it has certain advantages over other common methods, namely modified nodal analysis (MNA), which make it more appropriate for stiff problems and circuits containing transmission lines, nonlinearities, and dispersive effects. More details can be found in the survey paper [49].
2.1. The one-sided moment-matching approach in [8]

The framework introduced by Astolfi in [8] (referred to as AF throughout the paper) deals with the problem of model reduction by moment matching. Although classically interpreted as a problem of interpolation of points in the complex plane, it has instead been recast as a problem of interpolation of steady-state responses. In the following we briefly review its application to linear systems. It is to be noted that the AF was steadily extended and applied to different scenarios (including nonlinear dynamical systems, pole-zero placement, and least-squares fit) [33–35, 45, 53, 55].

The moments of a linear system can be characterized in terms of the solution of Sylvester equations. Using this observation, it has been shown that the moments are in a one-to-one relation with the steady-state output response of the interconnection between a signal generator and the original linear system.

In what follows, for simplicity of exposition, it is assumed that ΣL is a minimal system (both fully controllable and fully observable). For exact definitions of minimality, controllability, and observability of LTI systems, we refer the reader to [1].

Let k ⩽ n and let S ∈ Ck×k be a non-derogatory matrix (for which the characteristic and minimal polynomials coincide) with σ(S) ∩ σ(A) = ∅, and R ∈ C1×k so that (S, R) is observable. Consider the signal generator system Σsg described by the equations

Σsg :  ω̇(t) = Sω(t),  u(t) = Rω(t).  (3)

Then, the explicit solution of (3) can be written as ω(t) = e^{St} ω(0). Hence, the control input is written as u(t) = R e^{St} ω(0). In addition, the eigenvalues of S are called interpolation points.

For a linear system ΣL, and interpolation points si ∈ C \ σ(A), for i = 1, . . . , k, consider a non-derogatory matrix S ∈ Rk×k. It follows that there exists a one-to-one relation between the moments of the system ΣL and

1. the matrix CΠ, where Π is the (unique) solution of the Sylvester equation AΠ + BR = ΠS, for any row vector R ∈ R1×k so that (R, S) is observable;

2. the steady-state response of the output y of the interconnection of system ΣL and the system Σsg, for any R and ω(0) such that the triplet (R, S, ω(0)) is minimal.

More precisely, let ∆ ∈ Rk be a column vector containing k free parameters (denoted here by δ1, δ2, . . . , δk, with δi ̸= 0, 1 ≤ i ≤ k). Then, as stated in [8], the family of linear time-invariant systems that interpolates the moments of system ΣL at the eigenvalues of the matrix S is given by

ˆΣ∆ :  ẋ̂(t) = (S − ∆R) x̂(t) + ∆ u(t),  ŷ(t) = CΠ x̂(t),  with ˆA = S − ∆R, ˆB = ∆, ˆC = CΠ,  (4)

where the matrices S and R are as before and Π is the unique solution of the Sylvester equation AΠ + BR = ΠS. Additionally, the condition σ(S) ∩ σ(S − ∆R) = ∅ needs to be enforced. It is to be noted that the free parameters explicitly enter the vector ˆB = ∆, but also the matrix ˆA, as ˆA = S − ∆R. Finally, ˆC = CΠ has fixed entries.
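The Sylvester-equation characterization above can be checked numerically. In the sketch below (all matrices hypothetical), S = diag(s1, . . . , sk) and R = 1^T, so the Sylvester equation AΠ + BR = ΠS decouples column-wise into (siI − A)Π(:,i) = B, and CΠ reproduces the 0-moments H(si).

```python
import numpy as np

# Hypothetical SISO system and interpolation points (the eigenvalues of S).
A = np.diag([-1.0, -3.0, -4.0])
B = np.ones((3, 1))
C = np.ones((1, 3))
s_pts = np.array([0.0, 1.0, 2.0])
S, R = np.diag(s_pts), np.ones((1, 3))

# Solve A P + B R = P S column by column (S diagonal => (s_i I - A) P[:, i] = B).
P = np.column_stack([np.linalg.solve(s * np.eye(3) - A, B).ravel() for s in s_pts])

residual = np.linalg.norm(A @ P + B @ R - P @ S)   # Sylvester residual, ~ 0
moments = (C @ P).ravel()                          # C P = [H(s_1), ..., H(s_k)]
H = lambda s: (C @ np.linalg.solve(s * np.eye(3) - A, B)).item()
```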
The Sylvester matrix equation for the reduced-order system is written as ˆAˆΠ + ˆBR = ˆΠS. This can be explained by the fact that the reduced-order system matches the prescribed moments of the original system, hence the same format of the two equations. Without loss of generality, one can consider the matrix ˆΠ to be the identity matrix, i.e., ˆΠ = Ik (this can be achieved by applying similarity transformations). By substituting this value into the reduced-dimension Sylvester matrix equation above, the formula ˆA = S − ∆R directly follows.

Afterwards, the free parameters collected in the vector ∆ can be chosen in order to enforce or impose additional conditions, as mentioned in [8], such as: matching with k additional imposed interpolation conditions, matching with prescribed eigenvalues, matching with prescribed relative degree, matching with prescribed zeros, matching with a passivity constraint, matching with an L2-gain constraint, or matching with a compartmental constraint.

An important aspect of the AF is the characterization of all, i.e., infinitely many, families of reduced-order models that satisfy k prescribed interpolation conditions. This is done by explicitly computing such parameterized models, for which the free parameters are the variables entering the vector ∆. The main parameterization developed here will be used as a “backbone” of the methods developed in Section 3.

As stated in the original paper, the main advantage of the AF (characterization of moments in terms of steady-state responses) is that it allows the definition of moments for systems which do not admit a clear/immediate representation in terms of transfer function(s). Hence, the author provides as examples the case of linear time-varying systems and the case of nonlinear systems. Moreover, it is stated in [8] that one disadvantage of the framework is that it requires the existence of steady-state responses. Consequently, the original system has to be (exponentially) stable. However, in most practical applications, this is a realistic requirement.
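The parameterization (4) can also be verified numerically: for any admissible choice of ∆, the model (ˆA, ˆB, ˆC) = (S − ∆R, ∆, CΠ) matches the 0-moments of the original system at the eigenvalues of S. A sketch with made-up matrices:

```python
import numpy as np

# Hypothetical original system and diagonal signal generator data (S, R).
A = np.diag([-1.0, -3.0, -4.0])
B = np.ones((3, 1))
C = np.ones((1, 3))
H = lambda s: (C @ np.linalg.solve(s * np.eye(3) - A, B)).item()

s_pts = np.array([0.0, 1.0, 2.0])
S, R = np.diag(s_pts), np.ones((1, 3))
# Pi solves A Pi + B R = Pi S (column-wise, since S is diagonal)
Pi = np.column_stack([np.linalg.solve(s * np.eye(3) - A, B).ravel() for s in s_pts])

Delta = np.array([[0.5], [1.0], [2.0]])       # free parameters, chosen arbitrarily
Ahat, Bhat, Chat = S - Delta @ R, Delta, C @ Pi
Hhat = lambda s: (Chat @ np.linalg.solve(s * np.eye(3) - Ahat, Bhat)).item()
# Interpolation holds at the eigenvalues of S for this (and any admissible) Delta.
```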
2.2. The Loewner framework in [41]

In this section we present a short summary of the Loewner framework (LF), as introduced in [41]. It is to be mentioned that the LF has its roots in the earlier work of [4], and that the LF can be considered to be a double-sided moment-matching approach (as opposed to the AF, which is one-sided).

For a tutorial paper on the LF for LTI systems, we refer the reader to [7], and for a recent extension that uses time-domain data, we refer the reader to [48]. The Loewner framework has been recently extended to certain classes of nonlinear systems, such as bilinear systems in [6] and quadratic-bilinear (QB) systems in [24], but also to linear parameter-varying systems in [28]. Additionally, issues such as stability preservation or enforcement, and passivity preservation, were tackled in the LF in [23, 29], for the former, and in [2, 12], for the latter.

The LF is based on processing frequency-domain measurements D = {(ωℓ, H(ωℓ)), ℓ = 1, . . . , N} (with ωℓ ∈ R for 1 ≤ ℓ ≤ N) corresponding to evaluations of the transfer function of the underlying (unknown/hidden) dynamical system.

The interpolation problem is formulated as shown below (for convenience of exposition, we show here only the SISO formulation). We are given data nodes and data points in the set D, partitioned into two disjoint subsets DR and DL, with DR ∪ DL = D and k + q = N, as

right data:  DR = {(λj, H(λj)), j = 1, . . . , k},  and,
left data:  DL = {(µi, H(µi)), i = 1, . . . , q},  (5)

and we seek to find a rational function ˆH(s), such that the following interpolation conditions hold:

ˆH(µi) = H(µi) := vi,  ˆH(λj) = H(λj) := wj.  (6)

The Loewner matrix L ∈ Cq×k and the shifted Loewner matrix Ls ∈ Cq×k play an important role in the LF, and are given by

L(i,j) = (vi − wj) / (µi − λj),  Ls(i,j) = (µi vi − λj wj) / (µi − λj),  (7)
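The definitions in (7) and the relation (10) stated further below can be reproduced in a few lines; the sampled function here is a hypothetical stand-in for measured data:

```python
import numpy as np

H = lambda s: 1.0 / (s + 1.0)                  # hypothetical transfer function
mu = np.array([0.1j, 0.5j, 0.9j])              # left points
lam = np.array([0.2j, 0.6j, 1.0j])             # right points
v, w = H(mu), H(lam)

# Loewner and shifted Loewner matrices, eq. (7)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Relation (10): Ls = L Lambda + V 1^T = M L + 1 W
rel1 = L @ np.diag(lam) + np.outer(v, np.ones(3))
rel2 = np.diag(mu) @ L + np.outer(np.ones(3), w)
```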
while the data vectors V ∈ Cq, W^T ∈ Ck are given by

V(i) = vi,  W(j) = wj,  for i = 1, . . . , q, j = 1, . . . , k.  (8)

Moreover, the following Sylvester matrix equations ([1, Ch. 6]) are satisfied by the Loewner and shifted Loewner matrices (here, 1q = [1 · · · 1]^T ∈ Cq):

ML − LΛ = V 1k^T − 1q W,  MLs − LsΛ = M V 1k^T − 1q W Λ,  (9)

where M = diag(µ1, . . . , µq) and Λ = diag(λ1, . . . , λk) are diagonal matrices. The following relation holds true:

Ls = LΛ + V 1k^T = ML + 1q W.  (10)

The unprocessed Loewner surrogate model, provided that k = q, is composed of the matrices

ˆE = −L,  ˆA = −Ls,  ˆB = V,  ˆC = W,  (11)

and if the pencil (L, Ls) is regular, then the function ˆH(s) satisfying the interpolation conditions in (6) can be explicitly computed in terms of the matrices in (11), as ˆH(s) = ˆC(sˆE − ˆA)−1 ˆB.
In practical applications (when processing a fairly large number of measurements), the pencil (Ls, L) is often singular. Hence, a post-processing step is required for the Loewner model in (11). In such cases, one needs to perform a singular value decomposition (SVD) of augmented Loewner matrices, to extract the dominant features and remove inherent redundancies in the data. By doing so, projection matrices X, Y ∈ Ck×r are obtained as left and, respectively, right truncated singular vector matrices:

[L  Ls] = Y S_r^(1) X̃^H,  [L^H  Ls^H]^H = Ỹ S_r^(2) X^H,  (12)

where S_r^(1), S_r^(2) ∈ Rr×r, Y ∈ Ck×r, X ∈ Cq×r, Ỹ ∈ C2q×r, X̃ ∈ Cr×2k. The truncation index r can be chosen as the numerical rank (based on a tolerance value τ > 0) or as the exact rank of the Loewner pencil (in exact arithmetic), depending on the application and data size. More details can be found in [7].

The system matrices corresponding to a projected Loewner model of dimension r can be computed as follows:

˜E = −X^H L Y,  ˜A = −X^H Ls Y,  ˜B = X^H V,  ˜C = W Y.
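The SVD-based compression and projection can be sketched as follows. The data below come from a hypothetical order-2 rational function, so the exact rank of the Loewner pencil is 2 and the truncated model reproduces the data:

```python
import numpy as np

H = lambda s: 1.0 / (s + 1.0) + 2.0 / (s + 3.0)    # hypothetical, order 2
pts = 1j * np.linspace(0.1, 10.0, 12)
mu, lam = pts[0::2], pts[1::2]                     # left / right partition
v, w = H(mu), H(lam)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# SVDs of the augmented Loewner matrices, eq. (12); truncate at numerical rank r
Y, s1, _ = np.linalg.svd(np.hstack([L, Ls]))
_, _, Xh = np.linalg.svd(np.vstack([L, Ls]))
r = int(np.sum(s1 / s1[0] > 1e-10))                # here r == 2
Yr, Xr = Y[:, :r], Xh[:r, :].conj().T

# Projected Loewner model of dimension r
E = -Yr.conj().T @ L @ Xr
Ar = -Yr.conj().T @ Ls @ Xr
Br = Yr.conj().T @ v
Cr = w @ Xr
Hr = lambda s: Cr @ np.linalg.solve(s * E - Ar, Br)
```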
We note that MIMO extensions of the LF were already proposed in the original contribution [41]. There, a tangential interpolation framework is considered. Instead of imposing interpolation of full p × m blocks, the authors prefer to interpolate the original transfer matrix function samples along certain vectors (or tangential directions). We also note that a first attempt at re-interpreting the LF in [41] as a one-sided method was made in [25]. In the latter, the main difference to the classical work in [4] was that a compression of the left (un-interpolated) data set was enforced. However, in [25], it was still unclear how to split the data, i.e., what the right data set should be (where interpolation is enforced). Finally, it is to be noted that the choice of interpolation points is crucial in the LF. An exhaustive study of different choices was proposed in [37], while a greedy strategy was proposed in [17], for scenarios in which limited experimental data are available.
2.3. The AAA algorithm in [42]

The AAA algorithm introduced in [42] is an adaptive and iterative extension of the interpolation method based on Loewner matrices, originally proposed in [4]. The main steps are as follows:

1. Express the fitted rational approximants in a barycentric representation, which represents a numerically stable way of expressing rational functions [15].

2. Select the next interpolation (support) points via a greedy scheme; basically, interpolation is enforced at the point where the (absolute or relative) error at the previous step is maximal.

3. Compute the other variables (the so-called barycentric weights) in order to enforce least-squares approximation on the non-interpolated data.

Algorithm 1 The AAA algorithm.
Require: A (discrete) set of sample points Γ ⊂ C with N points, a function f (or the evaluations of f on the set Γ, i.e., the sample values), and an error tolerance ϵ > 0.
Ensure: A rational approximant rn(s) of order (n, n) displayed in barycentric form.
1: Initialize j = 0, Γ(0) ← Γ, and r−1 ← N^{−1} Σ_{i=1}^N f(γi).
2: while |f(s) − rj−1(s)| > ϵ do
3:   Select a point zj ∈ Γ(j) for which |f(s) − rj−1(s)| attains a maximal value, where for j ≥ 1:

   rj−1(s) := ( Σ_{k=0}^{j−1} ω_k^{(j−1)} / (s − zk) )^{−1} ( Σ_{k=0}^{j−1} ω_k^{(j−1)} fk / (s − zk) ).  (13)

4:   if |f(zj) − rj−1(zj)| ≤ ϵ then
5:     Return rj−1.
6:   else
7:     fj ← f(zj) and Γ(j+1) ← Γ(j) \ {zj}.
8:   end if
9:   Find the weights ω(j) = [ω_0^{(j)}, . . . , ω_j^{(j)}] by solving a least-squares problem over z ∈ Γ(j+1):

   Σ_{k=0}^{j} (ω_k^{(j)} / (s − zk)) f(s) ≈ Σ_{k=0}^{j} ω_k^{(j)} fk / (s − zk)  ⇔  Σ_{k=0}^{j} ((f(s) − fk) / (s − zk)) ω_k^{(j)} ≈ 0  ⇔  L(j) ω(j) = 0.  (14)

   The solution of (14) is given by the (j + 1)th right singular vector of the Loewner matrix L(j) ∈ C(N−j−1)×(j+1).
10:  j ← j + 1.
11: end while

In recent years, the AAA algorithm has proven to be an accurate, fast, and reliable rational approximation tool with a fairly large range of applications. Here, we mention only a few: nonlinear eigenvalue problems [39], MOR of parameterized linear dynamical systems [16], MOR of linear systems with quadratic outputs [26], rational approximation of periodic functions [10], representation of conformal maps [22], rational approximation of matrix-valued functions [27], and signal processing with trigonometric rational functions [60]. The procedure is sketched in Algorithm 1.
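A compact, self-contained sketch of the greedy loop in Algorithm 1 follows (this is not the reference implementation; the variable names and the toy data are our own):

```python
import numpy as np

def aaa(F, Z, tol=1e-12, max_order=20):
    """Barebones AAA sketch: greedy support-point selection plus SVD weights."""
    R = np.full(F.shape, F.mean(), dtype=complex)   # r_{-1}: constant initial guess
    zs, fs, w = [], [], np.array([1.0 + 0j])
    mask = np.ones(len(Z), dtype=bool)              # points not yet chosen as support
    for _ in range(max_order):
        j = int(np.argmax(np.abs(F - R) * mask))    # greedy step: largest error
        if abs(F[j] - R[j]) <= tol:
            break
        zs.append(Z[j]); fs.append(F[j]); mask[j] = False
        Cmat = 1.0 / (Z[mask, None] - np.array(zs)[None, :])      # Cauchy matrix
        Lj = F[mask, None] * Cmat - Cmat * np.array(fs)[None, :]  # Loewner matrix, eq. (14)
        w = np.linalg.svd(Lj)[2].conj().T[:, -1]                  # last right singular vector
        num, den = Cmat @ (w * np.array(fs)), Cmat @ w            # barycentric eval, eq. (13)
        R = F.astype(complex).copy()
        R[mask] = num / den                          # interpolation holds at support points
    return zs, fs, w, R

Z = np.linspace(0.0, 2.0, 21)
F = 1.0 / (Z + 1.0) + 0j        # a degree-(0,1) rational toy target
zs, fs, w, R = aaa(F, Z)
err = np.max(np.abs(F - R))
```

For this rational toy target, two support points suffice and the greedy loop terminates early.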
It is to be mentioned that a modified version of AAA that enforces real-valued and strictly proper rational approximants was recently proposed in [30]. There, the format of the function in (13) was modified by inserting a 1 into the denominator, as follows:

˜rj(s) := ( 1 + Σ_{k=0}^{j−1} ω_k^{(j−1)} / (s − zk) )^{−1} ( Σ_{k=0}^{j−1} ω_k^{(j−1)} fk / (s − zk) ).  (15)

Consequently, the equation in (14) becomes L(j) ω(j−1) = −f(j−1), where the vector f(j−1) ∈ Cj is given by f(j−1) = [f0 f1 · · · fj−1]^T. It is to be noted that ˜rj(s) in (15) is theoretically a rational approximant of order (j − 1, j), if we do not take into account pole/zero cancellations or any other cancellations of coefficients in the numerator or denominator.
3. The proposed methodologies

3.1. Skeleton of the main methods

Similar to the methods reviewed in Section 2, we want to find an LTI system with a transfer function of the structure (1) that interpolates data provided as measurements H(si), i = 1, . . . , k, of the transfer function of the original system. We can directly put together a parametrized LTI model of dimension r = km, having km^2 degrees of freedom, with transfer function

ˆH(s) = ˆC(sIr − ˆA)−1 ˆB,  (16)

with the underlying data concatenated to

ˆC = [H(λ1) · · · H(λk)] ∈ Cp×r,  (17)

a matrix of weights Ŵi collected in

ˆB = [Ŵ1^H · · · Ŵk^H]^H ∈ Cr×m,  (18)

and ˆA ∈ Cr×r formed from a diagonal matrix populated with the interpolation points λi, perturbed by ˆB, such that

ˆA = Λ − ˆBR = diag(λ1, . . . , λk) ⊗ Im − ˆB (1k^T ⊗ Im).  (19)

Making use of the Woodbury matrix identity and denoting Λs = sIkm − Λ, the transfer function (16) can be rewritten as

ˆH(s) = ˆC Λs^{−1} ˆB (Im + R Λs^{−1} ˆB)^{−1}.  (20)

A complete derivation of (20) is given in Appendix A.1.
In the single-input single-output case (m = p = 1, hence r = k), the barycentric weights reduce to scalars and the matrices for a ROM of structure (16) are given by

ˆA = Λ − ˆBR ∈ Ck×k,  ˆB = [ŵ1 · · · ŵk]^T ∈ Ck×1,  ˆC = [H(λ1) · · · H(λk)] ∈ C1×k.  (21)

Inserting the formulae in (21) into (20), and using the notation hi := H(λi), leads to

ˆC Λs^{−1} ˆB = Σ_{i=1}^k ŵi hi / (s − λi),  (Im + R Λs^{−1} ˆB)^{−1} = 1 / (1 + Σ_{i=1}^k ŵi / (s − λi)).  (22)

Hence, the transfer function of the model in (21) is given in barycentric representation by

ˆH(s) = ( Σ_{i=1}^k ŵi hi / (s − λi) ) / ( 1 + Σ_{i=1}^k ŵi / (s − λi) ).  (23)
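The equivalence between the state-space form (21) and the barycentric form (23) is easy to confirm numerically; the data and weights below are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical interpolation points, data values, and barycentric weights.
lam = np.array([-1.0 + 1j, -1.0 - 1j, -2.0 + 0j])
h = np.array([0.3 + 0.1j, 0.3 - 0.1j, 0.5 + 0j])
wts = np.array([0.2 + 0j, 0.2 + 0j, 0.1 + 0j])

# State-space realization (21): A = Lambda - B 1^T, B = weights, C = data values
A = np.diag(lam) - np.outer(wts, np.ones(3))
H_ss = lambda s: h @ np.linalg.solve(s * np.eye(3) - A, wts)

# Barycentric form (23)
def H_bary(s):
    c = wts / (s - lam)
    return (c @ h) / (1.0 + c.sum())

diff = abs(H_ss(2.0j) - H_bary(2.0j))
```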
This can be performed analogously for the multi-input multi-output case (m = p, r = km). The first part of (20) becomes

ˆC Λs^{−1} ˆB = [H(λ1) Im (s − λ1)^{−1} · · · H(λk) Im (s − λk)^{−1}] [Ŵ1; . . . ; Ŵk] = Σ_{i=1}^k H(λi) Ŵi / (s − λi),  (24)

and the second part

(Im + R Λs^{−1} ˆB)^{−1} = ( Im + [Im · · · Im] diag(Im (s − λ1)^{−1}, . . . , Im (s − λk)^{−1}) [Ŵ1; . . . ; Ŵk] )^{−1} = ( Im + Σ_{i=1}^k Ŵi / (s − λi) )^{−1}.  (25)

Consequently, the transfer function in (20) also has a barycentric form, given by

ˆH(s) = ( Σ_{i=1}^k H(λi) Ŵi / (s − λi) ) ( Im + Σ_{i=1}^k Ŵi / (s − λi) )^{−1}.  (26)

The transfer function is defined by the choice of the interpolation points and of the weights. The interpolation points can be chosen as dominant parts of the available data or based on their location in the frequency spectrum. The weights can be computed such that the data which are not interpolated are approximated in an optimal way. Alternatively, the weights can be chosen to enforce poles at specific locations. In the following, we show different strategies for both choices.
3.2. Automatic choice of interpolation points

The approximation quality of a surrogate model of the form (16) is greatly influenced by the choice of the interpolation points λ. This choice is not always obvious, so automatic strategies are frequently employed. The Loewner framework uses the SVD to identify dominant subsets of the available data on which to enforce interpolation. Alternatively, the AAA algorithm uses a greedy scheme to minimize the error between the surrogate and the original data. Another approach, originally introduced in [37], makes use of the CUR decomposition to extract interpolation points from a relevant subset of the available data.

The CUR decomposition approximates a matrix A by a product of three low-rank matrices ˇA = ˇC ˇU ˇR, where ˇC and ˇR represent subsets of the columns, respectively rows, of A [40, 56]. In our case, the three matrices are only a byproduct; we are more interested in the interpolation points λ and µ that are associated with the columns and rows extracted as ˇC and ˇR. In combination with the skeleton for a realization described in Section 3.1, Algorithm 2 computes a surrogate model approximating a set of given transfer function data. We use the algorithm from [56] to compute the CUR decomposition and thus identify dominant parts of the original data set and their corresponding left and right interpolation points. Contrary to [37], we decompose the original Loewner matrix L rather than the augmented Loewner matrices [L Ls] and [L^H Ls^H]^H. Using all interpolation points obtained from the CUR decomposition would introduce redundant data into the surrogate. Therefore, we choose only a subset of the interpolation points: either only the left points, only the right points, or every other entry from a concatenated and sorted vector of left and right points. Together with the data associated with the chosen interpolation points, they are used to populate a rectangular Loewner matrix. We then need to compute weights for barycentric interpolation, as described in the following section. After the weights have been obtained, a surrogate model (16) can be computed from (17)–(19).
Algorithm 2 LS-Loewner with CUR.
Require: Transfer function samples {H(si)}_{i=1}^N, corresponding sampling points Ξ = {si}_{i=1}^N.
Ensure: Surrogate model ˆH(s) = ˆC(sIr − ˆA)−1 ˆB.
1: Partition the data and compute the Loewner matrix L as in (7).
2: Compute a CUR decomposition, such that L ≈ ˇC ˇU ˇR with ˇC ∈ CN×k, ˇR ∈ Ck×N.
3: Obtain interpolation points {λi}_{i=1}^k, {µi}_{i=1}^k corresponding to the columns and rows in ˇC, ˇR.
4: Postprocess the interpolation points to obtain ν = {νi}_{i=1}^k and χ = Ξ \ ν.
5: Populate a rectangular Loewner matrix L(i,j) = (H(χi) − H(νj)) / (χi − νj).
6: Compute the weights Ω = [Ŵ1^H · · · Ŵk^H]^H = −L† [H(χ1)^H · · · H(χ_{N−k})^H]^H, where L† is the pseudo-inverse of L.
7: Compute ˆA, ˆB, ˆC with (17)–(19).
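A SISO sketch of steps 5–6 (rectangular Loewner matrix plus pseudo-inverse weights) follows; the sampled function and the split into ν and χ are illustrative choices, not the CUR-based selection of the algorithm:

```python
import numpy as np

H = lambda s: 1.0 / (s + 1.0) + 1.0 / (s + 2.0)   # hypothetical data source
Xi = 1j * np.linspace(0.1, 5.0, 10)
sel = np.zeros(len(Xi), dtype=bool)
sel[::3] = True
nu, chi = Xi[sel], Xi[~sel]                        # support points / LS points

# Step 5: rectangular Loewner matrix over (chi, nu)
Lr = (H(chi)[:, None] - H(nu)[None, :]) / (chi[:, None] - nu[None, :])
# Step 6: least-squares weights via the pseudo-inverse
w = -np.linalg.pinv(Lr) @ H(chi)

# Barycentric surrogate, cf. eq. (23) with the "1 +" normalization
def H_surr(s):
    c = w / (s - nu)
    return (c @ H(nu)) / (1.0 + c.sum())

err = max(abs(H_surr(s) - H(s)) for s in chi)
```

Since the toy target is itself a low-order rational function, the least-squares fit reproduces it to high accuracy on the non-interpolated points.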
3.3. Computing the barycentric weights

3.3.1. Least-squares approach

The matrix-valued weights Ŵi can be computed similarly to AAA [27] by solving the minimization problem

min_{Ŵi} Σ_{j=1}^h ‖ ( Σ_{i=1}^k H(λi) Ŵi / (sj − λi) ) ( Im + Σ_{i=1}^k Ŵi / (sj − λi) )^{−1} − H(sj) ‖^2.  (27)

This solution can, for example, be obtained from an optimization in the least-squares sense. The weights for the SISO case are computed analogously; here, the matrix-valued weights and transfer function values reduce to scalars.
3.3.2. Pole placement

The next step is to take advantage of the degrees of freedom in the vector ˆB from (21), so that the ROM thus constructed has particular (stable) poles [21, 35, 46]. These will be denoted by ζ1, ζ2, . . . , ζk. The following derivations assume a SISO model. To enforce this, we need to make sure that the matrix ζj Ik − ˆA loses rank for all 1 ≤ j ≤ k. In what follows, we show how to enforce this property in an elegant, straightforward way. Recall that the transfer function of the parameterized AF model is given by:

ˆH(s) = ( Σ_{i=1}^k ŵi hi / (s − λi) ) / ( 1 + Σ_{i=1}^k ŵi / (s − λi) ) = N(s) / D(s).  (28)

Now, suppose we would like this transfer function to have k poles at the selected values ζj. Clearly, the condition is D(ζj) = 0, and hence we need to enforce:

1 + Σ_{i=1}^k ŵi / (ζj − λi) = 0, ∀ 1 ≤ j ≤ k  ⇔  Cζ,λ ˆB = −1k  ⇔  ˆB = −Cζ,λ^{−1} 1k,  (29)

where Cζ,λ is a Cauchy matrix defined by (Cζ,λ)_{i,j} = 1/(ζi − λj). Details on how to obtain the above expression by following the procedure in [3] are given in Appendix A.2. We note that placing poles is a difficult numerical problem which requires the inversion of a Cauchy matrix, which is highly ill-conditioned by nature.
Algorithm 3 Loewner framework with pole placement (LFPP).
Require: Transfer function samples {H(si)}_{i=1}^N, corresponding sampling points Ξ = {si}_{i=1}^N, locations for poles ζ = {ζi}_{i=1}^k, interpolation points λ = {λi}_{i=1}^k.
Ensure: Surrogate model ˆH(s) = ˆC(sIr − ˆA)−1 ˆB.
1: Compute ΣD from {H(si)}_{i=1}^N and {si}_{i=1}^N using the Loewner framework (cf. Section 2.2).
2: ˆC ← [HD(λ1) · · · HD(λk)].
3: ˆB ← −Cζ,λ^{−1} 1k.
4: ˆA ← diag(λ1, . . . , λk) − ˆB 1k^T.
Instead of doing this, one can solve Cζ,λ ˆB = −1k without inverting the Cauchy matrix explicitly, i.e., by solving a linear system of equations. Algorithm 3 summarizes this procedure in a data-driven context. The required underlying model is obtained from a set of transfer function evaluations by applying the Loewner framework. The method is illustrated for SISO systems, but can readily be extended to the MIMO case.
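Pole placement via (29) then amounts to one linear solve with the Cauchy matrix. In the sketch below the points are invented for illustration, and the check recovers the prescribed poles as eigenvalues of ˆA = Λ − ˆB 1k^T:

```python
import numpy as np

lam = np.array([0.0 + 1.0j, 0.0 - 1.0j, 0.5 + 0j])      # interpolation points
zeta = np.array([-1.0 + 2.0j, -1.0 - 2.0j, -3.0 + 0j])  # desired (stable) poles

# Cauchy matrix (C_{zeta,lam})_{ij} = 1/(zeta_i - lam_j); solve instead of inverting
Cauchy = 1.0 / (zeta[:, None] - lam[None, :])
b = np.linalg.solve(Cauchy, -np.ones(3))

# The poles of the barycentric model are the eigenvalues of A = diag(lam) - b 1^T
A = np.diag(lam) - np.outer(b, np.ones(3))
poles = np.linalg.eigvals(A)
```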
3.4. Automatic choice of poles and interpolation points

A reasonable choice of poles and interpolation points for Algorithm 3 is not always readily available, but the approximation quality of the surrogate is heavily influenced by this choice. In the following, we show an extension of Algorithm 3 which computes a surrogate model (16) without requiring sets of poles and interpolation points as input parameters. Algorithm 4 sketches the skeleton of such an automatic algorithm. Similar to Algorithm 3, it employs the Loewner framework to obtain a realization of a surrogate interpolating the provided data. Subsequently, a generalized eigendecomposition of the Loewner realization of the original data is computed to find suitable locations for poles. From this, it is possible to compute the dominance of all eigenvalues; for details, see, e.g., [51]. The algorithm then chooses the k most dominant eigenvalues as poles to enforce in the surrogate. It should be noted that only eigenvalues with negative real parts should be considered if the stability of the surrogate is important. The required interpolation points can now be chosen similarly to Algorithm 2, by computing a CUR decomposition and using the interpolation points associated with the rows or columns of the decomposition as interpolation points for the new surrogate.

The approximation of the dominant poles of the underlying model from data is less robust if the transfer function samples are disturbed by noise. This leads to a reduced approximation quality. For better performance when applied to noisy data, Algorithm 4 can be modified as follows. To obtain the poles which should be enforced, first choose manually the most prominent features in the transfer function, e.g., peaks, which should be approximated by the surrogate model. Then choose the eigenvalues whose imaginary parts are closest to the frequencies where the chosen features of the transfer function are located. The CUR decomposition also fails at extracting the most dominant rows and columns of the Loewner matrix if noisy data is assessed. Therefore, another heuristic is employed to choose the interpolation points: use the value si which corresponds to the lowest amplitude of the transfer function between the locations of two enforced poles. This leads to reasonable approximations, especially for lightly damped systems. Other approaches include choosing simply the midpoint between the locations of two poles, or specifying an offset between pole and interpolation point locations.
4. Numerical results

In the following, we demonstrate the methods discussed in Section 3 by applying them to three benchmark examples available from the MOR-Wiki1:

1http://modelreduction.org
Algorithm 4 Loewner framework with automatic pole placement (LFaPP).
Require: Transfer function samples {H(si)}_{i=1}^N, corresponding sampling points Ξ = {si}_{i=1}^N.
Ensure: Surrogate model Ĥ(s) = Ĉ(sI_r − Â)^{−1}B̂.
1: Compute ΣD from {H(si)}_{i=1}^N and {si}_{i=1}^N using the Loewner framework (cf. Section 2.2).
2: Compute the generalized eigenvalue decompositions AX = EXα and Y^H A = αY^H E for the matrices of right and left eigenvectors X, Y and the matrix of eigenvalues α = diag(α1, ..., αn).
3: Compute the eigenvalue dominance di = |C Y(:,i) αi X(:,i)^H B| / |ℜ(αi)|, i = 1, ..., n, and sort α accordingly.
4: Set ζ to the k most dominant eigenvalues.
5: Compute a CUR decomposition of L.
6: Set λ to the k right or left interpolation points corresponding to the CUR decomposition.
7: Compute the surrogate as in Algorithm 3.
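Steps 2 to 4 of Algorithm 4 can be illustrated for the special case E = I. The Python/NumPy sketch below uses a residue-based dominance measure with matched right and left eigenvectors; it is an illustrative variant (the exact scaling in the authors' MATLAB implementation may differ, and the generalized case E ≠ I needs a generalized eigensolver):

```python
import numpy as np

def dominant_eigenvalues(A, B, C, k):
    """Rank the eigenvalues alpha_i of A (case E = I) by the dominance
    d_i = ||(C x_i)(y_i^H B)|| / (|y_i^H x_i| * |Re(alpha_i)|)
    with right eigenvectors x_i and left eigenvectors y_i, and return
    the k most dominant eigenvalues with their dominance values."""
    alpha, X = np.linalg.eig(A)
    beta, Y = np.linalg.eig(A.conj().T)  # columns satisfy y^H A = alpha y^H
    d = np.empty(alpha.size)
    for i in range(alpha.size):
        j = np.argmin(np.abs(beta.conj() - alpha[i]))  # match left to right
        res = (C @ X[:, i:i + 1]) @ (Y[:, j:j + 1].conj().T @ B)
        scale = abs(Y[:, j].conj() @ X[:, i]) * abs(alpha[i].real)
        d[i] = np.linalg.norm(res, 2) / scale
    order = np.argsort(-d)
    return alpha[order[:k]], d[order[:k]]
```

For H(s) = 1/(s + 1) + 1/(s + 10), both residues equal one, so the pole at −1 is ten times more dominant than the pole at −10.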
ISS This system models the structural response of the Russian Service Module of the International Space Station (ISS) [52]. The model has n = 270 states, m = 3 inputs, and p = 3 outputs. The dataset used for the computations contains transfer function measurements at 400 logarithmically distributed points in the range [10^{−1}, 10^{2}] · ı. The model is also part of the SLICOT benchmark collection [44].

Flexible aircraft This system models lift and drag along the flexible wing of an aircraft. The system matrices are not available; we only have access to a dataset of 420 transfer function samples at linearly distributed frequencies between 0.1 and 42.0 Hz. The original dataset has one input (the gust disturbance) and 92 outputs. For the following experiments, we choose the 91st output, which corresponds to the first flexible mode [50]. The dataset is available from [57].

Sound transmission This system models the sound transmission through a system of two brass plates with an air enclosure between them. The transfer function measures the sound pressure in an adjacent acoustic cavity. The geometry is based on [32]; the data (transfer function evaluations at 1000 linearly distributed frequency values between 1 and 1000 Hz) is available from [9].
We note that no tangential interpolation (as described in [41]) is applied for the MIMO model. Instead, the Loewner matrices are constructed in a block-wise manner. The case of tangential interpolation, within the proposed approaches in this note, will be investigated in future works.

We enforce realness of all surrogate models (all matrices contain only real entries) by applying the transformation described in [7]. For this, all data must be available in complex conjugate pairs. The required transformation matrix is given by

    J = I_ℓ ⊗ (1/√2) [ I_m  I_m ; −ı I_m  ı I_m ],    (30)

with ℓ = k/2, and the real-valued quantities are obtained from Â^(ℜ) = J Â J^H, B̂^(ℜ) = J B̂, and Ĉ^(ℜ) = Ĉ J^H.
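Since J in (30) is unitary, the transformation leaves the transfer function unchanged while producing real matrices whenever the states are ordered in complex conjugate pairs. A minimal Python/NumPy sketch (illustrative only; the paper's computations are in MATLAB):

```python
import numpy as np

def make_real(A, B, C, m=1):
    """Apply the realness transformation (30), assuming the km states are
    ordered in complex conjugate pairs (x, conj(x), ...). J is unitary,
    so the transfer function is preserved; for conjugate-pair data the
    imaginary parts of the transformed matrices vanish up to rounding."""
    l = A.shape[0] // (2 * m)  # l = k/2 pair blocks
    Jblk = np.block([[np.eye(m), np.eye(m)],
                     [-1j * np.eye(m), 1j * np.eye(m)]]) / np.sqrt(2)
    J = np.kron(np.eye(l), Jblk)
    return (J @ A @ J.conj().T).real, (J @ B).real, (C @ J.conj().T).real
```

For a single conjugate pair of poles λ, conj(λ), the transformed Â^(ℜ) is the familiar 2 × 2 rotation-like block [[ℜ(λ), −ℑ(λ)], [ℑ(λ), ℜ(λ)]].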
For some of the experiments, we add artificial noise to the measurements in order to obtain perturbed data. The modified measurements are given by

    Ȟ(si) = H(si)(1 + Zi),  i = 1, ..., n,    (31)

where Zi ∈ C is the ith sample drawn from a set of random numbers Z ∼ CN(µ, σ²) following a complex normal distribution with mean µ and variance σ². Here, the real and imaginary parts of Z are independent normally distributed variables [19].
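The perturbation (31) is straightforward to reproduce. The Python/NumPy sketch below splits the variance evenly between independent real and imaginary parts, which is one common convention for CN(0, σ²); it is illustrative and not taken from the paper's MATLAB code:

```python
import numpy as np

def perturb(H_samples, sigma, rng):
    """Multiplicative complex Gaussian noise as in (31):
    H_noisy(s_i) = H(s_i) * (1 + Z_i), with Z ~ CN(0, sigma^2) realized
    as independent N(0, sigma^2/2) real and imaginary parts."""
    n = len(H_samples)
    Z = (rng.normal(0.0, sigma / np.sqrt(2), n)
         + 1j * rng.normal(0.0, sigma / np.sqrt(2), n))
    return H_samples * (1.0 + Z)
```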
We assess the approximation error of the surrogate models with an approximated L∞ norm, because many surrogates have unstable poles and, hence, the H∞ norm cannot be computed. For a given reduced order r, the L∞ error in the considered frequency range ω ∈ [ωmin, ωmax] is approximated by

    ε(r) = ( max_{ω∈[ωmin,ωmax]} ‖H(ωı) − Ĥr(ωı)‖₂ ) / ( max_{ω∈[ωmin,ωmax]} ‖H(ωı)‖₂ ) ≈ ‖H − Ĥr‖_{L∞} / ‖H‖_{L∞}.    (32)
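On a discrete frequency grid, (32) reduces to taking maxima of spectral norms over the sampled transfer function matrices. A short Python/NumPy sketch (illustrative, assuming each sample is a p × m matrix):

```python
import numpy as np

def approx_linf_error(H_samples, Hr_samples):
    """Approximated relative L-infinity error as in (32), evaluated on a
    frequency grid; ||.||_2 denotes the spectral norm of each p x m
    transfer function sample."""
    num = max(np.linalg.norm(H - Hr, 2) for H, Hr in zip(H_samples, Hr_samples))
    den = max(np.linalg.norm(H, 2) for H in H_samples)
    return num / den
```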
Note that strategies to post-process surrogates to obtain stable models have been studied in [29].

The numerical experiments have been conducted on a laptop equipped with an AMD Ryzen™ 7 PRO 5850U and 12 GB RAM running Linux Mint 21 as operating system. All algorithms have been implemented and run with MATLAB R2021b Update 2 (9.11.0.1837725).

Code and data availability
The data that support the findings of this study are openly available in Zenodo at doi:10.5281/zenodo.7490158 under the BSD-2-Clause license, authored by Quirin Aumann and Ion Victor Gosea.
4.1. Case of exact measurement data

In the following, we compare the performance of the new approach LS-Loewner to the following established strategies:

• Loewner-SVD: Truncate Loewner matrices populated with the complete dataset to order r using an SVD [7].
• Loewner-CUR: Construct a purely interpolatory model of order r using all data points chosen by the CUR decomposition, similar to [37].
• Modified AAA: Apply the strictly-proper variant of AAA [29] to the complete dataset to compute a reduced-order model of size r.
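All compared strategies start from the same raw object: the block-wise Loewner pair assembled from the transfer function samples (cf. Section 2.2). A hedged Python/NumPy sketch of that assembly (illustrative only; the paper's implementation is in MATLAB):

```python
import numpy as np

def block_loewner(mu, lam, H_mu, H_lam):
    """Block-wise Loewner and shifted Loewner matrices from p x m samples:
    L[i,j]  = (H(mu_i) - H(lam_j)) / (mu_i - lam_j),
    Ls[i,j] = (mu_i H(mu_i) - lam_j H(lam_j)) / (mu_i - lam_j),
    for left points mu_i and right points lam_j."""
    p, m = H_mu[0].shape
    L = np.empty((len(mu) * p, len(lam) * m), dtype=complex)
    Ls = np.empty_like(L)
    for i, (mi, Hi) in enumerate(zip(mu, H_mu)):
        for j, (lj, Hj) in enumerate(zip(lam, H_lam)):
            L[i*p:(i+1)*p, j*m:(j+1)*m] = (Hi - Hj) / (mi - lj)
            Ls[i*p:(i+1)*p, j*m:(j+1)*m] = (mi * Hi - lj * Hj) / (mi - lj)
    return L, Ls
```

Loewner-SVD then truncates this pair to order r via an SVD of L, while Loewner-CUR and LS-Loewner select interpolation points from its rows and columns. For data generated by an order-one system, L has numerical rank one, and the pencil (Ls, L) recovers the pole.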
We first consider the original MIMO ISS example and a SISO variant where we select the first input and output, respectively, from the MIMO system. To evaluate the overall performance of the different methods related to the size of a surrogate model, we compute the approximated L∞ errors for models with orders 6 ≤ r ≤ 60. The approximation error versus the dimension of the respective surrogate model is depicted in Figure 1 for all four methods.

Since tangential interpolation was not employed here, the order of the MIMO surrogates rises by m for each additional interpolation point, i.e., r = km. This explains the lower accuracy of the MIMO surrogate. For the maximum reduced order r = 60, k = 20 interpolation points are considered. The errors of the SISO surrogates for r = 20, i.e., k = 20, are in a similar range as in the MIMO case. The SISO surrogates reach similar levels of approximation for all employed methods. In the MIMO case, Loewner-SVD performs best. This can be explained by the following observation: the other methods always consider the complete transfer function measurement H(λi) ∈ C^{p×m} per interpolation point, while Loewner-SVD extracts only the r most dominant singular vectors for projection, regardless of which interpolation point they belong to. In turn, the other methods also consider possibly less important parts of the data, as long as one input/output combination of the respective sample is relevant for the approximation. It can also be noted that LS-Loewner and Loewner-CUR perform very similarly. This was expected, as both methods rely on the same interpolation points.
Figure 1: The approximated L∞ errors of reduced-order models of order r computed from the ISS data. Left: SISO with the first input and output, respectively. Right: MIMO with three inputs and three outputs, m = p = 3 (the number of interpolation points is k = r/m). [Figure: two plots of L∞ error (10^{−6} to 10^{0}) versus reduced order r (10 to 60) for LS-Loewner, Loewner-SVD, Loewner-CUR, and Modified AAA.]

All four methods are now employed to compute a surrogate model of size r = 108 to approximate the transfer function of the flexible aircraft model. The size of the surrogate model is determined by truncating all singular values τ < 1·10^{−6} of an underlying Loewner matrix. The transfer functions of all resulting models and their respective relative errors are given in Figure 2. Again, all methods succeed in computing a sufficiently accurate surrogate. However, the approximation quality of Loewner-CUR is noticeably worse than that of the other three methods. Given that both Loewner-CUR and LS-Loewner use the same interpolation points, the weights computed from the least-squares problem show a better performance compared to the partitioning approach used in Loewner-CUR.
4.2. Perturbed measurement data

Analyzing measurement data perturbed by noise is a challenging task for interpolatory methods such as the Loewner framework and the AAA algorithm (as pointed out in, e.g., [27]). In this experiment, we investigate the effect of noise on the performance of the four methods described above and show how enforcing poles and/or interpolation points can increase the approximation quality. In the first experiment, we consider transfer function data from the ISS model perturbed by noise with mean µ = 0 and variance σ² = 0.15. We employ LFaPP and enforce poles at ı[0.77, 2, 4, 5.6, 9.33, 37.9], near peaks of the transfer function. The resulting real-valued surrogate model has order r = 12. The transfer functions of the surrogate model with enforced poles and of reduced models computed from the same noisy data with LS-Loewner, Loewner-SVD, Loewner-CUR, and Modified AAA are given in Figure 3.

Enforcing the poles near peaks in the transfer function of the underlying data allows the surrogate to capture the behavior of the original data in a wider frequency range than applying LS-Loewner, Loewner-SVD, or Loewner-CUR. The choice of the locations in whose vicinity the poles should be chosen is, however, not automated. Figure 3 also shows the relative errors of all surrogate models referenced to the original data without noise. While the enforced poles all have negative real parts, the models computed from the variants of the LF and AAA exhibit unstable eigenvalues. Thus, pole placement can also be seen as a means to enforce stability of the surrogate models. Alternatively, a post-processing step can be added to enforce stable models (for both LF and AAA methods), as performed in [29].
Figure 2: Transfer function (top) and relative pointwise errors (bottom) for reduced-order models of size r = 108 for the aircraft model. The error is plotted only at frequencies which do not coincide with interpolation points of the respective method. [Figure: magnitude and relative error over 5–40 Hz for the original data, LS-Loewner, Loewner-SVD, Loewner-CUR, and Modified AAA.]

We now evaluate the performance of the algorithms by applying them to heavily distorted transfer function measurements of the sound transmission problem. Noise with a variance of σ² = 0.25 is considered, and three algorithms are employed to compute surrogates: Loewner-SVD, LFPP (Algorithm 3), and LFaPP (Algorithm 4). We also test the modifications to LFaPP described in Section 3.4; these results are denoted by "LFaPP mod.". For LFPP, we enforce poles at the eigenvalues of the underlying Loewner model whose imaginary parts are near 2πı[72, 189, 392, 401, 706, 856]. These locations correspond to characteristic peaks in the transfer function. Further, we choose the interpolation points at 2πı[138, 339, 369, 569, 712, 954], which lie at the dips between the enforced poles. Loewner-SVD and LFaPP do not require input parameters in addition to the measured data. Figure 4 shows the transfer functions of the resulting surrogate models in comparison to the original and noisy underlying data. It can be observed that the automatic approaches Loewner-SVD and LFaPP (mod.) cannot approximate the transfer function well after the first two peaks, i.e., for frequencies higher than 200 Hz, while LFPP approximates the original data over the complete frequency range with decent accuracy. The importance of reasonable interpolation points can be seen in the difference between LFPP and LFaPP mod., which have the same poles. It should be noted that the surrogate model computed by Loewner-SVD has two unstable poles, while the other three surrogate models are stable. It is, however, not always clear a priori how to choose the poles and interpolation points for LFPP in order to achieve the best approximation quality possible. In this example, the noise level is too high for one of the automatic approaches to yield reasonable dominant interpolation points or poles.
Figure 3: Transfer function of a surrogate with enforced poles compared to the noisy and original transfer function values. The transfer function of a model of order r = 12 computed from Loewner-SVD is given for reference. [Figure: magnitude and relative error over frequency 10^{−1} to 10^{2} for the noisy data, original data, LFaPP, LS-Loewner, Loewner-SVD, Loewner-CUR, and Modified AAA.]

5. Conclusion and outlook

In this contribution, we have proposed an extensive study of interpolation-based data-driven approaches for approximating the response of linear dynamical systems. All methods require input and output data, i.e., transfer function measurements, while direct access to the system operators or the states is not required. We showed different approaches for achieving compact surrogate models that approximate the input/output behavior of the original system, and for ensuring various properties of the surrogate models, such as stability. Strategies for working with noisy measurement data have also been addressed.

A natural extension of the framework described here is to apply the ideas of tangential interpolation as a means of modeling a MIMO system from data. Here, the tangential directions need to be incorporated in the parameterized one-sided realization. Further topics include enforcing different structures of the original model in the surrogate model, e.g., second-order or delay structures. It would also be interesting to study the possibility of placing certain stable poles while achieving interpolation in a least-squares sense. Application cases for the proposed methodology could include damping optimization. Here, a family of parameterized interpolants could be used to find optimal positions for viscous dampers in a structural system.
A. Appendix

A.1. The Woodbury matrix identity

We can expand the right part of (19), such that:

    Â = Λ − B̂ R  ⇒  sI_km − Â = (sI_km − Λ) + Û T̂ V̂,    (33)

with M̂ := sI_km − Λ, Û := [Ŵ1; ...; Ŵk] (the stacked weights, i.e., B̂), T̂ := I_m, and V̂ := [I_m · · · I_m] = R.

The Woodbury matrix identity is as follows:

    (M̂ + Û T̂ V̂)^{−1} = M̂^{−1} − M̂^{−1} Û (T̂^{−1} + V̂ M̂^{−1} Û)^{−1} V̂ M̂^{−1},

where M̂, Û, T̂, and V̂ are conformable matrices: M̂ is n × n, T̂ is k × k, Û is n × k, and V̂ is k × n.
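As a quick sanity check, the identity can be verified numerically on random conformable matrices (an illustrative Python/NumPy snippet, not part of the paper; the diagonal shifts merely keep the random matrices well conditioned):

```python
import numpy as np

# Numerical check of the Woodbury identity
# (M + U T V)^{-1} = M^{-1} - M^{-1} U (T^{-1} + V M^{-1} U)^{-1} V M^{-1}
rng = np.random.default_rng(1)
n, k = 5, 2
M = rng.standard_normal((n, n)) + n * np.eye(n)  # shift keeps M invertible
T = rng.standard_normal((k, k)) + k * np.eye(k)
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

Minv = np.linalg.inv(M)
lhs = np.linalg.inv(M + U @ T @ V)
rhs = Minv - Minv @ U @ np.linalg.inv(np.linalg.inv(T) + V @ Minv @ U) @ V @ Minv
```

The identity is exactly what allows the rank-k update in (33) to be inverted at the cost of a k × k (here m × m) inverse.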
This can be derived using blockwise matrix inversion.

Figure 4: Transfer function (top) and relative pointwise errors (bottom), as well as the added noise, for reduced-order models of size r = 12 for the sound transmission model. [Figure: magnitude and relative error over 100–1000 Hz for the noisy data, original data, Loewner-SVD, LFPP, LFaPP, and LFaPP mod.]

By denoting Λ_s = sI_km − Λ, the transfer function of the fitted model is written:
    Ĥ(s) = Ĉ (sI_km − Â)^{−1} B̂ = Ĉ (Λ_s + Û V̂)^{−1} B̂
         = Ĉ Λ_s^{−1} B̂ − Ĉ Λ_s^{−1} Û (I_m + V̂ Λ_s^{−1} Û)^{−1} V̂ Λ_s^{−1} B̂
         = Ĉ Λ_s^{−1} B̂ − Ĉ Λ_s^{−1} B̂ (I_m + R Λ_s^{−1} B̂)^{−1} R Λ_s^{−1} B̂
         = Ĉ Λ_s^{−1} B̂ (I_m − (I_m + X̂)^{−1} X̂)
         = Ĉ Λ_s^{−1} B̂ (I_m + X̂)^{−1},    (34)

where X̂ = R Λ_s^{−1} B̂. Hence, we arrive at (20), and the transfer function Ĥ(s) can be written as follows:

    Ĥ(s) = Ĉ Λ_s^{−1} B̂ (I_m + R Λ_s^{−1} B̂)^{−1}.    (20)
A.2. Pole placement as in [3]

In order to enforce both prescribed poles and certain interpolation conditions in the ROM, we follow the derivations from [3]. It is to be noted that this approach is intrusive, i.e., it requires access to the system's matrices. Hence, a descriptor model characterized in (generalized) state-space by the following equations

    ΣDes :  E ẋ(t) = A x(t) + B u(t),   y(t) = C x(t),    (35)

with corresponding transfer function HDes(s) = C(sE − A)^{−1}B, is considered to be given. For the (right) interpolation points λi, i = 1, ..., k (where interpolation is imposed), and the desired poles to be placed, denoted by the ζj's, the author in [3] starts by finding a row vector Cζ ∈ C^{1×n} so that:

    Cζ [(λ1E − A)^{−1}B · · · (λkE − A)^{−1}B] = 0_{1×k}.    (36)

Then, the next step is to choose projection matrices W, V ∈ C^{n×k} as

    W^H = [Cζ(ζ1E − A)^{−1} ; ... ; Cζ(ζkE − A)^{−1}],   V = [(λ1E − A)^{−1}B · · · (λkE − A)^{−1}B].    (37)

As explained in [3], the choice of W^H above is explained by imposing the required poles for the reduced model, while V is chosen to match the interpolation conditions at the λi's. Moreover, using these notations, it follows that CζV = 0. Next, put together the matrices Ẽ = W^H E V and Ã = W^H A V. Then, it follows that (sẼ − Ã) loses rank when s ∈ {ζ1, ..., ζk}. To show this, we simply write

    e_j^T (ζj Ẽ − Ã) = e_j^T W^H (ζj E − A) V = Cζ (ζj E − A)^{−1} (ζj E − A) V = Cζ V = 0.    (38)

Let Hζ(s) = Cζ(sE − A)^{−1}B be a rational function in s, and we note that Ẽ and Ã are a special type of diagonally scaled Cauchy matrices, with the following exact definition:

    Ẽ_{i,j} = −(Cζ(ζiE − A)^{−1}B − Cζ(λjE − A)^{−1}B) / (ζi − λj) = −Hζ(ζi) / (ζi − λj),
    Ã_{i,j} = −(ζi Cζ(ζiE − A)^{−1}B − λj Cζ(λjE − A)^{−1}B) / (ζi − λj) = −ζi Hζ(ζi) / (ζi − λj).    (39)

From the definition in (39), it follows that Ẽ = −D_B̃ C_{ζ,λ}, where D_B̃ = diag(B̃) is a diagonal matrix. Similarly, it follows that Ã = −Z D_B̃ C_{ζ,λ}, where Z = diag(ζ1, ..., ζk).

Next, we write the other projected quantities as

    B̃ = W^H B = [Hζ(ζ1) · · · Hζ(ζk)]^T,   C̃ = C V = [H(λ1) · · · H(λk)].    (40)

Hence, the reduced-order linear descriptor system Σpp : (Ẽ, Ã, B̃, C̃) matches k interpolation conditions and has the required poles.

Next, we show that this model can be written equivalently in the AF format. We first note that Ĉ = C̃. For the next step, provided that the matrix Ẽ is non-singular, we remove it by incorporating it into the other matrices, as: Ă = Ẽ^{−1}Ã, B̆ = Ẽ^{−1}B̃, Ĕ = I_k, C̆ = C̃. We note that the two realizations of the interpolatory ROM, i.e., (Â, B̂, Ĉ) in (21) and (Ă, B̆, C̆) introduced above, are actually identical. The reason for this is that C̆ = Ĉ and the two ROMs match the same k moments. Hence, it also follows that B̆ = B̂. Now, since B̆ = Ẽ^{−1}B̃ and Ẽ = −D_B̃ C_{ζ,λ}, we can write that

    B̂ = −(D_B̃ C_{ζ,λ})^{−1} B̃ = −C_{ζ,λ}^{−1} D_B̃^{−1} B̃ = −C_{ζ,λ}^{−1} 1_k.    (41)

Hence, the above choice of the vector B̂ in (21) imposes the required poles.
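The pole-placing property of (41) is easy to verify numerically in the SISO case: with C_{ζ,λ}[i,j] = 1/(ζi − λj) and b̂ = −C_{ζ,λ}^{−1} 1_k, the eigenvalues of Â = Λ − b̂ [1 · · · 1] are exactly the prescribed ζ, since 1 + Σ_i b̂_i/(ζj − λi) = 1 + (C_{ζ,λ} b̂)_j = 0 for every j. A short Python/NumPy sketch (illustrative only):

```python
import numpy as np

def bhat_for_poles(zeta, lam):
    """Compute b_hat = -C^{-1} 1_k as in (41), where C is the Cauchy
    matrix C[i,j] = 1/(zeta_i - lam_j). In the SISO parameterized
    realization A_hat = diag(lam) - b_hat @ [1 ... 1], this choice
    places the poles at zeta."""
    Cz = 1.0 / (zeta[:, None] - lam[None, :])
    return -np.linalg.solve(Cz, np.ones(zeta.size))
```

For example, with interpolation points λ = (0.5, 1.5) and desired poles ζ = (−1, −2), the resulting Â has eigenvalues −1 and −2.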
References

[1] A. C. Antoulas. Approximation of Large-Scale Dynamical Systems, volume 6 of Adv. Des. Control. SIAM Publications, Philadelphia, PA, 2005. doi:10.1137/1.9780898718713.

[2] A. C. Antoulas. A new result on passivity preserving model reduction. Syst. Control Lett., 54(4):361–374, 2005. doi:10.1016/j.sysconle.2004.07.007.

[3] A. C. Antoulas. Polplatzierung bei der Modellreduktion (On pole placement in model reduction). at-Automatisierungstechnik, 55(9):443–448, 2007. doi:10.1524/auto.2007.55.9.443.
[4] A. C. Antoulas and B. D. O. Anderson. On the scalar rational interpolation problem. IMA J. Math. Control. Inf., 3(2-3):61–88, 1986. doi:10.1093/imamci/3.2-3.61.

[5] A. C. Antoulas, C. A. Beattie, and S. Gugercin. Interpolatory Methods for Model Reduction. Computational Science & Engineering. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2020. doi:10.1137/1.9781611976083.

[6] A. C. Antoulas, I. V. Gosea, and A. C. Ionita. Model reduction of bilinear systems in the Loewner framework. SIAM J. Sci. Comput., 38(5):B889–B916, 2016. doi:10.1137/15M1041432.

[7] A. C. Antoulas, S. Lefteriu, and A. C. Ionita. A tutorial introduction to the Loewner framework for model reduction. In Model Reduction and Approximation, chapter 8, pages 335–376. SIAM, 2017. doi:10.1137/1.9781611974829.ch8.

[8] A. Astolfi. Model reduction by moment matching for linear and nonlinear systems. IEEE Trans. Automat. Contr., 55(10):2321–2336, 2010. doi:10.1109/TAC.2010.2046044.

[9] Q. Aumann. Matrices for a sound transmission problem. Hosted at MORwiki – Model Order Reduction Wiki, 2022. doi:10.5281/zenodo.7300347.

[10] P. J. Baddoo. The AAAtrig algorithm for rational approximation of periodic functions. SIAM J. Sci. Comput., 43(5):A3372–A3392, 2021. doi:10.1137/20M1359316.

[11] Z. Bai. Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math., 43(1-2):9–44, 2002. doi:10.1016/S0168-9274(02)00116-2.

[12] P. Benner, P. Goyal, and P. Van Dooren. Identification of port-Hamiltonian systems from frequency response data. Syst. Control Lett., 143:104741, 2020. doi:10.1016/j.sysconle.2020.104741.

[13] P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W. Schilders, and L. M. Silveira. System- and Data-Driven Methods and Algorithms. De Gruyter, 2021. doi:10.1515/9783110498967.

[14] P. Benner, M. Ohlberger, A. Cohen, and K. Willcox. Model Reduction and Approximation. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2017. doi:10.1137/1.9781611974829.

[15] J.-P. Berrut and L. N. Trefethen. Barycentric Lagrange interpolation. SIAM Rev., 46(3):501–517, 2004. doi:10.1137/S0036144502417715.

[16] A. Carracedo Rodriguez, L. Balicki, and S. Gugercin. The p-AAA algorithm for data driven modeling of parametric dynamical systems. e-print 2003.06536, arXiv, 2020. math.NA. URL: https://arxiv.org/abs/2003.06536.

[17] K. Cherifi, P. Goyal, and P. Benner. A greedy data collection scheme for linear dynamical systems. Data-Centric Eng., 3, 2022. doi:10.1017/dce.2022.16.

[18] Anritsu Corporation. Understanding Vector Network Analysis. URL: https://www.rekirsch.at/user_html/1282834349/pix/a/media/ME7838A/Understanding_Vector_Network_Analysis.pdf.

[19] Z. Drmač and B. Peherstorfer. Learning low-dimensional dynamical-system models from noisy frequency-response data with Loewner rational interpolation. In Realization and Model Reduction of Dynamical Systems, pages 39–57. Springer International Publishing, 2022. doi:10.1007/978-3-030-95157-3_3.

[20] K. Gallivan, A. Vandendorpe, and P. Van Dooren. Sylvester equations and projection-based model reduction. J. Comput. Appl. Math., 162(1):213–229, 2004. doi:10.1016/j.cam.2003.08.026.
[21] E. García-Canseco, A. Alvarez-Aguirre, and J. M. A. Scherpen. Modeling for control of a kinematic wobble-yoke Stirling engine. Renew. Energy, 75:808–817, 2015. doi:10.1016/j.renene.2014.10.038.

[22] A. Gopal and L. N. Trefethen. Representation of conformal maps by rational functions. Numer. Math., 142(2):359–382, 2019. doi:10.1007/s00211-019-01023-z.

[23] I. V. Gosea and A. C. Antoulas. Stability preserving post-processing methods applied in the Loewner framework. In IEEE 20th Workshop on Signal and Power Integrity (SPI), Turin, Italy, May 8–11, pages 1–4, 2016. doi:10.1109/SaPIW.2016.7496283.

[24] I. V. Gosea and A. C. Antoulas. Data-driven model order reduction of quadratic-bilinear systems. Numer. Linear Algebra Appl., 25(6):e2200, 2018. doi:10.1002/nla.2200.

[25] I. V. Gosea and A. C. Antoulas. The one-sided Loewner framework and connections to other model reduction methods based on interpolation. IFAC-PapersOnLine, 55(30):377–382, 2022. doi:10.1016/j.ifacol.2022.11.082.

[26] I. V. Gosea and S. Gugercin. Data-driven modeling of linear dynamical systems with quadratic output in the AAA framework. J. Sci. Comput., 91(1):1–28, 2022. doi:10.1007/s10915-022-01771-5.

[27] I. V. Gosea and S. Güttel. Algorithms for the rational approximation of matrix-valued functions. SIAM J. Sci. Comput., 43(5):A3033–A3054, 2021. doi:10.1137/20m1324727.

[28] I. V. Gosea, M. Petreczky, and A. C. Antoulas. Reduced-order modeling of LPV systems in the Loewner framework. In 2021 60th IEEE Conference on Decision and Control (CDC), pages 3299–3305, 2021. doi:10.1109/CDC45484.2021.9683742.

[29] I. V. Gosea, C. Poussot-Vassal, and A. C. Antoulas. On enforcing stability for data-driven reduced-order models. In 29th Mediterranean Conference on Control and Automation (MED), Virtual, pages 487–493, 2021. doi:10.1109/MED51440.2021.9480216.

[30] I. V. Gosea, C. Poussot-Vassal, and A. C. Antoulas. On Loewner data-driven control for infinite-dimensional systems. In 2021 European Control Conference (ECC), pages 93–99. IEEE, 2021. doi:10.23919/ECC54610.2021.9655097.

[31] B. Gustavsen and A. Semlyen. Rational approximation of frequency domain responses by vector fitting. IEEE Trans. Power Del., 14(3):1052–1061, 1999. doi:10.1109/61.772353.

[32] R. W. Guy. The transmission of airborne sound through a finite panel, air gap, panel and cavity configuration - a steady state analysis. Acta Acust. United Acust., 49(4):323–333, 1981.

[33] T. C. Ionescu and A. Astolfi. Nonlinear moment matching-based model order reduction. IEEE Trans. Autom. Contr., 61(10):2837–2847, 2016. doi:10.1109/TAC.2015.2502187.

[34] T. C. Ionescu, A. Astolfi, and P. Colaneri. Families of moment matching based, low order approximations for linear systems. Syst. Control Lett., 64:47–56, 2014. doi:10.1016/j.sysconle.2013.10.011.

[35] T. C. Ionescu, O. V. Iftime, and I. Necoara. Model reduction with pole-zero placement and high order moment matching. Automatica, 138:110140, 2022. doi:10.1016/j.automatica.2021.110140.

[36] J. N. Juang and R. S. Pappa. An eigensystem realization algorithm for modal parameter identification and model reduction. J. Guid. Control Dyn., 8(5):620–627, 1985. doi:10.2514/3.20031.
[37] D. S. Karachalios, I. V. Gosea, and A. C. Antoulas. The Loewner framework for system identification and reduction. In P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W. H. A. Schilders, and L. M. Silveira, editors, Methods and Algorithms, volume 1 of Handbook on Model Reduction. De Gruyter, 2021. doi:10.1515/9783110498967-006.

[38] D. S. Karachalios, I. V. Gosea, and A. C. Antoulas. On bilinear time-domain identification and reduction in the Loewner framework. In Model Reduction of Complex Dynamical Systems, volume 171 of International Series of Numerical Mathematics, pages 3–30. Birkhäuser, Cham, 2021. doi:10.1007/978-3-030-72983-7_1.

[39] P. Lietaert, K. Meerbergen, J. Pérez, and B. Vandereycken. Automatic rational approximation and linearization of nonlinear eigenvalue problems. IMA J. Numer. Anal., 42(2):1087–1115, 2022. doi:10.1093/imanum/draa098.

[40] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proc. Natl. Acad. Sci., 106(3):697–702, 2009. doi:10.1073/pnas.0803205106.

[41] A. J. Mayo and A. C. Antoulas. A framework for the solution of the generalized realization problem. Linear Algebra Appl., 425(2-3):634–662, 2007. doi:10.1016/j.laa.2007.03.008.

[42] Y. Nakatsukasa, O. Sète, and L. N. Trefethen. The AAA algorithm for rational approximation. SIAM J. Sci. Comput., 40(3):A1494–A1522, 2018. doi:10.1137/16M1106122.

[43] M. Nakhla and J. Vlach. A piecewise harmonic balance technique for determination of periodic response of nonlinear systems. IEEE Trans. Circuits Syst., 23(2):85–91, 1976. doi:10.1109/TCS.1976.1084181.

[44] Niconet e.V., http://www.slicot.org. SLICOT - Subroutine Library in Systems and Control Theory.

[45] A. Padoan. On model reduction by least squares moment matching. In 2021 60th IEEE Conference on Decision and Control (CDC), pages 6901–6907. IEEE, 2021. doi:10.1109/CDC45484.2021.9683008.

[46] A. Padoan and A. Astolfi. Model reduction by moment matching for ZIP systems. In 53rd IEEE Conference on Decision and Control. IEEE, 2014. doi:10.1109/cdc.2014.7039954.

[47] B. Peherstorfer, S. Gugercin, and K. Willcox. Data-driven reduced model construction with time-domain Loewner models. SIAM J. Sci. Comput., 39(5):A2152–A2178, 2017. doi:10.1137/16M1094750.

[48] B. Peherstorfer and K. Willcox. Data-driven operator inference for nonintrusive projection-based model reduction. Comp. Meth. Appl. Mech. Eng., 306:196–215, 2016. doi:10.1016/j.cma.2016.03.025.

[49] J. C. Peyton Jones and K. S. A. Yaser. Recent advances and comparisons between harmonic balance and Volterra-based nonlinear frequency response analysis methods. Nonlinear Dyn., 91(1):131–145, 2018. doi:10.1007/s11071-017-3860-z.
1635
+ [50] C. Poussot-Vassal, D. Quero, and P. Vuillemin.
1636
+ Data-driven approximation of a high fidelity
1637
+ gust-oriented flexible aircraft dynamical model.
1638
+ In IFAC PaperOnLine (9th Vienna Interna-
1639
+ tional Conference on Mathematical Modelling), volume 51, pages 559–564, 2018. doi:10.1016/
1640
+ j.ifacol.2018.03.094.
1641
+ [51] J. Rommes and N. Martins.
1642
+ Efficient computation of transfer function dominant poles using
1643
+ subspace acceleration.
1644
+ IEEE Trans. Power Syst., 21(3):1218–1226, aug 2006.
1645
+ doi:10.1109/
1646
+ tpwrs.2006.876671.
1647
+ [52] S. Gugercin, A. C. Antoulas, and M. Bedrossian. Approximation of the International Space Station 1R and 12A flex models. In Proceedings of the IEEE Conference on Decision and Control, pages 1515–1516, 2001. doi:10.1109/CDC.2001.981109.
+ [53] G. Scarciotti and A. Astolfi. Nonlinear model reduction by moment matching. Found. Trends Syst. Control, 4(3–4):224–409, 2017. doi:10.1561/2600000012.
+ [54] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech., 656:5–28, 2010. doi:10.1017/S0022112010001217.
+ [55] J. D. Simard and A. Astolfi. Nonlinear model reduction in the Loewner framework. IEEE Trans. Autom. Contr., 66(12):5711–5726, 2021. doi:10.1109/TAC.2021.3110809.
+ [56] D. C. Sorensen and M. Embree. A DEIM induced CUR factorization. SIAM J. Sci. Comput., 38(3):A1454–A1482, 2016. doi:10.1137/140978430.
+ [57] The MORwiki Community. Flexible aircraft. Hosted at MORwiki – Model Order Reduction Wiki, 2018. URL: https://morwiki.mpi-magdeburg.mpg.de/morwiki/index.php/Flexible_Aircraft.
+ [58] P. Van Overschee and B. De Moor. N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, 30(1):75–93, 1994. doi:10.1016/0005-1098(94)90230-5.
+ [59] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems: Theory—Implementation—Applications. Springer Science & Business Media, 2012.
+ [60] H. Wilber, A. Damle, and A. Townsend. Data-driven algorithms for signal processing with trigonometric rational functions. SIAM J. Sci. Comput., 44(3):C185–C209, 2022. doi:10.1137/21M1420277.