1Torch was not compiled with flash attention.

  • venv\lib\site-packages\diffusers\models\attention_processor.py: UserWarning: 1Torch was not compiled with flash attention. Anyone know if this is important? My Flux has been running incredibly slowly since I updated ComfyUI today.
  • Running the script (….py) works, but it raises a UserWarning: 1Torch was not compiled with flash attention, triggered from scaled_dot_product_attention.
  • Sep 14, 2024 · Expected Behavior: Hello! I have two problems! The first one doesn't seem to be so serious, because the program runs anyway; the second one always crashes the program when using the file "flux1-dev-fp8.…" (…cpp:263.)
  • Feb 9, 2024 · ComfyUI Revision: 1965 [f44225f] | Released on '2024-02-09'. I just got a new Win 11 box, so I installed ComfyUI on a completely unadulterated machine.
  • "…", I suspected it was a system problem, so I installed WSL and used Ubuntu 20….
  • …py:407: UserWarning: 1Torch was not compiled with flash attention.
  • …py:345: UserWarning: 1Torch was not compiled with flash attention.
  • …6876699924468994 seconds. Notice the following: 1) I am using float16 on CUDA, because flash-attention supports float16 and bfloat16. Same here.
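
For anyone hitting this: the message comes from torch.nn.functional.scaled_dot_product_attention when the installed PyTorch build (typically the Windows wheels) was compiled without the flash-attention kernel, so SDPA silently falls back to the memory-efficient or math backend. The model still runs; it is just slower and noisier in the log. Below is a minimal sketch of how to inspect which backends are available, pin SDPA to the backends that are actually compiled in, and silence the warning if you want. It assumes a PyTorch 2.x build with a CUDA device; the tensor shapes are arbitrary placeholders, not anything ComfyUI-specific.

    import warnings
    import torch
    import torch.nn.functional as F

    # Which scaled_dot_product_attention backends does this build enable?
    print("flash SDP:        ", torch.backends.cuda.flash_sdp_enabled())
    print("mem-efficient SDP:", torch.backends.cuda.mem_efficient_sdp_enabled())
    print("math SDP:         ", torch.backends.cuda.math_sdp_enabled())

    # Placeholder query/key/value tensors: (batch, heads, seq_len, head_dim).
    q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
    k = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
    v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

    # Restrict SDPA to backends that are compiled in, so the flash path
    # (and its "1Torch was not compiled with flash attention" warning)
    # is never attempted.
    with torch.backends.cuda.sdp_kernel(enable_flash=False,
                                        enable_math=True,
                                        enable_mem_efficient=True):
        out = F.scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 8, 128, 64])

    # Or simply hide the (harmless) warning:
    warnings.filterwarnings(
        "ignore",
        message="1Torch was not compiled with flash attention",
    )

On PyTorch 2.3+ the torch.backends.cuda.sdp_kernel context manager is deprecated in favor of torch.nn.attention.sdpa_kernel, but it still works. If you want the warning gone for good rather than suppressed, the usual route is a Linux/WSL install or a wheel that was built with flash attention, which matches the WSL/Ubuntu workaround mentioned above.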