
{"id":4938,"date":"2024-02-25T02:14:58","date_gmt":"2024-02-25T07:14:58","guid":{"rendered":"https:\/\/ikriv.com\/blog\/?p=4938"},"modified":"2024-02-25T02:29:46","modified_gmt":"2024-02-25T07:29:46","slug":"running-pytorch-with-apache-mod_wsgi","status":"publish","type":"post","link":"https:\/\/ikriv.com\/blog\/?p=4938","title":{"rendered":"Running Pytorch with Apache mod_wsgi"},"content":{"rendered":"<p><strong>TL;DR<\/strong> make sure to add magic words <code>WSGIApplicationGroup %{GLOBAL}<\/code> to the Apache config, otherwise <code>import torch<\/code> will hang.<\/p>\n<p>I tried to integrate my PyTorch AI model with my Apache web site, so I could play with it interactively. I chose to use raw WSGI script, since I did not want to invest in creating a full-blown Django or Flask solution. By the way, here&#8217;s a <a href=\"https:\/\/pytorch.org\/tutorials\/intermediate\/flask_rest_api_tutorial.html\">tutorial on how to inregrate Pytorch with Flask<\/a>.<\/p>\n<p>The &#8216;hello-world&#8217; WSGI script worked, but importing torch caused WSGI process to hang, with eventual &#8220;gateway timeout&#8221; returned to client.<\/p>\n<p>After a few hours, I found the reason. WSGI uses Python sub-interpreter by default, and apparently PyTorch cannot run in a sub-inerpeter. To prevent WSGI from using a sub-interpreter, it should run in daemon mode as part of &#8220;global group&#8221;. The working version of my Apache virtual host config contains the following WSGI-related directives:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\nWSGIScriptAlias \/api\/wsgi \/var\/www\/mysite\/api\/wsgi.py\r\nWSGIDaemonProcess mysite processes=2 threads=5 display-name=ivk-wsgi\r\nWSGIApplicationGroup %{GLOBAL}\r\n<\/pre>\n<p>Additional notes:<\/p>\n<ul>\n<li>WSGI process still shows up as &#8216;apache2&#8217; when listing processes via <code>ps -A<\/code>. 
It shows up as &#8216;ivk-wsgi&#8217; when using <code>ps -ax<\/code>.<\/li>\n<li>The WSGI process cannot run as root; it runs as www-data.<\/li>\n<li>In a docker container, gdb by default refuses to attach to another user&#8217;s processes, even if you are root.<\/li>\n<li>To overcome that, use the <code>--privileged<\/code> switch as follows: <code>docker exec --privileged -it <i>container<\/i> bash<\/code><\/li>\n<\/ul>\n<p>The problem with hanging torch affects any WSGI framework hosted under mod_wsgi, including Django and Flask, as evidenced by this StackOverflow question:<br \/>\nhttps:\/\/stackoverflow.com\/questions\/62788479\/how-to-use-pytorch-in-flask-and-run-it-on-wsgi-mod-for-apache2.<\/p>\n<p>So, even if I had gone ahead with the Flask tutorial, I would have faced the same problem, with more moving parts to debug.<\/p>\n<p>PS. Maybe I should have listened to the <a href=\"https:\/\/medium.com\/django-deployment\/which-wsgi-server-should-i-use-a70548da6a83\">advice to use gunicorn instead of mod_wsgi<\/a>, but using modules seemed cleaner, and &#8220;gunicorn&#8221; also has pronunciation issues. Do you render it as &#8220;gunny corn&#8221;, &#8220;goony corn&#8221;, or &#8220;gee unicorn&#8221; (<a href=\"https:\/\/github.com\/benoitc\/gunicorn\/issues\/139\">answer<\/a>)? Anyway, I ended up using mod_wsgi.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>TL;DR make sure to add the magic words WSGIApplicationGroup %{GLOBAL} to the Apache config, otherwise import torch will hang. 
I tried to integrate my PyTorch AI model with my Apache web <a href=\"https:\/\/ikriv.com\/blog\/?p=4938\" class=\"more-link\">[&hellip;]<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"Layout":"","footnotes":""},"categories":[4],"tags":[],"class_list":["entry","author-ikriv","post-4938","post","type-post","status-publish","format-standard","category-hack"],"_links":{"self":[{"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/4938","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4938"}],"version-history":[{"count":8,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/4938\/revisions"}],"predecessor-version":[{"id":4946,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/4938\/revisions\/4946"}],"wp:attachment":[{"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4938"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4938"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ikriv.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4938"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
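The post's config points `WSGIScriptAlias` at a `wsgi.py` script. A minimal sketch of such a script is below; the response body and the commented-out torch usage are illustrative assumptions, not the author's actual code. With `WSGIApplicationGroup %{GLOBAL}` in place, the script runs in the main interpreter, so a top-level `import torch` would be safe; it is commented out here so the sketch stays runnable without PyTorch installed.

```python
# Minimal sketch of /var/www/mysite/api/wsgi.py (path from the
# WSGIScriptAlias directive in the post; the response body is a
# made-up placeholder). Under WSGIApplicationGroup %{GLOBAL} this
# runs in the main Python interpreter, avoiding the sub-interpreter
# hang, so a top-level torch import would work here:
# import torch

def application(environ, start_response):
    """Entry point that mod_wsgi invokes once per request."""
    body = b"hello from wsgi"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

`application` is the callable name mod_wsgi looks for by default; any WSGI-compliant callable with this signature (PEP 3333) will do.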