ChatGPT is too human?

I am starting to suspect that ChatGPT is emulating humans too well when it comes to parsing requirements. I asked it to write a Python program that, among other things, writes to a socket, with the requirement that the writes be serialized. It wrote a program that (almost) worked, but glossed over the serialization requirement. Then the following conversation took place:

Me: Awesome! What about serialization? I don’t see any mutexes or anything like that.
ChatGPT: You’re right, my apologies for overlooking the requirement for serialization. To ensure that writing to the output socket is serialized, we can use a threading.Lock object to protect access to the socket. Here’s the updated program…

And… it added the lock to the program! Why did the machine gloss over that requirement in the first place? Did the requirement strike it as unimportant? Was it saving cycles, because its time is expensive? It is hard to tell, but the idea of a machine “overlooking” something and then fixing it is definitely not something we’re used to.
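For concreteness, here is a minimal sketch of the kind of fix ChatGPT described: a single threading.Lock guarding all writes to a shared socket. The original program isn’t shown in the post, so the class and method names here are my own illustration, not ChatGPT’s actual code.

```python
import socket
import threading


class SerializedSocketWriter:
    """Wraps a socket so that writes from multiple threads are serialized."""

    def __init__(self, sock: socket.socket) -> None:
        self._sock = sock
        self._lock = threading.Lock()  # guards every write to the socket

    def send(self, data: bytes) -> None:
        # Only one thread at a time may hold the lock, so concurrent
        # messages cannot interleave mid-write on the wire.
        with self._lock:
            self._sock.sendall(data)
```

Without the lock, two threads calling sendall concurrently could interleave partial writes; with it, each message goes out atomically with respect to the others.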

Print out of the conversation (PDF)


  1. Could it be a man behind the “curtain” there that you just discovered?


  2. I don’t really understand how it’s able to be this smart. Interesting point of yours, but still: wow.

