Interestingly, at two points in the discussion, at 00:33:21 and again at 02:18:30, Tegmark draws a distinction between unidirectional transformer models and recurrent neural networks, and suggests that both what humans call reasoning and consciousness require recurrence. But given how developers can wrap applications around transformer models, as I previously discussed in relation to Auto-GPT, Baby AGI, and Generative Agents, it may be that recurrence can be achieved for these models by adding external storage, retrieval, and feedback mechanisms that communicate with the model in natural language.
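
To make that idea concrete, here is a minimal sketch of what such a wrapper might look like, assuming a hypothetical `call_llm` function standing in for whatever model API is actually used (this is not how Auto-GPT or the others are implemented, just the general shape of the loop). The point is only that writing each output back into an external memory and feeding it into the next prompt gives the otherwise feed-forward model a form of recurrence at the system level:

```python
from collections import deque

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a unidirectional transformer
    model; returns the model's text completion for the prompt."""
    return f"(model output for: {prompt[:40]}...)"

def recurrent_wrapper(task: str, steps: int = 3) -> list[str]:
    """Outer loop: previous outputs are stored externally and fed back
    in as context, so state persists across calls even though each
    forward pass of the model itself has no recurrence."""
    memory: deque[str] = deque(maxlen=10)  # external 'working memory'
    outputs: list[str] = []
    for _ in range(steps):
        context = "\n".join(memory)
        prompt = f"Task: {task}\nPrevious notes:\n{context}\nNext step:"
        reply = call_llm(prompt)
        memory.append(reply)   # write-back: this is the feedback step
        outputs.append(reply)
    return outputs

if __name__ == "__main__":
    for line in recurrent_wrapper("summarise the argument about recurrence"):
        print(line)
```

Whether this kind of natural-language feedback loop counts as recurrence in the sense Tegmark means is of course the open question.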