
Recommended Read: We have all been lied to…

“The role of transformers in inference was never the intent. It emerged as a byproduct of scaling, where high language diversity, hyperparameters tuned for variance, and autoregressive prediction started producing results that mimic reasoning. But they are still just a remapping of statistical alignment, not structured thought. Cosine similarity and dot product do not encode meaning… they approximate similarity by aligning token vectors based on their angular relationships, a mathematical shortcut that works well for translation and pattern matching but is insufficient when applied to logical inference.”

https://www.linkedin.com/posts/timnit-gebru-7b3b407_ai-activity-7301705166653734912-PGDD/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAAGWgBwEWC05QJCiISocxZAUU1lXx1RRQ
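The quote's point about cosine similarity can be made concrete: it is just the cosine of the angle between two vectors, a purely geometric comparison of direction. A minimal sketch (the 3-d "embedding" vectors below are made up for illustration, not real model outputs):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: compares direction
    # only, ignoring magnitude. A geometric notion of "closeness",
    # not a model of meaning or logical inference.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-picked "embeddings" for illustration only:
king  = [0.90, 0.80, 0.10]
queen = [0.85, 0.82, 0.15]
car   = [0.10, 0.20, 0.95]

print(cosine_similarity(king, queen))  # near 1: nearly parallel vectors
print(cosine_similarity(king, car))    # much lower: wide angle between vectors
```

Nothing in the computation "knows" what the tokens denote; the score only reflects how small the angle between the vectors happens to be, which is exactly the "mathematical shortcut" the quote describes.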
