• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Philosophical masturbation

    I couldn’t have put it better myself. You’ve said lots of philosophical words without actually addressing any of my questions:

    How do you distinguish between a person who really understands beauty and someone with enough experience of things they’ve been told are beautiful to approximate that understanding?

    How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you?

    How do you distinguish between deviations from photorealism due to imprecise technique and deviations due to intentional stylistic impressionism?


  • An AI doesn’t understand. It has an internal model which produces outputs based on the training data it received and a prompt. That’s a different category than “understanding”.

    Is it? That’s precisely how I’d describe human understanding. How is our internal model, trained on our experiences, which generates responses to input, fundamentally different from an LLM transformer model? At best we’re multi-modal, with overlapping models between which we move information to consider multiple perspectives.


  • “Beauty”, “opinion”, “free will”, “try”. These are vague, internal concepts. How do you distinguish between a person who really understands beauty and someone with enough experience of things they’ve been told are beautiful to approximate that understanding? How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you? How do you distinguish between deviations from photorealism due to imprecise technique and deviations due to intentional stylistic impressionism?

    What does a human child draw? Just a rosebush, and poorly at that. Does that mean humans have no artistic potential? AI is still in its relative infancy: the artistic stage of imitation and technique refinement. We are only just beginning to see the first glimmers of multi-modal AI, recursive models that can talk to themselves and pass information between different internal perspectives. Some would argue that internal dialogue is precisely the mechanism that makes human thought so sophisticated. What makes you think AI won’t quickly develop similar sophistication as the models mature?


  • A person is also very much adding rose bushes and story beats to their internal databases. You learn to paint by copying other painters, adding their techniques to a database. You learn to write by reading other authors, adding their techniques to a database. Original styles/compositions are ultimately just a rehashing of countless tiny components from other works.

    An AI understands what it sees; otherwise it wouldn’t be able to generate a “rose bush” when you ask for one. It’s an understanding based on a vector space of token sequence weights, but unless you can describe the actual mechanism of human thought beyond vague concepts like “inspiration”, I don’t see any reason to assume that our understanding is not just a much more sophisticated version of the same mechanism.

    The difference is that we’re a black box; AI is less of one. We have a better understanding of how AI generates content than of how the meat of our brains generates content. Our ignorance, and our use of vague, romantic words like “inspiration” and “understanding”, is absolutely not proof that we’re fundamentally different in mechanism.
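
    To make “a vector space of token weights” a bit more concrete, here is a minimal toy sketch. Everything in it is an assumption for illustration: the vectors are hand-picked 4-dimensional stand-ins for embeddings a real model would learn from data, and the word choices are arbitrary. Only the geometry is the point.

    ```python
    import numpy as np

    # Hypothetical toy embeddings: 4 dimensions instead of thousands,
    # hand-picked rather than learned from data.
    embeddings = {
        "rose bush": np.array([0.9, 0.8, 0.1, 0.0]),
        "tulip":     np.array([0.8, 0.7, 0.2, 0.1]),
        "lawnmower": np.array([0.1, 0.2, 0.9, 0.8]),
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 = same direction, near 0.0 = unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = embeddings["rose bush"]
    for word, vec in embeddings.items():
        print(f"{word}: {cosine(query, vec):.2f}")
    ```

    In this toy space, “tulip” scores close to “rose bush” while “lawnmower” doesn’t. In that sense the model “knows” what a rose bush is by where it sits relative to everything else in the space.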


  • Probably because the mechanism of American elections makes it a binary choice, and the options are “more than 30k dead” and “WAY more than 30k dead”. Half-assed, milquetoast hesitation toward genocide is preferable to enthusiastic support for it (not to mention enthusiastic support for other genocides), which is the alternative on the ballot. Do you defend the alternative?