Thanks for paving an interesting path toward two books I've been meaning to read but haven't yet gotten to.
I tend to be wary of generalizations, and "generations" are no exception. Still, the unique circumstances surrounding this generation—such as the massive exposure to social media and cellphones—do seem to nudge people toward common behaviors that may strike earlier generations as odd, worrisome, or worse.
The points about AI deskilling are especially thought-provoking. As an AI power user, I've noticed that interacting with these tools can shape our expectations of others. AI tends to be reliably kind and eager to help (with notable exceptions like Grok "fun mode"), which might make relating to real people harder since humans aren't as consistent. One small clarification: the idea of algorithms "ruling" AI ("or those humans who control its algorithms") seems more relevant to social media platforms. Large language models, on the other hand, emerge from extensive training with minimal direct guidance. Developers often describe working with them as akin to interacting with an alien intelligence.
This is a fascinating topic, and I appreciate the insights you've shared!
Also, I think this trend spans generations. I also mentioned how mid-century Evangelicalism unwittingly helped promote the circumstances that led to this. I just think it's the current generation that is getting the worst of it.
Okay. You convinced me to adjust my thoughts a little. I've edited that one comment to better reflect how AI works. Interested in your thoughts.
Thanks, Scott, for your reply, and for pointing me toward the edited version of the paragraph. The revised paragraph thoughtfully broadens the scope of AI risk beyond algorithmic control to consider both intentional exploitation and emergent behaviors. Having said that, while the Vision/Ultron analogy is compelling, I think there's an even more immediate concern that does not depend on the future direction of AI development but rather on our propensity to use AI and the intensity of our use of these tools: how AI's reliable, accommodating nature might be subtly rewiring our expectations of interaction. As we grow accustomed to entities that never tire, never take offense, and always engage at our preferred pace and depth, we may find ourselves less equipped to handle the beautiful messiness of real human relationships and dialogue. Still, let's do hope that Vision is in the works, and that these tools become a pole vault that propels people to express themselves in ways they otherwise wouldn't have been able to.
Thank you, Federico. Another well-thought-out implication. Appreciate the dialogue.
Thank you, Scott. By the way, over at Scot McKnight's blog, Mike Glenn published a piece about AI and sermon writing:
https://scotmcknight.substack.com/p/ai-cant-preach
I haven't gotten around to writing a comment there, but I don't entirely agree with its take on AI. Interesting, though.
Thanks, Federico. I saw the article on McKnight's page. It seemed a little simplistic with regard to expectations of what AI will eventually be able to do. AI is in its infancy. I think that AI may eventually be able to write solid sermons, but I don't think AI will ever be able to shepherd people.
Even as I agree with what you've said here, I find I've given up on the idea of saying what AI won't be able to do. :)