ChatGPT launched late in 2022. For two years, I remained a skeptic, noting that 1) with the early versions I frequently spent as much time fact-checking and verifying as I supposedly saved, and some of those "facts" turned out to be fictitious; and 2) the models were never up to date with current information. By the end of 2024, however, two significant advances in LLM technology shifted my thinking. Most importantly, in my opinion, it became easy to implement retrieval-augmented generation (RAG), in which search is used to find relevant information on the internet, in the scientific literature, and elsewhere. This largely solved the issue of being up to date, and it provides links to the sources the generated content is derived from. Second, the technology behind the models has continued to improve, and in conjunction with RAG, this has reduced the issue of fictitious outputs.

As a result, today I find myself using Gen AI routinely. The evolution of my own thinking, understanding, and skills in Gen AI makes me think we should be teaching students how to use this tool. This year we began introducing Gen AI to a course in Python computing in science and engineering through GitHub Copilot in Jupyter Lab and a course-based chatbot in Discord. In many ways, I see it as a development equivalent to the rise of Google search in the early 2000s.

Nevertheless, Gen AI has evolved so quickly that it is not easy to see what its impact on learning and education will be. As we explore how to use and integrate it, we should be prepared both for things that work and for things that may be detrimental. I have seen a broad range of opinions on Gen AI among faculty and students, ranging from enthusiasm to criticism, skepticism, and indifference. Gen AI remains complex, with challenging ethical, social, and technical issues, but it is here to stay whether we like it or not.
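To make the RAG idea above concrete, here is a minimal sketch of the pattern: retrieve the documents most relevant to a question, then assemble a prompt that grounds the model's answer in those sources and cites them. The tiny corpus, the word-overlap scoring, and the prompt template are all hypothetical simplifications; a real system would use a search API or a vector store, and would send the assembled prompt to an LLM.

```python
import re

def retrieve(question, corpus, k=2):
    """Rank documents by simple word overlap with the question (a stand-in
    for real search or vector similarity)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(re.findall(r"\w+", doc["text"].lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, sources):
    """Assemble a grounded prompt that includes the source links, so the
    generated answer can be traced back to them."""
    context = "\n".join(f"[{s['url']}] {s['text']}" for s in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

# A toy, made-up corpus standing in for live search results.
corpus = [
    {"url": "https://example.org/a", "text": "RAG combines search with generation."},
    {"url": "https://example.org/b", "text": "Python is taught in Jupyter Lab."},
]

question = "What is RAG?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

The point of the sketch is the shape of the pipeline, not the retrieval quality: because the prompt carries the source URLs, the model's output stays linked to where the information came from, which is exactly what addressed my up-to-date and fact-checking concerns.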