Something Big Is Happening: Are You Ready? Reflections on Matt Shumer's "Something Big Is Happening"
A warning letter to everyone
This afternoon I saw that my friend Shi Dianzhi had reposted an article on Facebook.
It was written by Matt Shumer, CEO of the AI startup OthersideAI, and titled "Something Big Is Happening." The article has sparked intense debate between supporters and skeptics (see the Central News Agency coverage: https://www.cna.com.tw/news/ait/202602120016.aspx).
To be honest, after reading it, my feelings were very mixed.
Not fear, nor excitement, but the sense that someone had finally said, in plain language, what I have been saying for the past two years. At the same time, I feel there are things that need to be added, and points that need to be re-examined from our position: the perspective of someone who teaches AI, writes about AI, and uses AI in Taiwan.
Matt Shumer mentioned that he wrote this article for his family and friends: people outside the tech world who ask him "What's going on with AI?" and only ever get a polite answer. He said he kept giving the people around him the cocktail-party version of the response, because if he told them the real version, it would sound like he was crazy.
I totally understand this feeling.
In the past two years, I have been teaching at universities, giving courses in companies, and learning about AI tools step by step with my fans and students on my own social platforms. I have been facing this gap almost every day - there is a wider and wider gap between what I know and what most people think.
And this gap is the most dangerous place.
Starting from the metaphor of COVID-19
Matt Shumer used a metaphor that I thought was very clever, but also very risky. He compared the current development of AI to the COVID-19 epidemic in February 2020.
Remember? In February 2020, most of us were still going to work, going out, and socializing normally. If someone had told you they were hoarding masks, you would probably have thought they were crazy. Then, within three weeks, the world turned upside down.
He says we are now at that same stage: the stage where all of this still "seems exaggerated."
This metaphor is clever because it captures a basic human psychology: we are inherently slow to perceive incremental change. Like the fable of the boiling frog, when the water temperature rises slowly, we don't jump out. We don't notice until the water starts to boil, and by then it is often too late.
▲ We are inherently slow to perceive incremental changes, like frogs in warm water.
Of course, the metaphor has its risks. COVID-19 was an emergency: sudden and uncontrollable. AI, although its acceleration is staggering, is not a plague. It brings changes both good and bad, and we at least have some time to adapt and adjust.
However, I completely agree with Matt Shumer's core message: most people seriously underestimate the speed and impact of AI development. And that underestimation may leave them swallowed by the wave, unprepared.
AI is already helping to build the next generation of AI – take this statement seriously
Reading Matt Shumer's article, the paragraph that shocked me most was his mention that on February 5, 2026, OpenAI released GPT-5.3-Codex and wrote this in the technical documentation:
GPT-5.3-Codex is our first model to play a significant role in its own creation. The Codex team used early versions to debug its training, manage its deployments, and diagnose test and evaluation results.
The meaning of this passage is that AI helped create itself.
▲ AI has participated in its own improvement process, and the rate of progress is changing from linear to exponential.
This is not the plot of a science-fiction novel; it is a fact OpenAI has put in black and white in its technical documentation. On the same day, Anthropic released Claude Opus 4.6, and Anthropic CEO Dario Amodei publicly stated that AI is already writing most of the company's code, and that the feedback loop between current AI and next-generation AI is accelerating month by month.
As someone who uses Claude and ChatGPT every day, and as a researcher who is studying issues related to AI applications, I must say: the significance of this development is far more profound than it seems on the surface.
Because this means one thing: the speed of AI progress will no longer be limited by the number and efficiency of human researchers. When AI can participate in its own improvement process, the rate of progress changes from linear to exponential. Each generation of AI helps build the next generation smarter, and the next generation smarter can build the next generation more efficiently.
This is what researchers call an intelligence explosion. And the people who build these systems—those who know best—believe that the process has already begun.
My own personal experience
Reading Matt Shumer's article reminded me of my own experience in recent months.
Matt Shumer described how he uses AI to develop applications: he only needs to describe in plain language what feature he wants and how it should look. The AI writes tens of thousands of lines of code on its own, opens the app by itself, clicks through to test the feature, and goes back to fix anything that seems wrong. When it is satisfied, it comes back and tells him: "You can test it now." And the results are usually close to perfect.
Although I am not a software engineer, I have had very similar experiences in content creation, education, and corporate training.
Take my current workflow as an example. In the past, preparing materials for an in-house training course took me at least one to two full working days: researching the topic, organizing the structure, writing content, designing slides, and making handouts. Now? I explain the course objectives, student backgrounds, and expected learning outcomes to AI, and in a very short time it produces a first draft of a syllabus and teaching materials with a complete structure and solid content. My role changed, almost overnight, from creator-from-scratch to quality controller and strategic planner.
What surprises me even more is that AI now demonstrates not just execution, but something close to taste and judgment. This is exactly what Matt Shumer emphasized in his article: the latest models show an unprecedented degree of judgment and taste.
To be honest, I feel the same way myself. When I discuss the structure of an article or the design of a class with AI, the suggestions it gives become less like a mechanical response and more like an experienced colleague discussing with you. It takes into account the needs of the audience, notices logical coherence, and even makes points that I hadn’t thought of in some places.
This change is both an opportunity and a warning to me.
Those who think AI is not that powerful
Matt Shumer wrote a particularly good passage in his article. He said he often hears people say: "I've tried AI, and it's not that great!"
His response was straightforward: if you tried ChatGPT in 2023 or early 2024 and found it made things up and was unreliable, well, you were right. Large language models at that time did have serious limitations. But that was two years ago, and on AI's timescale, two years is ancient history.
When I read this paragraph, it struck a deep chord.
I encounter this almost every week in classes, talks, and conversations with corporate clients. Someone will say: "I've used ChatGPT; what it writes is generic and useless!" Or: "You can tell at a glance that AI-written articles are full of that AI flavor, with no warmth at all."
To be honest, these reviews were correct at one point in time. But the problem is that many people’s cognition is still stuck at that point in time, and AI has been running forward for several generations.
Matt Shumer pointed out a critical issue: most people use the free versions of AI tools, whose capabilities lag the paid versions by at least a year. Evaluating the current state of AI with the free version of ChatGPT is like judging smartphones by a feature phone. The impression you get is far from reality.
I always encourage my students: if you really want to see what AI can do, spend at least $20 a month to subscribe to Claude Pro or ChatGPT Plus and use the latest models. Don't settle for the default model; pick the strongest one yourself. Right now that means ChatGPT's GPT-5.2 and Claude's Opus 4.6, but the answer changes every few months. More importantly, don't just use AI as a search engine; that's the mistake most people make. Many friends still treat it like Google and conclude "it's pretty ordinary." The real way to use it is to bring it into your actual work. Give it a contract with sensitive information removed and ask it to find every clause that disadvantages your client. Throw it a pile of messy data and ask it to build a model. Throw it your team's quarterly numbers and ask it to find the story.
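To make "bringing AI into actual work" concrete, here is a minimal sketch of my own (not from Shumer's article) showing how a contract-review task might be packaged in the chat-message format most current AI APIs use. The function name, placeholder model name, and prompt wording are all illustrative assumptions, not a real product's API:

```python
# Sketch: preparing a contract-review request in the chat-message
# format used by current AI APIs. We only build the payload here;
# actually sending it would require a vendor SDK and an API key.

def build_contract_review_request(contract_text: str, client_name: str) -> dict:
    """Return a chat-style request payload asking a model to flag
    clauses that disadvantage the client. The model name below is
    a placeholder, not a real product identifier."""
    system = (
        "You are a contract analyst. Identify every clause that "
        f"disadvantages {client_name}, quote it verbatim, and explain the risk."
    )
    return {
        "model": "some-strong-model",  # placeholder
        "max_tokens": 2048,
        "system": system,
        "messages": [
            {
                "role": "user",
                "content": f"Contract (sensitive data removed):\n{contract_text}",
            }
        ],
    }

req = build_contract_review_request("Clause 1: ...\nClause 2: ...", "the client")
print(req["system"])
```

The point of the sketch is the framing, not the plumbing: a clear role, a clear task, and the sensitive data stripped out before anything leaves your machine.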
When you bring AI into actual work, you will truly understand why so many people in the technology circle are sounding the alarm.
Where do we go from here? An educator's thoughts
After reading Matt Shumer's article, I thought for a long time.
As a university lecturer who teaches AI applications in Taiwan, as a practitioner who has long been engaged in corporate consulting and training, and as a doctoral student who is studying AI application issues, I have some deep feelings that I want to share with you.
**First, we need to redefine the meaning of professionalism.**
In the past, our definition of profession was largely based on the scarcity of knowledge and the irreplaceability of skills. Lawyers are respected because legal knowledge is too complex for most people to master on their own. Doctors are irreplaceable because diagnosing disease requires years of training and experience.
But when AI can read entire legal databases in seconds and analyze medical images and examination reports with accuracy approaching or even exceeding human experts, the foundation of knowledge scarcity is likely to be shaken.
▲ The core of professionalism is changing from “what I know” to “what can I do with what I know”.
Of course, this does not mean professionalism no longer matters. Rather, its core is shifting from "what I know" to "what can I do with what I know, what questions can I ask, and what judgments can I make?"
Future professionals must develop across fields. Take lawyers: the ones who thrive in the AI era will not be those who memorize the most statutes and cases, but those who best know how to use AI to create value for clients, and who can make the judgment calls at critical moments that AI cannot.
**Second, the education system needs fundamental changes.**
Matt Shumer mentioned something at the end of the article that particularly struck me: "Rethink what you say to your children."
He said that the traditional path to higher education—getting good grades, going to a good university, and finding a stable professional job—this script points to precisely those positions that are most likely to be impacted by AI.
▲ The traditional education path points to those positions that are most likely to be impacted by AI.
In the past few years, I have often had the opportunity to teach AI Application courses in some universities. Every semester, I face a group of young people who are about to enter the workplace. I often wonder: Will what I teach them still be useful two years after they graduate? What about three years from now? What about five years from now?
The answer is: If all I teach is how to use a particular AI tool, it probably won’t be long before it becomes obsolete. But if I teach how to think, how to ask questions, how to evaluate the quality of AI’s output, and how to find one’s place in human-machine collaboration, then those capabilities have a much longer shelf life.
This is what I keep emphasizing in class: don't just learn tools, learn thinking. Tools change all the time, but your way of thinking is a lifelong asset. I discuss this point in detail in my article "Stop Chasing Tools! Build Your 'Unbeatable System' in the AI Era."
**Third, we need to build a new occupational safety net.**
Matt Shumer cited Anthropic CEO Dario Amodei's prediction that AI will replace 50% of entry-level white-collar jobs within one to five years. And many people in the industry consider that forecast conservative.
If that number is even close to true, the social impact will be huge. Our current education system, retraining mechanisms, and social safety net were not designed for change of this scale and speed.
I am not a policy expert, but I think this is an issue that the entire society needs to face seriously. In Taiwan, we need to think about: How to help those whose jobs have been affected find their way back? How to establish a more flexible lifelong learning mechanism? How to ensure that the productivity improvements brought by AI can be distributed more equitably, instead of being concentrated in the hands of a few people?
How do ordinary people survive in the AI era? My ten suggestions
After all this macro talk, let me offer some concrete, practical suggestions: ten survival guidelines for the AI era, compiled from my own experience and observation.
▲ Facing the AI wave, we need specific action guidelines, not just anxiety.
**1. Start using AI seriously now, don't wait any longer.**
I know this sounds like a cliché, and I don't say it just because I teach AI applications myself. Let me put it bluntly: if you're not using AI at work by now, you're not about to fall behind; you already have.
Yes, I know that every job has different attributes and formats. But if possible, I hope you don’t wait until the company requires you to use it, don’t wait until all your colleagues are using it, and don’t wait until AI becomes more mature before you use it! Get started now. Spend $20 a month to subscribe to the best tool, and then spend at least half an hour a day seriously putting it to work.
Matt Shumer said it best: if you spend an hour a day experimenting with AI for six months, you will understand the future better than 99% of people. Honestly, that's not an exaggeration, because almost no one is actually doing it, and the barrier to entry is incredibly low.
**2. Learn to ask questions. This is the most important skill in the AI era.**
No matter how powerful AI is, it still needs humans to tell it what to do. The same is true of the recent smash hit OpenClaw. And telling it what to do is an art: we call it prompt engineering, but I prefer to think of it as the art of asking.
A good question can make AI produce amazing results. On the other hand, a bad question will make you think that AI is nothing more than that. The difference is not in the AI, but in you.
In my classroom, I spend a lot of time teaching students how to ask: how to give AI enough background information, how to clearly describe the desired output, how to set roles and situations, and how to iterate and revise. These look like AI-application skills, but they are really more fundamental training in thinking and expression.
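Those elements (role, background, desired output) can be sketched as a simple prompt template. This is purely my own illustration of the idea; the function and field names are hypothetical:

```python
# A structured prompt template for the "art of asking": role,
# background, task, and desired output format assembled into one prompt.

def build_prompt(role: str, background: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from the elements discussed above."""
    return "\n\n".join([
        f"You are {role}.",
        f"Background:\n{background}",
        f"Task:\n{task}",
        f"Desired output:\n{output_format}",
    ])

# Compare a weak, context-free question...
weak = "Help me write a course outline."

# ...with a structured one that gives the model something to work with.
strong = build_prompt(
    role="an experienced corporate trainer",
    background=("Audience: 30 mid-level managers at a manufacturing firm, "
                "no prior AI experience. Duration: one 3-hour workshop."),
    task="Draft an outline for an introductory workshop on using AI at work.",
    output_format=("A numbered outline with time allocations and one "
                   "learning objective per section."),
)
print(strong)
```

The difference between `weak` and `strong` is exactly the difference between a question that gets a generic answer and one that gets a usable draft.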
**3. Don't treat AI as an enemy, treat it as your strongest teammate.**
I see that many people have a defensive attitude towards AI: “Will AI replace me?” “Can AI do as well as me?” This mentality is completely understandable, but it won’t help you.
A more constructive way to think about it is: If AI can handle the repetitive, time-consuming parts of my job, then I can focus my energy on more valuable things.
Think of AI as an extremely capable assistant that never tires and is always available. It can do research, organize information, write first drafts, analyze data, and produce plans. Your role? Strategic thinker, quality controller, relationship builder, and final decision maker.
Human-machine collaboration is not a zero-sum game. It is a multiplier effect: one plus one becomes greater than two.
Let me share a real case: I often talk about AI with my mother and show her the many projects AI has helped me execute. She is over eighty, and she has gone from rejecting and resisting AI at the beginning to now being able to understand it.
**4. Develop abilities that cannot be automated.**
What can AI not yet do well? Put another way: even where AI can do something, where do humans still hold an advantage?
I will list a few directions for your reference:
Deep relationships and trust building. AI can certainly write a perfect email, but the trust you have built with your customers over the past decade cannot be automated.
Jobs that require a physical presence, such as surgery, construction, nursing, or site management. In these areas, AI and robots will still need time before they can fully take over.
Roles that carry legal responsibility. Whether it is a lawyer who must sign off, a doctor who writes prescriptions, or an accountant who performs audits, these roles sit inside a legal and institutional framework; they are not just a technical issue.
However, as Matt Shumer reminds us: these are not permanent shields. They only buy time. And time is only valuable if you use it to adapt and prepare.
**5. Build your personal brand and unique perspective.**
In the AI era, being able to do something is no longer a source of differentiation, because AI can do almost anything. The real difference lies in who you are and how you see things.
Your experiences, your perspectives, your stories, the way you connect with people—these are things that AI can’t replicate. In a world where everyone can use AI to produce high-quality content, people will be more eager to meet real people, warm perspectives, and unique insights.
This is why I keep emphasizing the importance of personal branding. It doesn't mean becoming an internet celebrity; it means consciously managing your professional reputation, your unique positioning, and your irreplaceability in your field.
▲ In the AI era, your unique perspectives and experiences are the most irreproducible assets.
**6. Manage your finances and build a buffer.**
Matt Shumer offers a very pragmatic piece of advice in the article that I think is worth repeating: sort out your financial situation and build in flexibility.
If you believe, even partially, that your industry is likely to take a major hit in the next few years, basic financial resilience matters even more than it did a year ago. Increase your savings where possible, be cautious about taking on new debt that assumes your current income will continue, and look hard at your fixed expenses: do they give you flexibility, or do they lock you in?
I’m not a financial advisor, but this is common sense. In times of change, each of us needs to leave room for ourselves.
**7. Become an AI translator.**
There is a huge opportunity now to become an AI translator—that is, someone who can bridge the gap between the capabilities of AI and the needs of ordinary people.
It's not that people don't want to use AI; they don't know how to use it, where to use it, or how to evaluate its output. If you can be the person who helps them understand AI, adopt it, and optimize their AI workflows, your value will only grow. This is one of the things I am doing now. Whether in university classes, corporate training, or my own community, my role is that of an AI translator: translating complex technical trends into language ordinary people can understand, and turning abstract possibilities into concrete action steps. This is similar to the idea I shared in my article on using AI to turn teaching experience into long-term assets.
**8. Keep learning, but learn the right things.**
In the age of AI, continuous learning is a must. But the more important thing is to learn the right things.
Don't spend too much time learning the ins and outs of any particular tool; that knowledge goes stale too fast. Today's popular tool may be replaced by something better in six months.
What we should learn instead: underlying thinking frameworks, problem-solving methodology, the ability to connect across fields, critical thinking, interpersonal communication, and leadership. These capabilities do not become obsolete when some AI model gets updated.
Of course, you also need to remain sensitive to new tools and be willing to keep trying new things. But trying out new tools is the means; building resilience is the end.
**9. Find your passion that cannot be automated.**
Matt Shumer wrote a passage in the second half of the article that particularly moved me. He said: "Your dream just got closer."
If there’s something you’ve always wanted to do but couldn’t do it because you lacked the technical skills or funding, that barrier is now essentially gone. You can use AI to create a working app prototype in an hour. You can collaborate with AI to write a book. For just $20 a month, you can get the best personalized tutoring in the world.
For example, I recently ran two sessions of a Vibe Coding practical workshop, helping 20 professionals with no programming background (lawyers, writers, university professors, music teachers, insurance agents, and more) build their own web pages in a short time.
I was genuinely moved watching this group of professionals learn Vibe Coding from scratch. In a world where old career paths are being upended, someone who spends their time pursuing what they are truly passionate about may end up in a better position than someone who spends the same time clinging to a job description that is about to disappear.
All of this has been a great inspiration to me. I have always believed that in this era of change, the safest strategy is not defense but offense: pursue what you truly care about, and use AI as your accelerator rather than seeing it as a threat.
▲ In this era of change, the safest strategy is not defense, but offense.
**10. Develop the habit of adapting.**
This is perhaps the most important one. Matt Shumer said it best: what matters is not which specific tool you master, but your muscle memory for learning new tools quickly.
AI will continue to change, and fast. Today’s large language models will likely be obsolete in a year. Therefore, today’s workflow also needs to be constantly redesigned. The people who will emerge victorious from this wave are not those who are proficient in any one tool, but those who are comfortable with change itself.
It is really important to develop the habit of experimentation. Try something new while your current approach still works. Get used to becoming a beginner again and again. This adaptability is the closest thing we have to a lasting advantage.
Don’t just look at technology, but also look at people
Having written this far, I want to say something that Matt Shumer's article did not particularly emphasize, but that I think is extremely important.
In all our discussions about AI, it is too easy to fall into purely technical thinking: which model is stronger, which feature is newer, which industry will be replaced fastest. We often forget that at the center of it all are people.
▲ In all the discussions about AI, we must not forget: at the center of it all, there are people.
It's the 55-year-old accountant, thirty years in the profession, suddenly told that his work may be automated within a few years.
It's the young graduate who entered the workplace full of expectations, only to find that the world she had spent sixteen years of schooling preparing for no longer exists.
It's the small business owner who doesn't know whether she should invest in AI tools, because she doesn't yet know what they are.
Technological progress is neutral, but its impact on people is real, concrete, and sometimes painful. As an educator and consultant, I feel my responsibility is not only to convey the message "AI is great, learn it quickly," but to accompany people through this transformation. To tell them that fear is normal, uncertainty is normal, and feeling left behind is normal. And then to tell them, gently but firmly: you can do this. Just take it one step at a time.
This is what I have been doing, from the "Vista Writing Companion Program" to "Content Hacker" to "AI, Use It Well," and from university classes to corporate training. What I do is not only teach technology applications; it is to accompany people: to face change with them and help them find their place in the new world.
Looking at the positive side
Matt Shumer also described the huge positive possibilities AI brings, and I think these are worth taking the time to imagine and look forward to.
AI could compress a century of medical research into ten years. Cancer, Alzheimer's disease, infectious diseases, even aging itself: researchers genuinely believe these problems can be solved in our lifetimes.
AI can truly democratize education. A child living in a rural area can receive the same quality of personalized teaching as a child living in Taipei.
AI can significantly lower the threshold for creation. Those who have a good story to tell, a good idea to share, or a good product to develop no longer need large amounts of capital and technical teams to realize their dreams.
These visions are not utopias. They are happening. And we have the honor—and the responsibility—to stand at this turning point in history to shape the direction of these possibilities.
Big things are happening indeed
The title of Matt Shumer's article says it plainly: something big is happening. He's right. Something big is indeed happening.
But I would add this: big things have always been happening. Every major technological revolution in human history has been accompanied by fear, chaos, and pain. Printing put scribes out of work. The Industrial Revolution displaced craftsmen. The Internet upended countless traditional industries.
Every time, someone predicts the end of the world. Each time, humans survived and created new prosperity.
Is this time different? Maybe. What makes AI special is that it is the first technology with the potential to replace human cognitive abilities, and its reach is wider and its pace faster than any revolution before it.
But I still choose to remain cautiously optimistic.
Not because I blindly believe everything will be fine, but because I believe in human adaptability: when we see problems, face them, and deal with them seriously, we are far more capable than we think.
The point is, you can’t keep pretending that nothing happened. You can’t wait until the waves hit your feet before you start learning to swim.
After reading Matt Shumer's article, if you only remember one thing, I hope it is this:
Get started now. Not tomorrow, not next month, now.
Open Claude or ChatGPT, throw it your biggest work problem today, and see what it gives you. Don't accept everything wholesale; think it through carefully, then act. Whatever the outcome, you will have taken the most important step: you will have started.
In this era when big things are happening, taking action is your greatest advantage.
And I will be here to continue walking this road with you.
Extended reading:
- Stop chasing tools! Build your own “unbeatable system” in the AI era
- The command economy is coming! AI is rewriting the rules of the business game in the next decade
- AI transformation course for middle and high school students: Your experience is the most lacking golden asset of AI
- Vibe Coding drives marketing superpower: Let AI be your digital creative partner
- Lecturer’s Digital Asset Management Technique: Use AI to turn teaching experience into a compounding system
External Resources:
- Something Big Is Happening — Matt Shumer
- AI development triggers polarized discussions - Central News Agency