That is the question to ask of every idea you evaluate.
Every week, I receive messages from individuals, startups, students, and sometimes entire teams, all eager to pursue an AI-driven idea. Their excitement is amazing. But often, when I ask them a few foundational questions, there is a pause. Many have not taken the first steps of validating whether their idea needs AI, what kind of AI, and whether they are prepared to build it.
This is not a criticism. It is a common pattern in the rapid rise of Artificial Intelligence and especially Generative AI. We often fall in love with a solution before clearly defining the problem.
So here are a few practical guidelines I use when reviewing AI ideas:
Start with a real problem
Is there a strong pain point? Does someone care enough to pay for a solution or urgently want it solved? If not, the idea risks becoming a solution in search of a problem.
Ask whether AI is even required
Not every smart system is an AI system. Classic algorithms, optimization, or rule-based systems are still the right choice in many cases. AI may be excessive or unreliable where deterministic logic performs better.
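To make this concrete, here is a minimal sketch (the scenario and threshold are hypothetical) of a decision that needs no AI at all: a plain business rule is transparent, testable, and requires no training data.

```python
# Hypothetical example: deciding when to reorder stock.
# The logic is fully specified, so deterministic code is the right tool.

def should_reorder(stock_level: int, reorder_point: int = 20) -> bool:
    """Plain business rule: reorder when stock falls to or below the threshold."""
    return stock_level <= reorder_point

print(should_reorder(15))  # True
print(should_reorder(50))  # False
```

Reaching for ML here would add cost, latency, and unpredictability without improving a single decision.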
Study the current landscape
Who is already solving this? What technology are they using? What limitations do they face? This helps avoid reinventing the wheel and identifies opportunities for meaningful innovation.
Check resource feasibility
If the idea requires machine learning, deep learning, or LLMs, do we have the right kind of training data? What about compute and domain expertise? Data quality is often the silent killer of AI projects.
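One cheap way to surface that silent killer early is a pre-flight check on the candidate training set. Below is an illustrative sketch (the field names and sample data are made up) that counts complete, incomplete, and duplicate records before anyone commits to an ML approach.

```python
# Hypothetical data-quality pre-flight: measure how much usable data exists
# before deciding that ML/DL is feasible for the idea.

def data_quality_report(records, required_fields):
    """Return simple counts of complete, incomplete, and duplicate records."""
    seen = set()
    report = {"complete": 0, "incomplete": 0, "duplicates": 0}
    for rec in records:
        key = tuple(rec.get(f) for f in required_fields)
        if key in seen:
            report["duplicates"] += 1
            continue
        seen.add(key)
        if all(rec.get(f) is not None for f in required_fields):
            report["complete"] += 1
        else:
            report["incomplete"] += 1
    return report

sample = [
    {"text": "a", "label": 1},
    {"text": "b", "label": None},  # missing label
    {"text": "a", "label": 1},     # exact duplicate
]
print(data_quality_report(sample, ("text", "label")))
# {'complete': 1, 'incomplete': 1, 'duplicates': 1}
```

If most of the data turns out incomplete or duplicated, that is a feasibility finding, not a modeling detail.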
Look at research
Especially if you are in academia, staying aligned with recent literature prevents outdated approaches and sparks new directions.
To make all this less theoretical, I created a small exercise. Take any of the following broad industry domains, list its common tasks, then match each task to the approach that can solve it: ML, DL, LLM, VLM, rule-based, and so on. Some examples are below:
- Agriculture: crop disease detection (computer vision), yield prediction (supervised ML), automated irrigation (rule-based)
- Aerospace: fault detection (ML anomaly detection), autonomous navigation (robotics + DL)
- Automotive: driver monitoring (computer vision), traffic forecasting (ML), in-car assistants (LLMs)
- Defense: threat detection (ML/DL), simulations (model-based), strategic planning (LLMs?)
- Manufacturing: predictive maintenance (supervised ML), quality inspection (DL), robotic automation (control systems + AI)
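The matching exercise above can be sketched as nothing more than a lookup table. The entries below mirror the list and are illustrative, not prescriptive; the function name and fallback message are my own invention.

```python
# A hypothetical task-to-approach table for the matching exercise.
# Entries are illustrative starting points, not definitive answers.

TASK_TO_APPROACH = {
    ("agriculture", "crop disease detection"): "computer vision (DL)",
    ("agriculture", "automated irrigation"): "rule-based",
    ("automotive", "in-car assistant"): "LLM",
    ("manufacturing", "predictive maintenance"): "supervised ML",
}

def suggest_approach(domain: str, task: str) -> str:
    """Look up a plausible approach; fall back to problem validation first."""
    return TASK_TO_APPROACH.get(
        (domain.lower(), task.lower()),
        "unknown: start by validating the problem",
    )

print(suggest_approach("Agriculture", "Automated irrigation"))  # rule-based
print(suggest_approach("Retail", "demand forecasting"))
```

The point of writing it down this way is the miss case: when a task is not in your table, the honest next step is problem validation, not defaulting to the trendiest model.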
The point is not to be perfect. It is to build awareness about when different AI approaches are suitable. The hype around LLMs and Gen-AI often pushes them into places where simpler tools are more accurate, faster, and cheaper.
Where do we go next?
As AI continues to evolve, we should:
- Focus on hybrid systems combining rules, ML and knowledge
- Strengthen evaluation of feasibility before development
- Invest in domain-specific data and talent
- Encourage community brainstorming and knowledge sharing
- Keep reminding ourselves that innovation is not just technology, but impact
Let us keep asking smart questions. Let us design solutions that matter. And let us make sure we are building AI for real needs, not just because it is trending.
So here is the challenge again: For your domain, what problem should be solved, and what is the most appropriate AI method to solve it? I am excited to see what you come up with.
AI or not to AI was originally published in Google Cloud – Community on Medium, where people are continuing the conversation by highlighting and responding to this story.
Source Credit: https://medium.com/google-cloud/ai-or-not-to-ai-7e6543a218f5
