Synthetic Data, Observability, AI Ethics, Why Python Is Useless Without Understanding Data: What an IBM Board Member With 25 Years in Tech and the CTO of Societe Generale Told Parul University Students About Enterprise AI.

AI session with Dr Kiran (Board Member, IBM; 25 years in tech) on synthetic data, and Mr Satyan Pathak (CTO, Societe Generale) on data quality as the foundation of AI.

Dr Kiran, IBM: 25 Years of Watching Technology Transform

May 1, 2026 | Anjali Shah

Dr. Kiran has spent 25 years inside the technology industry, not observing it from the outside but operating within it as it reinvented itself repeatedly. He started his career controlling spindle speeds in a textile mill, writing control systems in Assembly and C. If the thread thickness changed, the speed had to adjust. If the speed was wrong, the thread broke and the entire process stopped. That was automation in the late 1990s.

Then came Y2K. Then Java. Then e-commerce when Amazon was just beginning to be understood as a model. Then database administration. Then assignments in the US. And through all of it, automation in some form. By the time he sat with Parul University students in Bangalore, he had seen technologies arrive as revolutions and depart as footnotes, and he could tell the difference between the ones that lasted and the ones that did not.

Synthetic Data: Training AI Without Exposing Real People

This was the section he called his favourite, and the depth showed. He set it up by explaining the fundamental constraint: in finance, healthcare, and any domain handling sensitive personal information, the data you would want to train a model on is the data you are legally not allowed to share. Your Aadhaar number. Your PAN number. Your medical history. Your transaction records. Regulators like the Reserve Bank of India and SEBI have clear guidelines. This is correct and essential. But it creates a practical problem: you cannot train a good model on data you cannot access.

Synthetic data is the solution. You take a small but statistically representative sample of real data. You do not use the actual records. You use the patterns, the statistical relationships, the distributions. You input this summary into AI models like Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs), and they generate synthetic data with the same statistical properties as the real data. The synthetic data can train machine learning models without revealing information about actual individuals.
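As a minimal illustration of the idea (far simpler than the GAN or VAE pipelines Dr Kiran described), one can keep only the statistical summary of a small real sample and draw fresh records from it. The column names and numbers below are hypothetical, not actual customer data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" sample: monthly income (Rs) and transaction count
# for 200 customers (toy numbers standing in for sensitive records).
real = rng.multivariate_normal(
    mean=[50_000, 42],
    cov=[[9e6, 1.5e4], [1.5e4, 100]],
    size=200,
)

# Keep only the statistical summary -- never the records themselves.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Generate synthetic records with the same statistical properties.
synthetic = rng.multivariate_normal(mu, sigma, size=10_000)

# The synthetic sample mirrors the real distribution but contains
# none of the original rows, so no individual is exposed.
print(synthetic.mean(axis=0).round(1))
```

A real GAN or VAE learns far richer structure than a mean and covariance, but the privacy logic is the same: the model carries the patterns forward, not the people.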

He described using this in his own work at IBM. In IT operations, if you want to train a model to detect system failures, you need examples of failures. But serious failures are rare. You cannot wait for them to happen. You create synthetic scenarios based on observed patterns and use those to train detection systems.

“If you do not have good data, AI will fail.”

Observability: Seeing What Matters in a Thousand Signals

He asked the room what the difference was between seeing something and observing it. Someone said observing means you are also learning. He said that was close.

He gave the example of managing an airport. IBM manages the IT infrastructure of Kempegowda International Airport through the Airport in a Box platform. The entire system, from power to facial recognition at immigration to boarding systems to air traffic control, runs on technology that must work every hour of every day. No downtime is acceptable. You cannot pause an airport.

Humans get tired. Humans cannot watch a thousand signals at once. Technology gathers information from every server, every network endpoint, every process, and surfaces what matters. He used blood pressure as the analogy: normal is 80 to 120. One day it reads 160. That is an anomaly. You do not panic immediately but you investigate.

In technology: systems are set to a normal range. When there is a deviation, the monitoring system flags it. A human reviews the flag and determines whether it is genuine or false. If genuine, the human investigates. That is observability. You observe. You detect. You act. In that order.
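The observe-detect-act loop can be sketched in a few lines. The signal names and normal ranges here are hypothetical examples, not IBM's actual monitoring configuration:

```python
# Sketch of the observe -> detect -> act loop: technology flags the
# deviation, a human reviews the flag and decides whether to investigate.
NORMAL_RANGES = {
    "cpu_percent": (0, 85),
    "disk_latency_ms": (0, 20),
    "blood_pressure": (80, 120),  # the analogy from the talk
}

def detect(signal: str, value: float) -> bool:
    """Observe a reading and flag it if it falls outside the normal range."""
    low, high = NORMAL_RANGES[signal]
    return not (low <= value <= high)

def act(signal: str, value: float) -> str:
    """Surface flagged readings for human review; pass the rest silently."""
    if detect(signal, value):
        return f"FLAG {signal}={value}: route to on-call engineer for review"
    return f"OK {signal}={value}"

print(act("blood_pressure", 160))  # 160 is outside 80-120: flagged
print(act("cpu_percent", 40))      # within normal range: no flag
```

Production observability platforms learn dynamic baselines rather than fixed thresholds, but the division of labour is the one described above: machines handle the volume, humans handle the judgment.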

AI Ethics: Bias, Explainability, Transparency, Governance

Someone asked about AI benefits being equally available to everyone. He said the ethical dimensions have to come before the benefits conversation.

  • Bias: a student applies for a loan. Someone at a beach applies. A farmer applies. The AI system must treat all without prejudice. If it rejects an application, there must be a reason the person can understand
  • Explainability: not just a rejection but a reason. That is called explainability, and it is both a technical requirement and a social one
  • Transparency: wherever AI makes decisions, users should know AI is involved. Not buried in terms and conditions. Clearly disclosed
  • Governance: he referenced Europe, where a regulatory body for AI oversight has been established. Certain uses are permitted, certain are not. This governance cannot be left only to industries. Governments must set the framework

“Trust is the whole point. If people do not trust AI systems, they will not use them or will use them anxiously and badly.”

What Companies Should Actually Do

When asked which technology companies should adopt, he declined to name one. It depends entirely on the company’s sector and problems. But he said companies still running on legacy systems from the 1980s and 1990s, built on tools like PowerBuilder that almost nobody uses anymore, are carrying weight that becomes unmanageable over time. He described the build-leverage-buy framework: what should you build internally, what should you leverage from existing systems, and what should you buy as a service (SaaS)?

The Communication Gap

His most pointed observation was not about technology. It was about the skill engineering programmes undervalue most. He said communication is treated as a single-semester subject. One semester. Often barely taken seriously. It should be mandatory across the entire degree.

“If you have a brilliant idea but cannot present it, cannot articulate it, cannot argue for it in a room full of people contradicting you, then your idea stays in your head. And staying in your head is not what ideas are for.”

Mr Satyan Pathak, CTO, Societe Generale: Data Quality as the Foundation

Mr Satyan Pathak (CTO, Societe Generale Global Solution Centre) approached AI from the financial services perspective, where data quality is not an operational preference but a regulatory requirement.

Data Over Tools

His opening message was direct: Python skills are useless without understanding what the data represents. Students focus on programming. Real value lies in interpretation. He illustrated with carbon-emitting company data: the goal is not to run the code but to analyse investor risk appetite. Code is the tool. Data understanding is intelligence.

Risk in Finance and in Career

He explained risk appetite through concrete examples: a person with Rs 1 lakh and family obligations avoids risk. A wealthy individual or cash-rich company (Infosys, TCS) can absorb losses. Loss-making, high-growth companies (Swiggy, Zomato) represent a different risk profile. Neither is better. Context determines the right choice.

He extended this to careers by describing two paths: the safe path (stable jobs, predictable growth, lower risk) and the exploratory path (untapped problems, higher risk, potential for 10x impact). He used the diamond mine analogy: opportunities in untapped sectors are like hidden diamonds. They require effort, exploration, and patience, but the rewards can be massive and long-term.

He pointed to untapped markets. Nearly 90 percent of the world lives in developing conditions. Sectors like EdTech, HealthTech, and technology access in regions such as Africa, rural India, and Southeast Asia represent large-scale opportunities: teaching students in Africa from India, delivering remote healthcare through technology. Solve real problems and you get massive scale.

AI Adoption: Organisation-Wide vs Specific Use Cases

He distinguished two AI adoption models. Organisation-wide deployment uses tools like GitHub Copilot, Office AI, meeting summarisation, and code assistance across the entire company. Specific use cases target individual processes: he gave KYC automation as an example, where the traditional process (manual forms, paperwork, human verification) is replaced by document scanning, image-to-text conversion, database storage, and automated verification. The result: faster processing, reduced manual work, improved efficiency.
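The KYC flow he outlined (scanned document, image-to-text conversion, database storage, automated verification) can be sketched as a pipeline. This is an illustrative sketch only: `extract_fields` is a hypothetical stand-in for a real OCR stage, and the validation rules are simplified examples:

```python
import re
from dataclasses import dataclass

@dataclass
class KycRecord:
    name: str
    pan: str    # Indian PAN format: five letters, four digits, one letter
    phone: str

def extract_fields(document_text: str) -> KycRecord:
    """Placeholder for the image-to-text stage: here the 'scan' is plain text."""
    fields = dict(line.split(":", 1) for line in document_text.strip().splitlines())
    return KycRecord(**{k.strip().lower(): v.strip() for k, v in fields.items()})

def verify(record: KycRecord) -> list[str]:
    """Automated checks replacing manual form review; returns any problems found."""
    problems = []
    if not re.fullmatch(r"[A-Z]{5}[0-9]{4}[A-Z]", record.pan):
        problems.append("invalid PAN format")
    if not re.fullmatch(r"[6-9][0-9]{9}", record.phone):
        problems.append("invalid Indian mobile number")
    return problems

scan = """
name: A. Student
pan: ABCDE1234F
phone: 9876543210
"""
record = extract_fields(scan)
print(verify(record) or "verified")  # prints "verified"; record can be stored
```

The point of the example is the shape of the pipeline, not the rules themselves: each manual checkpoint in the traditional process becomes a function that either passes a record through or flags it for a human.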

The biggest challenge in both models: data quality. Wrong phone numbers, incorrect addresses, and incomplete records make the AI unreliable. The principle is universal: garbage in, garbage out. No amount of sophisticated AI compensates for bad data.

His career advice was equally direct:

“Never stop learning. The day you stop learning, you stop growing.”

What Both Sessions Reveal About Enterprise AI

Dr Kiran and Mr Pathak spoke from different institutions (IBM and Societe Generale) but converged on the same structural points:

  • Data quality is the foundation. Both said it explicitly. Without clean, reliable, representative data, AI fails regardless of how sophisticated the model is
  • Ethics and governance are not optional extras. They are prerequisites for trust. And trust is the only thing that makes AI adoption sustainable
  • Communication is the differentiator. Technical skills get you to the table. Communication skills determine whether your ideas travel beyond your own head
  • Continuous learning is not motivational advice. It is operational survival. Technologies that were revolutionary a decade ago (PowerBuilder, early Java frameworks) are now obsolete. The only professionals who remain relevant are those who keep upgrading

Parul University's engineering programme builds across all four dimensions. The technical curriculum covers AI, ML, data science, and cloud computing. The Practical Learning Tours provide the industry context where students see these technologies deployed at enterprise scale. The university's 200+ professors from IITs, NITs, IISc, NIDs, and NIFTs bring research and industry experience into the classroom. And programmes like the AI Tech Tour, where students interact directly with a 25-year IBM veteran and a Societe Generale CTO, provide the communication and judgment exposure that no curriculum alone can deliver.


Frequently Asked Questions

What is synthetic data and why does it matter?

Data generated by AI models (GANs, VAEs) that has the same statistical properties as real data without containing any actual personal information. It solves the problem of training AI models in finance and healthcare where regulations (RBI, SEBI) prohibit sharing real personal data. Dr Kiran at IBM explained that his team uses synthetic data to train failure detection systems when real failures are too rare to provide sufficient training examples.

What is observability in IT?

The practice of monitoring thousands of IT signals and surfacing anomalies that require human investigation. Dr Kiran used the airport analogy: IBM manages Kempegowda International Airport's IT through the Airport in a Box platform. Systems run continuously with zero downtime tolerance. Observability means you observe, detect, and act in that order, using technology to handle volume and humans to handle judgment.

Why is data quality important for AI?

Both Dr Kiran (IBM) and Mr Satyan Pathak (Societe Generale CTO) stated explicitly that AI fails without quality data. Wrong phone numbers, incorrect addresses, and incomplete records make AI unreliable. Mr Pathak stated that Python skills are useless without understanding what the data represents. The principle: garbage in, garbage out, regardless of model sophistication.

What is AI governance?

The framework of rules, oversight, and accountability that determines how AI systems are built and used. Dr Kiran referenced Europe's AI regulatory body, which permits certain AI uses and prohibits others. Governance includes bias prevention (treating all loan applicants fairly), explainability (providing reasons for AI decisions), and transparency (disclosing when AI is involved). He stated that governance cannot be left only to industries, governments must set the framework.

Explore B.Tech at Parul University and Meet The Industry Experts.

Open for admission year 2026-27

Apply now