Author: hisham

  • Agentic AI for Everyday Use

    Artificial Intelligence has shifted from reactive chatbots to proactive assistants. But what exactly is “Agentic AI”, and how can it change your daily workflow?

    Moving Beyond the Chat Box

    For the past year, we’ve interacted with AI mostly through a chat interface. You type a prompt, it responds, and then it waits for your next instruction. This is helpful, but it still requires continuous human direction.

    Agentic AI represents the next leap. Instead of just answering questions, an agent can take an objective, break it down into steps, and execute those steps across multiple tools—all with minimal supervision.

    > Key takeaway: A chatbot talks to you. An AI agent does work for you.
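    The objective-to-steps-to-tools loop described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the canned plan and the toy tools stand in for an LLM planner and real integrations.

```python
# Minimal sketch of an agentic plan-and-execute loop.
# The planner and tools are hypothetical stand-ins, not a vendor API.

def plan(objective: str) -> list[tuple[str, str]]:
    """A real agent would ask an LLM to decompose the objective.
    Here we return a canned plan for illustration."""
    return [
        ("search", objective),
        ("summarize", "search results"),
        ("draft_email", "summary"),
    ]

# Each "tool" is a toy function; real agents call APIs here.
TOOLS = {
    "search": lambda arg: f"3 articles found about '{arg}'",
    "summarize": lambda arg: f"summary of {arg}",
    "draft_email": lambda arg: f"draft based on {arg}",
}

def run_agent(objective: str) -> list[str]:
    results = []
    for tool_name, argument in plan(objective):
        results.append(TOOLS[tool_name](argument))  # execute each step
    return results

log = run_agent("competitor pricing changes")
```

    The point is the shape, not the code: the human supplies one objective, and the loop decides which tools to call and in what order.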

    Everyday Scenarios

    Here is how agentic AI is starting to appear in our everyday tools:

    1. Email Triage and Drafting: An agent can read incoming emails, categorize them by urgency, and draft replies based on your past correspondence style. You simply review and hit “send.”
    2. Calendar Tetris: Instead of asking “when are you free?”, an agent can negotiate with another person’s agent to find a mutually agreeable time, book the meeting, and send out the invites.
    3. Research Synthesis: When you need to understand a new topic, an agent can search the web, read multiple articles, synthesize the core arguments, and present you with a formatted summary document.
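    Scenario 1 can be illustrated with a toy triage rule. A real agent would classify with a language model trained on your past correspondence; the keyword lists here are stand-in assumptions:

```python
# Toy email-triage sketch: categorize by urgency using keyword rules.
# The keywords and categories are illustrative assumptions; a real
# agent would use an LLM classifier plus your correspondence history.

URGENT_WORDS = {"outage", "deadline", "urgent", "asap"}

def triage(subject: str) -> str:
    words = set(subject.lower().split())
    if words & URGENT_WORDS:          # any urgent keyword present
        return "urgent"
    if "unsubscribe" in words or "newsletter" in words:
        return "low"
    return "normal"

inbox = ["Server outage in production", "Weekly newsletter", "Lunch on Friday?"]
labels = [triage(s) for s in inbox]
```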

    The Human in the Loop

    The rise of agentic AI does not mean handing over the keys completely. The most effective systems operate with a “human in the loop” methodology. The agent does the heavy lifting, but the human sets the boundaries, provides course correction, and gives final approval for critical actions.

    As these tools become more integrated into our operating systems and daily apps, the skill of managing an AI agent will become just as important as knowing how to prompt a chatbot.

    ThriveWorks

    Empowering organizations to thrive in the age of AI through ethical, human-centered technology.

  • The ROI of AI: How to Measure What It Actually Saves You

    Generative AI is everywhere, but most organisations struggle to quantify its actual return on investment (ROI). If you are paying for ChatGPT Enterprise, Microsoft Copilot, or custom solutions, how do you know it’s actually working?

    Beyond “Hours Saved”

    The biggest trap in the current tech landscape is measuring AI purely by time saved. If a tool saves an employee 4 hours a week, but they use that time to scroll social media or perform low-value administrative tasks, the overarching business ROI is functionally zero.

    We need to stop watching the stopwatch and start measuring capacity and output quality instead.

    • Are your sales proposals winning at a higher rate because AI gave you better competitive analysis out of the gate?
    • Has your customer response time dropped from 24 hours to 2 hours without sacrificing empathy?
    • Can your operational team now handle 20 large-scale clients instead of 15 without burning out or requiring new hires?

    The 3-Step Operations Audit

    Start by identifying one core workflow—for example, processing complex vendor invoices or writing initial client briefs. Map the exact path it took before AI was introduced to your team.

    “AI doesn’t just speed up bad processes; it forces you to completely reinvent what your baseline standard of excellence is.”

    Measure the new baseline after implementing a Custom GPT or a custom automation pipeline. We consistently see the strongest ROI when AI is deeply integrated into the specific data of the business, rather than used as a generic, uncontextualised chatbot floating on the side.

    Actionable Next Steps

    Don’t look at the whole company. Pick a single, high-friction bottleneck this week. Build a strict AI workflow for that single task, run it for 14 days, and track the delta in output quality. That is where your true ROI lives.
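    The 14-day audit boils down to simple before/after arithmetic on capacity and quality. The numbers below are hypothetical; plug in your own counts and scores:

```python
# Sketch of the 14-day workflow audit: compare capacity and quality
# before vs. after the AI workflow. All numbers are hypothetical.

def roi_delta(before: dict, after: dict) -> dict:
    return {
        # relative gain in throughput for the same period
        "capacity_gain": after["units_done"] / before["units_done"] - 1,
        # change in your own quality rubric (e.g. a 1-10 review score)
        "quality_delta": after["quality_score"] - before["quality_score"],
    }

before = {"units_done": 15, "quality_score": 7.2}  # e.g. client briefs per fortnight
after = {"units_done": 20, "quality_score": 8.1}

delta = roi_delta(before, after)
```

    A positive capacity gain with a flat or rising quality score is the signal that the workflow, not just the stopwatch, improved.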

  • Why Your Team Resists AI (And How to Fix It)

    You bought the premium licenses. You sent the enthusiastic memo. You even hosted a 60-minute orientation. But three months later, nobody is actually using the AI tools in their daily workflow. Why?

    Fear of Replacement

    The baseline human reaction to automation is fear. If an AI can write a comprehensive quarterly report in 10 seconds that normally takes an analyst 5 hours of diligent work, the analyst doesn’t see a productivity boost—they see an existential threat to their job security.

    Leadership must aggressively, consistently, and transparently pivot the narrative from replacement to augmentation. AI isn’t here to do the job; it is here to do the rote busywork so the human professional can do the high-value strategic thinking that actually gets them promoted.

    The Learning Curve Fatigue

    People are exhausted from learning new enterprise software. The key to fixing resistance is integrating AI directly into the tools they already use—like Slack, Microsoft Teams, or their email client—rather than forcing them to open a new browser tab and learn how to “engineer prompts.”

    “The best AI tool is the one that feels like an invisible colleague, not a new software manual.”

    How to Drive Adoption

    • Find Internal Champions: Identify the 10% of staff who are naturally curious. Train them intensely, let them brag about their time savings, and let the FOMO (Fear Of Missing Out) drive organic adoption.
    • Ban Generic Prompts: Give your team highly specific, 1-click templates for their exact roles. Don’t make a marketer figure out how to prompt; hand them a prompt tailored to your brand voice.
    • Reward Human Empathy: Explicitly state that AI generates the draft, but the human sets the soul. Reward employees for adding incredible, empathetic human touches to AI-generated foundations.
  • Prompt Engineering is Dead. Long Live Context Engineering.

    A year ago, everyone was utterly obsessed with “Prompt Engineering”—the prevailing idea that if you just phrased your request nicely enough, used the exact right adjectives, and promised to tip the AI $20, it would suddenly produce absolute magic.

    Today, foundation models are so advanced that semantic prompt engineering is largely a dead science. What matters now, and going forward, is Context Engineering.

    Data is the New Prompt

    If you ask a raw, untethered AI to write a marketing plan for your business, it will give you generic, Wikipedia-level advice. It doesn’t matter how long your prompt is.

    However, if you feed the AI your last three successful marketing plans, your brand identity guidelines, your CEO’s preferred tone-of-voice document, and your recent customer interview transcripts… and then simply ask it to “write a marketing plan for Q3,” you will receive an output that feels magical.

    “The AI doesn’t need to be told how to write. It needs to be told exactly who you are.”

    Building Your Context Pipelines

    • Audit Your Knowledge Base: What PDFs, documents, or data dumps do new human hires read to understand your company? That is exactly what the AI needs.
    • Vector Databases: Move beyond pasting text. Start looking into systems that allow the AI to actively search your organization’s entire historical database in real time.
    • Dynamic System Prompts: Create permanent backend instructions that force the AI to adopt your business constraints before it ever sees a user’s question.
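    The three steps above can be sketched as a minimal context pipeline. This is a toy word-overlap ranker standing in for real embeddings and a vector database, and the documents and system prompt are invented for illustration:

```python
# Toy context pipeline: rank internal documents by word overlap with
# the user's request, then prepend the best matches plus a fixed
# system prompt. Production systems use embeddings + a vector DB.

SYSTEM_PROMPT = "You are the marketing assistant for Acme Pte Ltd."  # hypothetical

DOCS = {  # stand-ins for your real knowledge base
    "brand_guide": "brand voice friendly concise singapore audience",
    "q2_plan": "q2 marketing plan social campaigns budget",
    "hr_policy": "leave policy claims onboarding",
}

def overlap(a: str, b: str) -> int:
    """Crude relevance score: shared unique words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_context(question: str, k: int = 2) -> str:
    ranked = sorted(DOCS, key=lambda name: overlap(DOCS[name], question),
                    reverse=True)
    snippets = "\n".join(f"[{name}] {DOCS[name]}" for name in ranked[:k])
    return f"{SYSTEM_PROMPT}\nContext:\n{snippets}\nQuestion: {question}"

prompt = build_context("write a marketing plan for q3")
```

    Swapping the overlap score for embedding similarity changes the retrieval quality, not the architecture: the model always answers with your business context already in front of it.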

    The fundamental skill has completely shifted from “talking nicely” to the AI, to architecting the exact data pipeline the AI needs to understand your highly specific business reality.

  • The New Predator: Why AI is Dangerous for Seniors

    The rapid rise of generative AI has led to incredible medical, diagnostic, and accessibility breakthroughs for seniors. However, it has simultaneously created a terrifying and sophisticated new vector for targeted scams and fraud.

    The Rise of Voice Cloning Scams

    With just a few seconds of audio—pulled effortlessly from a public Facebook video or an old voicemail—scammers can now convincingly clone the voice of a grandchild or loved one. They use this synthetic, cloned voice to call elderly relatives in the middle of the night, frantically claiming to be in a life-or-death accident, arrested, or stranded, desperately needing money wired immediately.

    Because the voice sounds nearly identical—complete with panicked breathing patterns—the senior citizen has virtually no psychological defense mechanism against it.

    Deepfaked Authority Figures

    Beyond family members, scammers are weaponizing AI to impersonate authority figures. Hyper-realistic video avatars of bank managers, IRS agents, or local police officers are being used in localized phishing campaigns. These campaigns are no longer broken-English emails; they are perfectly written, contextually aware, and emotionally manipulative scripts written by advanced language models.

    “The concept of ‘seeing is believing’ is fundamentally broken in the generative AI era.”

    Safeguarding Our Most Vulnerable

    We need massive, robust community education. Technology alone cannot solve a human trust problem. Families must implement analog safety nets immediately:

    • Family Safe-Words: Establish a unique, offline safe-word that family members must provide over the phone if they are ever asking for financial help.
    • Secondary Verification Channels: Hang up and immediately call the person back on their known, saved contact number, or text a mutual family member.
    • Digital Footprint Auditing: Help older relatives lock down their social media profiles to prevent scammers from scraping their relationship data to use as context for attacks.

    We must aggressively teach vulnerable populations new digital defense mechanisms before these technologies become ubiquitous.

  • Responsible AI: Your Daily Ethics Checklist

    Ethics in Artificial Intelligence is entirely too often treated as a philosophical or academic exercise. But for modern organizations, the reality is stark: poor AI practices directly lead to massive legal exposure, shattered consumer trust, and irreparable reputational damage.

    You do not need a philosophy degree to deploy AI safely. You need a strict, operational checklist that every employee follows every single day.

    The 3-Point Daily Operational Checklist

    1. Data Privacy & Exposure
      Are you or your staff pasting PII (Personally Identifiable Information), HIPAA-protected data, or confidential company secrets into a public language model? Consumer tiers of public models may use your inputs for training. If your employee asks ChatGPT to summarize your Q3 financial strategy, that data could eventually surface for a competitor asking the right questions. Always use enterprise, ring-fenced tiers or zero-retention API wrappers.
    2. Human-in-the-Loop (HITL) Verification
      Did a qualified human expert physically review this AI-generated content before it was sent to a client, patient, or published live? AI models confidently hallucinate facts, fabricate legal precedents, and invent statistics. The AI is the intern; the human is the executive editor who takes the legal fall.
    3. Bias and Equity Auditing
      If you are using algorithmic models to screen job resumes, approve financial loans, or dictate resource allocation, have you objectively proven that the underlying model isn’t secretly penalising minority groups? Historical training data is inherently biased, meaning the output will be biased unless actively corrected.

    “Responsible AI isn’t an obstacle to innovation; it is the absolute prerequisite for sustainable scale.”

    Don’t wait for comprehensive government regulation to force your hand. Establish your internal acceptable-use policy today. Make it a one-page document, mandate that every employee signs it, and actively audit compliance.

  • School AI Policy Template: Singapore Secondary

    Banning AI in educational environments is not only functionally impossible; it’s a massive disservice to the students who will soon be graduating into an entirely AI-powered workforce. The cat is out of the bag, and firewalling ChatGPT on school Wi-Fi networks fundamentally misunderstands the reality of student access.

    Instead, progressive schools in Singapore need crystal clear, actively enforced Acceptable Use Policies. Students must be deliberately taught how to use AI as a sparring partner, while understanding exactly where the academic line is drawn.

    The Safe AI Framework for Classrooms

    A modern policy should classify AI use into Red, Yellow, and Green zones:

    • Green Light (Encouraged): Using AI for brainstorming essay topics, generating customized study flashcards, debating historical counter-factuals, or simplifying complex concepts (e.g., “Explain quantum physics to a 10-year-old”).
    • Yellow Light (Supervised): Using AI to structure an essay outline or check for grammatical flow prior to submission. This requires strict citations acknowledging the AI’s assistance.
    • Red Light (Banned): Using AI to generate final copy, solve algebraic equations without showing the step-by-step logic, or citing AI-hallucinated facts.

    Integrating into the Grading Rubric

    We cannot assess students based on outdated metrics. Assessments must aggressively shift away from “what you know” (which an AI can recite perfectly in 12 seconds) to “how you think and critique.”

    “If an assignment can be completed entirely by a single prompt, the assignment is broken, not the student.”

    We strongly recommend moving towards oral presentations, in-class synthesis writing (with devices closed), and advanced projects where the student’s prompt-iteration process is actually graded alongside the final output. Teach them that AI is a highly capable intern that requires brilliant human management.