FlairX.ai

B2B · AI · SaaS · Startup · Conversational UI · 0 to 1 · Web

Reimagining the Questionnaire Builder with an AI Copilot

Context

FlairX is an Interview-as-a-Service platform that helps companies scale technical hiring by outsourcing candidate interviews to vetted human experts. It offers end-to-end capabilities including job post creation, interviewer-candidate matching, and feedback workflows. While human-led interviews remain the USP, manual operations were holding back scale.


To address this, we set out to develop AI-enhanced workflows for interview template generation, AI-led interviews, and AI-generated feedback, beginning with the Questionnaire Builder.

My Role

As a UI/UX Designer, I contributed to:

  • UX Research

  • User Flows

  • Ideation

  • Wireframing

  • Mockups

  • Prototyping

  • Stakeholder Reviews

Team

2 UX Designers, 1 Product Manager, 3 Developers, 1 CEO

Tools

  • Figma

  • Miro

  • Canva

  • Notion

  • Google Docs

Timeline

2 weeks from receiving the requirements to delivery


Overview

Problems & Opportunities

Manual Bottleneck

Admins were using ChatGPT externally to generate interview questions based on the job description, then manually entering them into FlairX.

What if we could

Leverage AI to generate editable question sets and skill categories from job descriptions?
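
To make this concrete, here is a minimal TypeScript sketch of how a job description could be turned into an editable, structured question set. The endpoint, schema, and field names are hypothetical illustrations, not FlairX's actual implementation.

```ts
// A minimal sketch, not FlairX's actual implementation. The endpoint
// and schema below are hypothetical illustrations of turning a job
// description into an editable, structured question set.

interface Question {
  text: string;
  difficulty: "easy" | "medium" | "hard";
  editable: boolean; // every AI output stays editable by the admin
}

interface SkillCategory {
  name: string;        // e.g. "System Design"
  weightage: number;   // relative priority, used for time allocation
  timeMinutes: number; // interview time allotted to this category
  questions: Question[];
}

// Hypothetical helper: ask an LLM-backed service to draft skill
// categories and questions from a raw job description.
async function generateQuestionnaire(jobDescription: string): Promise<SkillCategory[]> {
  const response = await fetch("/api/copilot/generate", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jobDescription }),
  });
  if (!response.ok) throw new Error(`Generation failed: ${response.status}`);
  return (await response.json()) as SkillCategory[];
}
```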

Job Description Quality

Most job descriptions lacked structured details like skill priorities, time allocation, or clear must-have skills.

What if we could

Assign weightage and time allocation dynamically?
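
As a rough illustration of dynamic allocation, assuming weightage is a relative priority per skill, the interview duration can simply be distributed in proportion to the weights. The sketch below is hypothetical, not production logic.

```ts
// A minimal sketch: distribute a fixed interview duration across
// skills in proportion to their weights, rounding to whole minutes.
// Names are illustrative, not FlairX's actual code.
function allocateTime(
  skills: { name: string; weight: number }[],
  totalMinutes: number
): { name: string; minutes: number }[] {
  const totalWeight = skills.reduce((sum, s) => sum + s.weight, 0);
  return skills.map((s) => ({
    name: s.name,
    minutes: Math.round((s.weight / totalWeight) * totalMinutes),
  }));
}

// Example: a 60-minute interview weighted toward system design.
allocateTime(
  [
    { name: "System Design", weight: 3 },
    { name: "Coding", weight: 2 },
    { name: "Behavioral", weight: 1 },
  ],
  60
); // → System Design 30 min, Coding 20 min, Behavioral 10 min
```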

Inefficiencies

The manual process caused inconsistencies, misaligned interview focus, and poor candidate evaluations.

What if we could

Let users prompt AI to improve or customize questions?
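
One way to picture this interaction: the user's free-text prompt steers a rewrite while the original question is kept alongside the revision, so edits stay reversible. The endpoint and names below are hypothetical.

```ts
// A minimal sketch, assuming a chat-style copilot endpoint (hypothetical).
// The user's instruction steers the rewrite; the original question is
// returned alongside the revision so the edit can be undone.
async function refineQuestion(
  question: string,
  userPrompt: string // e.g. "make it harder and specific to Kubernetes"
): Promise<{ original: string; revised: string }> {
  const response = await fetch("/api/copilot/refine", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, instruction: userPrompt }),
  });
  if (!response.ok) throw new Error(`Refine failed: ${response.status}`);
  const { revised } = (await response.json()) as { revised: string };
  return { original: question, revised };
}
```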

Scalability

With growth, maintaining quality across hundreds of job roles and interviewers became unsustainable.

What if we could

Reduce turnaround time from 2.5 days to under 30 minutes?

Research

Observations & Stakeholder Insights

As we dove into the existing template and questionnaire workflow, we spoke directly with the CEO, observed operations in real time, and surfaced core pain points that were slowing down scale, damaging confidence, and compromising quality.

Takeaways from discussions with internal teams

Original manual flow & why it wasn't working

Key Takeaways:

  • Despite having a growing library of “master” and “company-specific” templates, stakeholders consistently avoided reusing them. Templates were perceived as generic, outdated, or misaligned with current role demands. They often started fresh or heavily modified existing ones, defeating the purpose of reuse.


  • Clients expected custom-crafted, high-difficulty questions that reflected their domain and hiring bar. Repetition, generic prompts, and low-complexity questions diminished trust and led to dissatisfaction.


  • There was no clear tagging, hierarchy, or smart recommendation system. Admins and interviewers struggled to find suitable question sets and often fell back on manual curation.


  • The system lacked intelligent assistance to help teams craft high-quality, unique questions or guide difficulty levels, resulting in time-consuming manual effort and quality inconsistencies.

Competitive Analysis

I studied 10+ players in the domain, focusing on the kind of experience each provides. For every platform, I captured strengths, weaknesses, and notes:

  • Strengths: Video-based screening with AI scoring and facial analysis
    Weaknesses: Concerns around transparency, bias, and candidate trust
    Notes: Over-reliance on visual/emotion cues

  • Strengths: Human-led technical interviews with structured feedback systems
    Weaknesses: Doesn’t scale well for async or AI-first approaches
    Notes: Heavily human-dependent

  • Strengths: Real-time interview recording with live AI notes and recruiter insights
    Weaknesses: Lacks customizable templates or prompt-based question generation
    Notes: Good foundation for AI note-taking

  • Strengths: Structured, expert-led interviews with AI scoring and alerts
    Weaknesses: Limited question flexibility, rigid flow
    Notes: Strong on rubric adherence, weak on customization

  • Strengths: Automated proctoring, AI-based assessments, and plagiarism detection
    Weaknesses: UI not intuitive, limited feedback insights for interviewers
    Notes: Great for coding but weak on collaboration

  • Strengths: Structured interviews, async support, ATS integration
    Weaknesses: Lacks AI-powered question generation and deep customization
    Notes: Not designed for AI-enhanced flows

  • Strengths: Real-time AI-generated interview notes and summaries; elegant UX
    Weaknesses: Lacks customization and template-driven workflows for different roles
    Notes: Strong on passive AI capture, weak on AI-driven prep

Key Takeaways:

  • No platform provided prompt-driven question building with smart refinement tools.
  • Most lacked human-in-the-loop design for feedback control, transparency, and trust.
  • Metaview and BrightHire are excellent at capturing interviews, but lack depth in AI-driven question creation.

  • Our opportunity lay in building a modular, AI-assisted flow that balances automation with editorial control, ensuring questions are aligned with job descriptions and easy to personalize.

Desk Research

Before designing AI-powered features, we reviewed laws, ethics, and product patterns to guide responsible implementation.

What the Law Says

  • AI must be disclosed to candidates upfront (e.g., NYC law).

  • Bias audits are mandatory for AI hiring tools.

  • Candidate consent is required if AI is used for evaluation.

Design Move

We used labels, tooltips, and edit controls to make AI involvement transparent and keep interviewers in charge.

What Ethics Recommend

  • AI should support, not replace human decisions.

  • All outputs must be editable, explainable, and rated for confidence.

  • Transparency should be built into the interface.

Design Move

We added “Why this suggestion?” links and visible confidence notes for all AI-generated content.
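
To illustrate both design moves, here is a hypothetical TypeScript/DOM sketch of how a suggestion card could disclose AI involvement while keeping the interviewer in charge. It is a sketch of the pattern, not the shipped UI; all names are illustrative.

```ts
// A hypothetical sketch of the transparency pattern described above: an
// "AI-generated" label with a tooltip, a visible confidence note, a
// "Why this suggestion?" toggle, and an always-editable field.
function renderAISuggestion(
  container: HTMLElement,
  suggestion: { text: string; confidence: number; rationale: string }
): void {
  const badge = document.createElement("span");
  badge.textContent = `AI-generated · confidence ${Math.round(suggestion.confidence * 100)}%`;
  badge.title = "Generated by AI; review and edit before use"; // tooltip disclosure

  const editor = document.createElement("textarea"); // output stays editable
  editor.value = suggestion.text;

  const whyLink = document.createElement("button");
  whyLink.textContent = "Why this suggestion?";
  const rationale = document.createElement("p");
  rationale.textContent = suggestion.rationale;
  rationale.hidden = true;
  whyLink.addEventListener("click", () => {
    rationale.hidden = !rationale.hidden;
  });

  container.append(badge, editor, whyLink, rationale);
}
```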

What Others Are Doing

  • Platforms like Metaview and BrightHire use AI to assist, not take over.

  • Human-AI collaboration is the new standard.

Who are the users?

Currently, the tool is used by FlairX’s operations and admin teams to create and manage interview questionnaires. Over time, it will also be available to client-side HR and recruiters to tailor assessments for their roles, and to interviewers, both internal and third-party, for review before interviews.

Navigating Complexity

Ongoing collaboration and a phased delivery approach helped us navigate complexity and focus on what mattered most. Here are some of the complexities we dealt with:

Novelty

This was the team’s first deep AI-led UX initiative, which made it difficult to predict what was feasible and how fast.

Timeline

Time was a constraint: we aimed to deliver the MVP in just one month, requiring design, engineering, and product to work in parallel.

Feasibility

There was extensive back-and-forth between design and engineering to validate what could realistically be built and how.

Collaboration

We held long cross-functional sessions to align on what mattered most, how to phase features, and how to make trade-offs.

Scope

Requirements were dynamic, evolving with technical feedback, legal considerations, and founder inputs.


Compliance

AI ethics and legality added another layer of complexity: we had to continuously research evolving AI laws and design for compliance.

Design & Iteration

Designing an AI-driven flow for interview templates came with unique complexities. We were shaping an intelligent, assistive experience that needed to balance automation with human control. Our journey unfolded in phases:

It was a complex flow with many moving parts, so we began by mapping the user flow.

Things had to move quickly, so we created quick wireframes and iterated on them within a day.

Four days and four rounds of ideation, design, and feedback led to the creation of the AI copilot.

Final Screens

A Glimpse of the Final Prototype

My Learnings

Working on this flow pushed me to balance ambition with practicality. I had to make design decisions knowing that not everything could be built at once, and that clarity for the user was more valuable than packing in every feature.

I learned how much smoother things move when you keep communication open with both technical and non-technical teammates, and how breaking down work into smaller, testable phases helps everyone stay aligned.

More than anything, I saw the value of designing with the future in mind, creating something that works well now but can grow without losing its simplicity.
