In this assessment, you will be provided with a simplified version of a payment gateway dashboard's source code.
We reduced the complexity of the code, but also introduced intentional issues across the codebase.
You will need to use AI coding agents (Claude Code, Codex, Cursor, etc.) to study the existing codebase,
understand the project flow, identify issues, and fix them accordingly.
This assessment covers the full Software Development Lifecycle (SDLC):
Planning & Analysis, Development, Testing, Code Review, Debugging, Deployment, and Documentation.
We expect you to use AI coding agents throughout; your ability to effectively direct and validate AI agent output is what we are evaluating.
You must submit your prompt history and AI output logs for each challenge (see Compulsory section below).
In short, you will have:
- 3 x entry challenges (Setup)
- 8 x main challenges (Development, Testing, Review, Deployment)
- Bonus challenges
- Compulsory deliverables (Prompt logs, AI Retrospective, Git commits)
You will be given 3 days to complete the assessment.
Here's some brief relationship info between modules that will help you understand the code better:
Recover or update the password to get into the dashboard
Challenges
Study the code and figure out how to do CRUD on a payment account via the API. Use ag-dashboard's group (MCP) in the API call for testing, and document it (best to attach a Postman screenshot / URL).
DEVELOPMENT
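As a starting point, the four calls can be sketched like this. This is a minimal sketch that assumes a Bearer-token JSON API under `/api/payment-accounts`; the real routes, auth scheme, and field names must be confirmed against the codebase:

```python
# Hypothetical sketch of the payment-account CRUD calls. The endpoint
# paths, the payload fields, and the way the "group" (MCP) is passed
# are assumptions to verify against the actual routes/controllers.
import json
import urllib.request

BASE_URL = "http://localhost:8000/api"  # assumed local dev host

def build_request(method, path, payload=None, token="YOUR_API_TOKEN"):
    """Assemble an authenticated JSON request for the payment-account API."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

# The four CRUD operations, using the ag-dashboard group (MCP) as instructed:
create = build_request("POST", "/payment-accounts",
                       {"group": "MCP", "name": "test-account"})
read   = build_request("GET", "/payment-accounts/1?group=MCP")
update = build_request("PUT", "/payment-accounts/1",
                       {"group": "MCP", "name": "renamed-account"})
delete = build_request("DELETE", "/payment-accounts/1")
```

Each request would be sent with `urllib.request.urlopen(...)` once the dashboard is running; the same calls translate directly into a Postman collection for the screenshots.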
The payment account API above has a CORS issue when embedded via AJAX on the frontend; apply a fix for it.
DEBUGGING
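The fix depends on the project's stack, which is not shown here. As a language-agnostic illustration, this minimal Python WSGI middleware sketches the two parts any CORS fix needs: answering the browser's OPTIONS preflight, and attaching the Access-Control-* headers to normal responses. The same headers would be added in whatever framework the dashboard actually uses:

```python
# Illustrative CORS fix as WSGI middleware; adapt the header values
# to the project's real framework and frontend origin.

CORS_HEADERS = [
    # Lock this down to the real frontend origin in production;
    # "*" is only acceptable for local testing.
    ("Access-Control-Allow-Origin", "*"),
    ("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"),
    ("Access-Control-Allow-Headers", "Authorization, Content-Type"),
]

def cors_middleware(app):
    """Wrap a WSGI app so AJAX calls from another origin are allowed."""
    def wrapped(environ, start_response):
        if environ.get("REQUEST_METHOD") == "OPTIONS":
            # Preflight request: reply immediately, no body needed.
            start_response("204 No Content", list(CORS_HEADERS))
            return [b""]
        def cors_start_response(status, headers, exc_info=None):
            # Append the CORS headers to every ordinary response.
            return start_response(status, headers + CORS_HEADERS, exc_info)
        return app(environ, cors_start_response)
    return wrapped
```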
Create an hourly report with filters to highlight successful / failed transactions (jobs); refer to the video below. You can test the report with data from 25-05-2021 ~ 01-06-2021.
DEVELOPMENT
Chart Filter Requirement
Date
Group
Bank
Job Type
Make the chart's points clickable, linking each point to its corresponding job (open the job view in a _blank page).
DEVELOPMENT
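The aggregation behind such a report can be sketched as follows. The field names (`created_at`, `status`, `group`, `bank`, `type`) are assumptions to be mapped onto the real job model:

```python
# Sketch of the hourly report logic: bucket jobs by hour and count
# success vs. fail, with the listed filters applied beforehand.
from collections import defaultdict
from datetime import datetime

def hourly_report(jobs):
    """jobs: iterable of dicts with 'created_at' (datetime) and 'status'."""
    buckets = defaultdict(lambda: {"success": 0, "fail": 0})
    for job in jobs:
        # Truncate the timestamp to the top of the hour.
        hour = job["created_at"].replace(minute=0, second=0, microsecond=0)
        key = "success" if job["status"] == "success" else "fail"
        buckets[hour][key] += 1
    return dict(sorted(buckets.items()))

def apply_filters(jobs, group=None, bank=None, job_type=None):
    """Pre-filter jobs by group / bank / job type before aggregating."""
    return [j for j in jobs
            if (group is None or j.get("group") == group)
            and (bank is None or j.get("bank") == bank)
            and (job_type is None or j.get("type") == job_type)]
```

The date filter would bound the query itself (e.g. `created_at` between 25-05-2021 and 01-06-2021), and each chart point would carry its job id so the click handler can open the job view in a _blank page.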
Use an AI agent to scan the codebase for bugs, logic errors, and security issues. Document your findings clearly:
what the AI agent found vs. what you found by manually reviewing the AI's output.
CODE REVIEW
There are multiple intentional bugs embedded in the codebase. The more you find, the better.
Write a test suite for core modules (API endpoints, model relationships) using AI agents.
Must include at least one edge case the AI initially missed that you caught and added manually.
TESTING
Show us you can evaluate AI-generated tests critically, not just accept them blindly.
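As an illustration of the expected shape (not the actual modules): a hypothetical amount validator from a payment flow, with the boundary-input edge case that AI-generated suites commonly skip:

```python
# Hypothetical helper plus pytest-style tests. The validator itself is
# an invented example; the point is the manually added edge case.
from decimal import Decimal, InvalidOperation

def validate_amount(raw):
    """Parse a payment amount; reject non-numbers, zero, and negatives."""
    try:
        amount = Decimal(str(raw))
    except InvalidOperation:
        raise ValueError(f"not a number: {raw!r}")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return amount

def test_happy_path():
    assert validate_amount("10.50") == Decimal("10.50")

def test_edge_case_zero_amount():
    # Edge case added manually: AI drafts often cover only happy paths,
    # missing the boundary between valid and invalid amounts.
    try:
        validate_amount("0.00")
        assert False, "zero amount should be rejected"
    except ValueError:
        pass
```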
Based on the chart you made, do some analysis. Identify failed jobs that have no reason recorded, investigate via the limited info in the job log, and list possible causes & fixes (documentation).
DEBUGGING
Tips
Containerize the application:
- Create a Dockerfile + docker-compose.yml using an AI agent, then host it on the cloud (any platform will do)
- Attach the login credentials that you recovered in the step above for us to review
DEVOPS / DEPLOYMENT
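A minimal docker-compose.yml sketch, assuming a web app plus a MySQL database; the image, ports, and environment names are placeholders to adapt to the actual project:

```yaml
# Sketch only: confirm the app's port, database engine, and env vars
# against the real codebase before deploying.
services:
  app:
    build: .                   # Dockerfile in the project root
    ports:
      - "8000:8000"
    environment:
      DB_HOST: db
      DB_DATABASE: dashboard
      DB_USERNAME: dashboard
      DB_PASSWORD: change-me   # placeholder, not a real credential
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: dashboard
      MYSQL_USER: dashboard
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```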
Register an account at https://gitlab.agsmartit.com and upload your modified source code to AG GitLab.
Note: After registering, please ping us to approve your GitLab account before you can push your code.
*DO NOT* upload your code to any public Git host, e.g. GitHub or Bitbucket.
Your interview will be deemed invalid and forfeited if we find you have uploaded the code to another platform.