Technology Evaluation Prompt Template

Evaluate a technology, tool, or platform against specific criteria with scoring and a clear recommendation.

The Prompt

ROLE: Technology evaluator and procurement strategist who has run vendor selection processes for engineering and operations teams — you know that the right technology decision is not the one with the highest feature score, but the one that best fits the team's capabilities, the organisation's constraints, and the problem's actual requirements.

CONTEXT: Technology evaluations fail in predictable ways: over-weighting features at the expense of operational fit, underweighting hidden costs (migration, training, maintenance), and ignoring the risk of vendor lock-in or instability. A good evaluation is honest about what we don't yet know and flags where assumptions are doing the work.

TASK: Evaluate the technology or tool below against the specified use case and criteria, producing a clear recommendation with an honest risk assessment.

RULES:
• Scoring must be justified — a score of 4/5 on security must state what specifically earns a 4 and what would be needed for a 5
• Total cost of ownership must include licensing, implementation, training, integration, and ongoing maintenance — not just the subscription price
• The "key risks" section must include risks specific to the technology AND risks specific to this organisation's context
• Include a "what we don't know" section — honest uncertainty is more useful than false confidence
• The final recommendation must make a specific statement — adopt, adopt with conditions, pilot first, or do not adopt — with the primary reason

CONSTRAINTS: Distinguish between current state (what the tool does today) and vendor roadmap claims (what they plan to do — weight these at 50% until delivered). Flag any score that is based on vendor-provided information rather than verified third-party evidence.

EDITABLE VARIABLES:
• [TECHNOLOGY_TOOL] — the specific technology, tool, or platform being evaluated
• [USE_CASE] — what problem this tool is being evaluated to solve
• [EVALUATION_CRITERIA] — the dimensions that matter most for this decision (or use the defaults)
• [ORGANISATIONAL_CONTEXT] — team size, existing tech stack, budget range, timeline
• [ALTERNATIVES_CONSIDERED] — other tools being evaluated (for comparison)

OUTPUT FORMAT:

**Evaluation: [Tool Name] for [Use Case]**

**Scoring Summary:**

| Criterion | Weight | Score (1–5) | Weighted | Justification |
|-----------|--------|-------------|----------|---------------|
| Functionality fit | X% | | | |
| Scalability | X% | | | |
| Security & compliance | X% | | | |
| Vendor stability | X% | | | |
| Total cost of ownership | X% | | | |
| Integration complexity | X% | | | |
| Learning curve | X% | | | |
| **TOTAL** | 100% | | | |

**Strengths:** [3 specific, evidence-based strengths]

**Key Risks:**
• Technology risk: [text]
• Organisational fit risk: [text]
• Vendor/market risk: [text]

**Total Cost of Ownership (3-year estimate):** [Licensing + implementation + training + integration + maintenance = total]

**What we don't know:** [Specific uncertainties that affect the decision]

**Comparison to alternatives:** [Brief comparison to [ALTERNATIVES_CONSIDERED]]

**Recommendation:** [Adopt / Adopt with conditions / Pilot first / Do not adopt]

**Primary reason:** [The single most important factor driving the recommendation]

**Conditions (if conditional):** [What must be true before full adoption]

QUALITY BAR: A technology decision-maker who uses this evaluation should feel they can defend the recommendation to a sceptical colleague — not because the score is high, but because every dimension of the decision is transparent and the risks are honestly stated.

Make it specific to you

PromptITIN asks a few questions and builds a version tailored to your use case.

How to use this template

1. Copy the template. Click the copy button to grab the full prompt text.
2. Fill in the placeholders. Replace anything in [BRACKETS] with your specific details.
3. Paste into any AI tool. Works with ChatGPT, Claude, Gemini, Cursor, and more.
4. Or enhance with AI. Sign in to PromptITIN and let AI tailor the prompt to your exact situation in seconds.

Why this prompt works

The weighting system in the scoring table forces the evaluator to commit to what actually matters before scoring, preventing post-hoc rationalisation of a preferred option. The 'what we don't know' section is the most practically useful addition — technology decisions made on incomplete information without acknowledging the gaps are the ones that produce the worst surprises post-implementation.
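To make the mechanics concrete, here is a minimal sketch of the weighted-scoring step in Python. Every criterion weight, score, and roadmap flag below is an illustrative assumption rather than real evaluation data; the 50% discount applied to roadmap-based scores follows the constraint in the prompt.

```python
# Minimal sketch of the template's weighted-scoring step.
# Every weight, score, and roadmap flag is an illustrative assumption.

# (criterion, weight, raw score 1-5, is the score based on roadmap claims?)
criteria = [
    ("Functionality fit",       0.25, 4, False),
    ("Scalability",             0.15, 3, True),   # vendor roadmap claim
    ("Security & compliance",   0.20, 4, False),
    ("Vendor stability",        0.10, 3, False),
    ("Total cost of ownership", 0.15, 2, False),
    ("Integration complexity",  0.10, 3, False),
    ("Learning curve",          0.05, 4, False),
]

# Commit to the weights before any scoring starts: they must sum to 100%.
assert abs(sum(w for _, w, _, _ in criteria) - 1.0) < 1e-9

total = 0.0
for name, weight, score, roadmap_only in criteria:
    effective = score * 0.5 if roadmap_only else score  # weight roadmap claims at 50%
    weighted = weight * effective
    total += weighted
    note = "  (roadmap claim, discounted)" if roadmap_only else ""
    print(f"{name:<24} {weight:>4.0%}  {effective:4.1f}  {weighted:5.2f}{note}")

print(f"{'TOTAL':<24} 100%        {total:5.2f} / 5.00")
```

Fixing the weights before any scores exist is the point: if the weights move after scoring, the evaluation is justifying a preferred option rather than measuring it.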

Tips for best results

  • Have two team members run this evaluation independently and compare their scores before discussing — discrepancies reveal where your criteria need more definition or where you have conflicting priorities
  • Always include a 'do nothing / status quo' option in the comparison — it forces the evaluation to justify the cost and disruption of change, not just compare alternatives
  • Vendor demos are marketing exercises — request a sandbox environment or a technical proof of concept for your specific use case before scoring functionality
  • The integration complexity score is the most consistently underestimated — ask your engineering team to estimate integration hours before accepting a vendor's 'quick and easy' assurances; those hours feed directly into the TCO, as the sketch below shows
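To make the cost arithmetic concrete, here is a minimal sketch of the 3-year total-cost-of-ownership calculation the prompt's rules require. Every figure below (the licence price, hour estimate, and hourly rate) is a hypothetical placeholder chosen only to show how the line items combine.

```python
# Hypothetical 3-year TCO sketch: every figure is a placeholder assumption
# used to show how the line items combine, not a real quote.
YEARS = 3

licensing_per_year   = 24_000   # the subscription price the vendor quotes
implementation       = 15_000   # one-off setup and data migration
training             = 5_000    # one-off team onboarding
integration_hours    = 120      # your engineering team's estimate, not the vendor's
hourly_rate          = 100
integration          = integration_hours * hourly_rate
maintenance_per_year = 6_000    # upgrades, support, internal admin time

subscription_only = licensing_per_year * YEARS
tco = (subscription_only + implementation + training
       + integration + maintenance_per_year * YEARS)

print(f"Subscription alone over {YEARS} years: ${subscription_only:,}")  # $72,000
print(f"Full {YEARS}-year TCO:                  ${tco:,}")               # $122,000
```

In this made-up example the subscription is under 60% of the true three-year cost, which is exactly the gap the TCO rule in the prompt exists to expose.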

More Research templates

Summarise a Paper

Get a structured academic paper summary covering thesis, key findings, methodology, limitations, and practical implications — written for non-experts.


Competitive Analysis

Analyse up to 3 competitors across pricing, features, target market, strengths, and weaknesses — with 3 strategic opportunities for your business.


Market Research Brief

Understand any market with size estimates, 3–5 key trends, customer segments, main competitors, barriers to entry, and a 1-year outlook.
