Most AI literacy frameworks are either too vague ("understand AI's impact on society") or too technical ("learn gradient descent"). Neither helps someone actually work with AI tools effectively. Rick Dakan and Joseph Feller built something different—a framework that recognizes AI as a thinking partner while acknowledging that this only works if humans develop specific competencies.

Their Framework for AI Fluency, developed through courses at Ringling College of Art and Design and Cork University Business School, identifies what people actually need to know to use AI well. Not theory. Not ethics lectures disconnected from practice. Practical competencies that determine whether AI makes your work better or just creates more cleanup.

Core Definition: AI Fluency is the ability to work effectively, efficiently, ethically, and safely within emerging modalities of human-AI interaction. It's not about understanding how AI works—it's about understanding how to work with AI.

Why Most AI Frameworks Miss the Point

Education and industry are flooding the market with AI frameworks. Most follow predictable patterns: start with AI history, explain technical concepts nobody needs, add ethics as an afterthought, conclude with vague guidance about "responsible use."

These frameworks treat AI as something to understand rather than something to use. They optimize for comprehensive coverage instead of practical competence. Students learn about neural networks but can't evaluate whether an AI output is actually useful. Employees complete training modules but still don't know when to use AI versus when to do the work themselves.

The Dakan-Feller framework inverts this approach. It starts with actual human-AI interaction patterns, identifies the competencies those patterns require, and builds a structure that works across tools, platforms, and use cases. Platform-agnostic, context-flexible, and ethics-centered by design rather than as an addendum.

Three Modalities of Interaction

Before defining competencies, the framework maps how humans actually interact with AI. Three distinct modalities emerge from observing current practice.

Modality 1: Automation

AI performs human-defined tasks independently. You give clear instructions, the model executes, you use the output. This is the efficiency play—offloading repetitive, time-consuming, or data-intensive work.

Examples: Email drafting, content summaries, basic code generation, social media posts, data formatting.

Key Requirement: Clear task definition and quality control. You need to know exactly what you want and verify you got it.
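To make "clear task definition and quality control" concrete, here is a minimal sketch of the automation pattern. The function names and the word-count check are illustrative assumptions, not part of the framework; `generate` stands in for whatever AI platform you actually use.

```python
from typing import Callable

def summarize_report(
    report_text: str,
    generate: Callable[[str], str],  # any text-generation call: prompt in, text out
    max_words: int = 150,
) -> str:
    # Clear task definition: state the output, its constraints, and its audience.
    prompt = (
        "Summarize the following report for a non-technical manager. "
        f"Use at most {max_words} words, plain language, and no bullet points.\n\n"
        + report_text
    )
    draft = generate(prompt)

    # Quality control: verify you got what you asked for before using it.
    if len(draft.split()) > max_words:
        draft = generate(f"Shorten this to {max_words} words or fewer:\n\n{draft}")
    return draft
```

The point isn't the specific check; it's that an automated task has an explicit definition going in and an explicit verification coming out.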

Modality 2: Augmentation

AI and human co-define and co-execute tasks iteratively. This isn't you telling AI what to do—it's collaborative problem-solving where both parties contribute to shaping the solution.

Examples: Complex writing projects, research papers, sophisticated coding tasks, creative work with multiple constraints.

Key Requirement: Dynamic interplay between human judgment and AI capability. You're building something together, not delegating a task.

Modality 3: Agency

You configure AI to perform future tasks independently, including for other users. You're not defining a specific output—you're defining how the AI should behave across situations you won't directly control.

Examples: Interactive game characters, tutoring systems, customer service chatbots, personalized learning agents.

Key Requirement: Sophisticated understanding of AI capabilities and limitations. You're creating something that will make decisions without you present.
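Because an agent acts without you present, its intended behavior has to be written down somewhere explicit. Below is one hypothetical way to capture it, as a behavioral configuration that becomes the agent's standing instructions; the class and field names are invented for illustration, not drawn from the framework.

```python
from dataclasses import dataclass, field

@dataclass
class AgentBehavior:
    """Behavioral configuration for an AI agent that will act without you present."""
    role: str
    tone: str
    must_do: list[str] = field(default_factory=list)
    must_not_do: list[str] = field(default_factory=list)
    escalate_when: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        # Flatten the behavioral rules into instructions the model sees on every turn.
        return "\n".join([
            f"You are {self.role}. Maintain a {self.tone} tone.",
            "Always: " + "; ".join(self.must_do),
            "Never: " + "; ".join(self.must_not_do),
            "Hand off to a human when: " + "; ".join(self.escalate_when),
        ])

support_agent = AgentBehavior(
    role="a customer-support assistant for a small software company",
    tone="friendly, concise",
    must_do=["confirm the user's product version", "point to the relevant help article"],
    must_not_do=["promise refunds", "speculate about unreleased features"],
    escalate_when=["the user reports a security issue", "the user asks to cancel a contract"],
)
```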

Reality Check: Real work doesn't stay in one modality. You might automate data processing, augment analysis, and create an agent to present findings. Fluency means knowing which modality fits which part of your workflow.

The Four Ds

The framework identifies four interconnected competencies that enable effective work across all modalities. They're called the "Four Ds": the name is memorable, but more importantly, the set is comprehensive without being overwhelming.

1. Delegation - Choosing When and How to Use AI

Delegation is knowing if, when, and how to involve AI. Not every task benefits from AI. Not every AI tool fits every task. Delegation means understanding your goals, the available platforms, and how to match them.

Three Sub-Competencies:

Goal and Task Awareness: Can you deconstruct a project into AI-appropriate, human-appropriate, and collaborative components? Can you envision an effective goal and understand what achieving it requires?

Platform Awareness: Do you know what current AI tools can and can't do? Can you evaluate platforms based on your project's specific requirements, budget, and constraints?

Task Delegation: Can you balance AI and human capabilities throughout a project? Do you understand when automation works versus when augmentation or agency makes sense?

This isn't about maximizing AI use. It's about optimal use—knowing when AI helps and when it doesn't.
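One way to make delegation an explicit choice rather than a habit is to walk a project's components through a simple decision rule. The sketch below does exactly that; the heuristics are my own illustrative assumptions, not rules the framework prescribes.

```python
def suggest_modality(traits: dict) -> str:
    """Toy heuristic for matching a task to a modality, or to no AI at all.

    The decision rules here are illustrative assumptions; the point is that
    delegation is an explicit, per-task choice.
    """
    if traits.get("accountability_i_cannot_delegate", False):
        return "human only"
    if traits.get("serves_other_users_without_me", False):
        return "agency"
    if traits.get("well_specified", False) and traits.get("easy_to_verify", False):
        return "automation"
    return "augmentation"

# Deconstruct a project into components, then decide each one deliberately.
project = {
    "clean the survey data": {"well_specified": True, "easy_to_verify": True},
    "interpret the findings": {"well_specified": False},
    "answer respondents' follow-up questions": {"serves_other_users_without_me": True},
    "sign off on the final report": {"accountability_i_cannot_delegate": True},
}

for task_name, traits in project.items():
    print(f"{task_name}: {suggest_modality(traits)}")
```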

2. Description - Communicating Effectively With AI

Description covers prompting, but goes deeper. It's about translating creative vision into AI-understandable terms, structuring iterative collaboration, and defining future behaviors for agent-based systems.

Three Sub-Competencies:

Product Description: Can you clearly articulate desired output characteristics? Can you translate "I want something good" into specific, actionable guidance that produces something good?

Process Description: Can you engage in dynamic, multi-turn dialogue with AI? Can you break complex tasks into manageable prompts that build toward a goal?

Performance Description: Can you define how AI should behave in future interactions you won't control? Can you anticipate user needs and translate them into behavioral guidelines?

Poor description creates garbage outputs that waste time. Good description creates starting points that accelerate work.
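The distinction between the three kinds of description is easiest to see side by side. The prompts below are invented examples rather than framework material: the first specifies a product, the second structures a process, the third defines future performance.

```python
# Product description: characteristics of the output itself.
product_prompt = (
    "Write a 300-word product announcement for our scheduling app. "
    "Audience: existing customers. Tone: plainspoken, no superlatives. "
    "Must mention the new calendar sync and the September rollout date."
)

# Process description: how the collaboration should unfold, step by step.
process_prompts = [
    "First, list the five claims in this draft that most need evidence.",
    "For each claim, suggest what kind of source would support it.",
    "Now revise the draft, flagging any claim we still can't support.",
]

# Performance description: how an agent should behave in interactions you won't control.
performance_prompt = (
    "You are a writing tutor for first-year students. Ask questions before "
    "suggesting edits, never rewrite whole paragraphs, and encourage the "
    "student to explain their intent in their own words."
)
```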

3. Discernment - Evaluating AI Outputs and Processes

Discernment is critical evaluation—assessing quality, identifying biases, recognizing when AI helps versus when it derails work. It applies to outputs (is this good?), processes (is this collaboration working?), and performance (does this agent behave appropriately?).

Three Sub-Competencies:

Product Discernment: Can you critically assess AI-generated content? Can you identify strengths, weaknesses, and specific improvements needed?

Process Discernment: Can you evaluate whether human-AI collaboration is productive? Can you identify which parts work and which create friction?

Performance Discernment: Can you assess whether AI-driven behaviors create positive user experiences? Can you gather and interpret feedback to refine agent behavior?

Without discernment, you accept mediocre outputs, persist with broken processes, and deploy agents that frustrate users. Discernment is the difference between AI-assisted work and AI-degraded work.

Critical Insight: The first three Ds are about capability. The fourth is about responsibility. You can be great at delegation, description, and discernment while still using AI irresponsibly. Diligence closes that gap.

4. Diligence - Taking Responsibility for AI-Assisted Work

Diligence means ethical use, transparency about AI involvement, and accountability for final products. It recognizes that "the AI did it" isn't an excuse—you chose to use AI, you configured it, you released the output.

Three Sub-Competencies:

Creation Diligence: Are you using AI ethically throughout the process? Are you aware of biases, stakeholder impacts, and potential harms? Are you mitigating risks as you work?

Transparency Diligence: Are you honest about AI's role in creating the work? Do you understand audience, industry, and legal expectations around AI-generated content?

Deployment Diligence: Are you verifying and vouching for outputs before release? Are you fact-checking, testing accuracy, validating claims? Are you implementing safety checks for agent-based systems?

Diligence isn't about adding ethics lectures to training. It's about building ethical practice into the competencies themselves—recognizing that responsible AI use requires specific skills, not just good intentions.
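One way to operationalize deployment diligence is to treat it as a release gate the work must pass before anyone else sees it. The checklist below is a minimal sketch under my own assumptions about what the checks might be; the framework names the competency, not these specific fields.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReleaseCheck:
    claims_verified: bool          # every factual claim traced to a source you checked
    ai_role_disclosed: bool        # transparency appropriate to audience and context
    tested_with_real_inputs: bool  # output or agent exercised beyond the happy path
    human_signoff: Optional[str]   # the named person vouching for the work

def ready_to_release(check: ReleaseCheck) -> bool:
    # Deployment diligence as a hard gate: if you can't vouch for it, it doesn't ship.
    return (
        check.claims_verified
        and check.ai_role_disclosed
        and check.tested_with_real_inputs
        and check.human_signoff is not None
    )
```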

Why This Framework Works

The Dakan-Feller framework succeeds where others fail because it optimizes for practical competence rather than comprehensive coverage.

Platform Agnostic: Works regardless of which AI tools you use. When GPT-5 or Claude 4 or Gemini Ultra ships, the competencies remain relevant. Tools change, but delegation, description, discernment, and diligence don't.

Contextual and Flexible: Describes effective action rather than prescribing rigid processes. Compatible with other skills taxonomies. Adapts to different professional contexts without requiring complete restructuring.

Ethics Centered: Treats ethics as a fundamental competency, not an optional module. Recognizes that responsible AI use requires specific skills, not just awareness. Gives diligence equal weight with the technical competencies.

Grounded in Practice: Built from actual course delivery and faculty workshops, not theoretical speculation. Reflects observed patterns of successful human-AI interaction, not idealized workflows.

Actionable at Multiple Levels: Works for curriculum design, assessment frameworks, professional development, and self-directed learning. Scales from individual skill development to organizational capability building.

Most importantly, the framework acknowledges what AI actually is in practice—not artificial general intelligence, not a reasoning engine, but a thinking partner whose effectiveness depends entirely on human competence in working with it.

Framework Principle: AI's potential as a thinking partner can only be realized through development and performance of specific human competencies. The tool doesn't make you fluent—the competencies do.

As AI capabilities advance, the competencies required to use them effectively will matter more, not less. Organizations that treat AI literacy as "learn to prompt" will fall behind organizations that develop genuine fluency across delegation, description, discernment, and diligence.

The Framework for AI Fluency provides a practical structure for building that fluency—in education, in professional development, in organizational capability building. It's not the only possible framework, but it's one that actually works because it focuses on what matters: human competencies that enable effective human-AI collaboration.


Framework Attribution

The Framework for AI Fluency was developed by:

  • Rick Dakan, Professor of Creative Writing and AI Coordinator, Ringling College of Art and Design (rdakan@c.ringling.edu)
  • Joseph Feller, Professor of Information Systems and Digital Transformation, Cork University Business School, University College Cork, Ireland (jfeller@ucc.ie)

Framework Version 1.1 (January 13, 2025)
License: CC BY-NC-ND 4.0
Full framework available at: https://ringling.libguides.com/ai/framework

This article interprets and analyzes the Framework for AI Fluency. All interpretations and commentary are the author's own. For the authoritative framework documentation, refer to the original source materials.