ME6208 - Group Project
Objective:
The purpose of this project is to apply AI tools, frameworks, and methodologies to build a functional prototype that demonstrates the potential of AI in different domains. Students will work in groups to develop, document, and present their project using a structured approach similar to a technical whitepaper.
Project Deliverables:
Each group must submit the following:
1. Written Report (3,000–4,000 words, excluding references and appendices)
o A structured document explaining the background, methodology, implementation, evaluation, and impact of the project.
2. Code/Workflow (if applicable)
o A well-documented GitHub repository containing the code, scripts, or workflow definitions (e.g., JSON files for automation).
o If building upon an existing repository, clearly document:
§ What modifications or improvements were made.
§ Implementation details and key learnings.
§ Challenges encountered and how they were addressed.
§ Important considerations for future development.
3. Demo Recording (5–10 minutes)
o A recorded walkthrough of the developed project, demonstrating key functionalities and outcomes.
Project Report Structure
1. Introduction
· Background: Explain the context of the problem and why it is important in the field of AI.
· Motivation: Why did you choose this specific topic? What gap or need does it address?
· Research & Industry Relevance: How does this project relate to current AI trends, business applications, or social impact?
2. Project Objectives & Contribution
· Clearly state what the project aims to achieve.
· Highlight the unique aspects of your work (e.g., improvement over existing methods, novel implementation).
· Define measurable success criteria.
3. Implementation & Development
· Technical Stack: List the tools, frameworks, and models used (e.g., OpenManus, OpenAI Agents SDK, Whisper, LangChain).
· Development Process: Explain how you built the project step by step.
· Challenges & Solutions: Discuss any roadblocks encountered and how they were resolved.
4. Evaluation & Results
· If applicable, compare different methods/frameworks used in your project.
· Provide qualitative/quantitative evaluation metrics.
· Include performance benchmarks or user feedback if relevant.
5. MVP Demo & Future Work
· Provide a link to your recorded demo.
· Discuss potential improvements or future directions for your project.
6. References & Appendices
· Cite all external sources, datasets, and frameworks.
· Include any supplementary materials such as additional figures, detailed logs, or technical diagrams.
Project Topics
Topic 1: Evaluating Open-Source AI Agent Frameworks
· Goal: Compare different AI agent frameworks in terms of task automation capability and performance.
· Tasks:
o Deploy and test OpenManus (https://github.com/mannaandpoem/OpenManus).
o Explore alternative open-source frameworks, such as the OpenAI Agents SDK (https://openai.github.io/openai-agents-python/), to develop similar AI agents.
o Implement and evaluate different tasks using these agent frameworks.
o Define an evaluation framework (e.g., accuracy, latency, usability) to compare results; a minimal timed test run is sketched after this list.
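For reference, the sketch below shows one way a single benchmark task could be run through the OpenAI Agents SDK while recording latency. It follows the SDK's published quickstart pattern (Agent/Runner with run_sync); treat the exact class and method names as assumptions to verify against the linked documentation. An OPENAI_API_KEY must be set in the environment, and the task string is only an example.

import time
from agents import Agent, Runner  # pip install openai-agents; requires OPENAI_API_KEY

# Define a simple agent to run one benchmark task.
agent = Agent(
    name="ResearchAssistant",
    instructions="Answer the task concisely and accurately.",
)

task = "Summarise the key differences between supervised and reinforcement learning."

start = time.perf_counter()
result = Runner.run_sync(agent, task)      # blocking call; async variants also exist
latency_s = time.perf_counter() - start

# Record the raw output and latency so the same task can be replayed on OpenManus
# (or another framework) and compared under an identical protocol.
print(f"latency: {latency_s:.2f}s")
print(result.final_output)

Repeating this loop over a fixed task list for each framework yields directly comparable accuracy and latency figures for the evaluation framework described above.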
Topic 2: AI for Social, Business, or Personal Impact
· Goal: Leverage AI-generated content (AIGC) tools to create something meaningful.
· Tasks:
o Use existing AI models (open-source or proprietary) to build a product with real-world value.
o Examples:
§ Use Whisper or similar speech-to-text models to transcribe and archive conversations with family members (like the Bao Xiaobo project); a minimal transcription sketch follows this list.
§ Develop an AI influencer capable of generating social media content and engaging with users.
o Deliverables must include a working demo and an explanation of the development process.
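As an illustration of the Whisper example above, the sketch below transcribes one recorded conversation with the open-source whisper package and archives it with segment-level timestamps. The file names and model size are placeholders, not requirements.

import json
import whisper  # pip install openai-whisper

model = whisper.load_model("base")                 # larger models trade speed for accuracy
result = model.transcribe("family_interview.mp3")  # placeholder audio file

# Keep segment-level timestamps so conversations can be searched and archived later.
archive = [
    {"start": seg["start"], "end": seg["end"], "text": seg["text"].strip()}
    for seg in result["segments"]
]
with open("family_interview.json", "w", encoding="utf-8") as f:
    json.dump(archive, f, ensure_ascii=False, indent=2)

print(result["text"][:200])  # quick sanity check of the full transcript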
Topic 3: AI-Powered Automation for Productivity
· Goal: Automate repetitive tasks using AI-driven workflows.
· Tasks:
o Identify a task that can be automated, such as research literature reviews, AI-generated content for different platforms (text for Twitter, images for Xiaohongshu, podcasts), assignment grading, etc.
o Develop a pipeline/workflow to fully automate the process (a minimal pipeline sketch follows this list).
o Provide an evaluation of time saved, performance accuracy, and practical usability.
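As a starting point, the sketch below shows a minimal two-stage pipeline that summarises a folder of paper abstracts into a draft literature review. The OpenAI client is used purely for illustration (any LLM backend can be substituted), and the model name and file paths are placeholders.

from pathlib import Path
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def summarise(text: str) -> str:
    """Ask the model for a three-sentence summary of one abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarise the abstract in three sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

summaries = []
for path in sorted(Path("abstracts").glob("*.txt")):   # one abstract per file
    summaries.append(f"## {path.stem}\n{summarise(path.read_text())}")

Path("literature_review_draft.md").write_text("\n\n".join(summaries))

Timing this pipeline against a manual baseline gives a direct measure of the time saved, which feeds into the evaluation requested above.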
Topic 4: Experimenting with AI Agent Societies & Multi-Agent Systems
· Goal: Understand, experiment with, and extend AI agent societies or frameworks.
· Tasks:
o Start by exploring one of the following frameworks:
§ AI Agent Society: https://github.com/tsinghua-fib-lab/agentsociety
§ Archon: https://github.com/coleam00/Archon
o First, run the framework and conduct an experiment to analyze how the system works.
o Then, extend the framework by:
§ Running a specific experiment to study multi-agent interactions (see the framework-agnostic sketch after this list).
§ Developing a custom AI agent or functionality using the framework.
o Document key insights, challenges, and potential applications.
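To make the expected experiment concrete, the framework-agnostic sketch below runs a simple two-agent (planner/critic) interaction loop and keeps the transcript for analysis. AgentSociety and Archon provide their own, richer agent abstractions; this only illustrates the kind of turn-taking experiment they support. The model name, personas, and prompt are placeholders.

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def reply(persona: str, conversation: list[dict]) -> str:
    """One agent's turn: respond given its persona and the shared history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": persona}] + conversation,
    )
    return response.choices[0].message.content

personas = {
    "Planner": "You propose concrete plans for organising a community event.",
    "Critic": "You point out risks and suggest improvements to proposed plans.",
}

conversation = [{"role": "user", "content": "Plan a weekend coding workshop."}]
for _ in range(3):                       # three planner/critic rounds
    for name, persona in personas.items():
        message = reply(persona, conversation)
        conversation.append({"role": "user", "content": f"{name}: {message}"})

for turn in conversation:
    print(turn["content"][:120])         # inspect the interaction transcript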
Evaluation Criteria (100 points total)
1. Problem Definition & Background (15 points)
o Clear explanation of the project scope, relevance, and research/industry context.
o Justification of why this problem is important and what gap it addresses.
2. Technical Implementation (20 points)
o Effective application of AI models, frameworks, and tools.
o Thoughtful modifications or integrations with existing solutions.
o Code quality, structure, and documentation.
3. Innovation & Contribution (20 points)
o Novelty of the approach or improvement over existing solutions.
o Creativity in implementation and real-world applicability.
4. Evaluation & Analysis (15 points)
o Well-defined evaluation framework (e.g., benchmarks, comparisons, user testing).
o Critical analysis of results and insightful conclusions.
5. Demo & Documentation (15 points)
o Clarity and effectiveness of the recorded demo, showcasing key functionalities.
o Completeness and readability of the written report and GitHub documentation.
6. Presentation & Communication (15 points)
o Organization, clarity, and persuasiveness of the final presentation.
o Ability to explain technical and conceptual aspects to an audience.
o Engagement during Q&A, demonstrating deep understanding.