
Artificial Intelligence has become an integral part of modern software engineering education, transforming how students learn and apply concepts. In ICS 314, AI tools such as GitHub Copilot and ChatGPT have bridged gaps in understanding, accelerated coding, and introduced new ways of problem-solving. Throughout the semester, I leveraged AI extensively, primarily Copilot for in-editor assistance and various GPT models for broader tasks, to enhance my workflow. This essay reflects on that journey, analyzing how AI shaped my experiences and insights.
AI’s role in education is evolving rapidly, especially in technical fields like software engineering, where tools can simulate expert guidance, generate code snippets, and explain complex concepts. In ICS 314, AI wasn’t just a novelty; it was a productivity multiplier. I used GitHub Copilot for real-time code suggestions, ChatGPT (including GPT-4 and GPT-5 variants) for planning and debugging, Claude 3.5 Sonnet for detailed explanations, and even experimented with Haiku and GPT Codex for specialized tasks. These tools helped me tackle assignments efficiently, but they also required learning prompt engineering to get reliable results.
I used AI for nearly every aspect of ICS 314, treating it as a collaborative partner rather than a crutch. It allowed me to focus on high-level design while delegating repetitive coding. Here’s how it played out for each course element:
Experience WODs (e.g., E18): For functional programming WODs, I’d paste the instructions into Copilot or ChatGPT, adding specifics like “Use Underscore.js and ensure it handles edge cases like empty arrays.” This was extremely useful; AI provided a starting point, saving hours. Benefits: speed and accuracy for boilerplate. Costs: sometimes required tweaks for exact requirements, but overall it made WODs manageable without frustration.
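To illustrate, here is a sketch of the kind of solution those prompts produced; the task, function name, and data are invented for this essay, not taken from an actual WOD, and it uses plain JavaScript so it runs without Underscore installed:

```javascript
// Hypothetical WOD-style task: total a numeric field across an array of
// objects, returning 0 for an empty array (the edge case I prompted for).
function totalCredits(courses) {
  // reduce with an initial value of 0 handles the empty-array case cleanly
  return courses.reduce((sum, course) => sum + course.credits, 0);
}

const courses = [
  { name: 'ICS 314', credits: 3 },
  { name: 'ICS 311', credits: 3 },
];
console.log(totalCredits(courses)); // 6
console.log(totalCredits([]));      // 0
```

Having AI produce this shape of answer meant the remaining work was checking it against the WOD’s exact requirements rather than writing it from scratch.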
In-class Practice WODs: Similar to Experience WODs, I’d use Copilot in VS Code for live suggestions during practice. Prompt: “Implement this practice WOD using React components.” Useful for quick iterations; benefits outweighed costs as it reinforced concepts through hands-on application.
In-class WODs: Copilot was my go-to, with prompts like “Complete this in-class WOD: [instructions], ensuring it matches the professor’s notes.” Highly useful under time pressure; it helped me finish on time, though I’d double-check outputs to avoid subtle bugs.
Essays: For assignments like this one, I’d outline in ChatGPT with prompts like “Write an essay on AI use in ICS 314 based on these points: [my notes].” Extremely useful for structuring thoughts; benefits: Organized content quickly. Costs: Required editing for personal voice, but it evolved my prompt engineering skills.
Final project: Copilot generated much of Tithr’s code, with prompts like “Create a Next.js component for budget allocation with these features.” Useful for rapid prototyping. Benefits: efficiency in building features. Costs: visual tweaks were hard, leading to manual adjustments.
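As a rough sketch of the budget-allocation logic involved (the function name, categories, and percentages here are illustrative, not Tithr’s actual code):

```javascript
// Simplified budget allocation: split an income across categories by
// fraction, rounding each amount to cents. Hypothetical example, not the
// real Tithr implementation.
function allocateBudget(income, percentages) {
  // percentages: { category: fraction }, fractions summing to at most 1
  return Object.fromEntries(
    Object.entries(percentages).map(([category, fraction]) => [
      category,
      Math.round(income * fraction * 100) / 100, // round to cents
    ])
  );
}

const plan = allocateBudget(3000, { rent: 0.4, food: 0.2, savings: 0.1 });
console.log(plan); // { rent: 1200, food: 600, savings: 300 }
```

AI produced this kind of pure logic almost instantly; the hard part, as noted above, was the surrounding UI.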
Learning a concept / tutorial: I’d ask GPT-5: “Explain functional programming in JavaScript with examples.” Useful for quick overviews; benefits: Accessible explanations. Costs: Sometimes oversimplified, so I’d supplement with readings.
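A typical explanation came back with a chained example along these lines (the data is invented for illustration):

```javascript
// Functional style in JavaScript: describe the transformation with
// filter/map/reduce instead of writing explicit loops.
const grades = [88, 92, 54, 70, 100];

const curvedAverage = grades
  .filter((g) => g >= 60)                 // keep passing grades
  .map((g) => Math.min(g + 5, 100))       // curve, capped at 100
  .reduce((sum, g, _, arr) => sum + g / arr.length, 0); // average

console.log(curvedAverage); // 91.25
```

Seeing the pipeline spelled out like this made the concept click faster than the readings alone, though the readings filled in the theory the AI glossed over.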
Answering a question in class or in Discord: I rarely participated here, so I didn’t use AI. If I had, I’d use GPT-5 for drafting responses, but I preferred forming answers myself to ensure authenticity.
Asking or answering a smart-question: For smart-questions, I’d use AI to refine queries: “Rephrase this question to be more specific.” Useful for clarity; benefits: Better engagement. Costs: Minimal, as it was quick.
Coding example (e.g., “give an example of using Underscore .pluck”): I’d ask Copilot or ChatGPT: “Example of Underscore .pluck on an array of objects.” Extremely useful. Benefits: instant examples. Costs: none significant.
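For instance, the kind of answer that prompt returns, shown with the native equivalent so the snippet runs even without Underscore installed:

```javascript
// _.pluck extracts one property from every object in an array.
// With Underscore: _.pluck(stooges, 'name') → ['moe', 'larry', 'curly']
const stooges = [
  { name: 'moe', age: 40 },
  { name: 'larry', age: 50 },
  { name: 'curly', age: 60 },
];

// Native equivalent of _.pluck(stooges, 'name'):
const names = stooges.map((stooge) => stooge.name);
console.log(names); // [ 'moe', 'larry', 'curly' ]
```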
Explaining code: After AI-generated code, I’d prompt: “Add detailed comments explaining this function.” Useful for documentation; benefits: Clearer code. Costs: Time to review comments.
Writing code: Copilot handled most: “Write a function to validate email in JavaScript.” Useful for productivity; benefits: Faster development. Costs: Debugging unexpected behaviors.
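A representative snippet of what that prompt yields; note that the regex is a pragmatic sanity check, not full RFC 5322 email validation:

```javascript
// Basic email shape check: something@something.something, no whitespace,
// exactly one '@'. Intentionally simple, as Copilot-style suggestions are.
function isValidEmail(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(isValidEmail('user@hawaii.edu')); // true
console.log(isValidEmail('not-an-email'));    // false
```

Functions like this were exactly the “delegate the boilerplate” cases where AI paid off most, though I still had to test edge cases myself.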
Documenting code: AI added comments: “Document this React component.” Useful for maintainability; benefits: Consistent docs. Costs: Occasionally verbose.
Quality assurance (e.g., “What’s wrong with this code?”): GPT Codex: “Fix ESLint errors in this code snippet.” Very useful. Benefits: quick fixes. Costs: sometimes missed context-specific issues.
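A small sketch of one such fix, here ESLint’s eqeqeq rule (the example is invented for illustration):

```javascript
// eqeqeq flags loose equality because '==' coerces types:
// '0' == 0 is true, while '0' === 0 is false.
const input = '0';

// Before (flagged): if (input == 0) { ... }
// After: convert explicitly, then compare strictly.
const isZero = Number(input) === 0;
console.log(isZero); // true
```

Mechanical fixes like this were where the AI was reliable; the misses came when a rule violation masked a deeper design problem it couldn’t see.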
Other uses in ICS 314 not listed: For GitHub issues or project planning, I’d use AI to brainstorm: “Outline steps for implementing user auth.” Useful for organization; benefits: Structured planning.
Overall, AI was a game-changer for efficiency, but it challenged me when outputs deviated from expectations, teaching persistence.
AI enhanced my practical skills immensely, especially in prompt engineering and tool mastery, but it somewhat reduced my deep dive into core concepts. I didn’t feel the need to memorize syntax or algorithms because AI provided them on demand. This challenged my understanding—why learn something AI can generate? Yet, it boosted skill development by emphasizing problem-solving over rote memorization. AI helped me learn resilience: when code failed, I’d iterate prompts or debug manually, reinforcing debugging skills.
Outside ICS 314, I built a personal financial planner website using AI, prompting GPT-5: “Create a full-stack app for budgeting.” It was very effective for generating code and solving real-world problems like data validation. However, visual tweaks were tough, requiring manual CSS adjustments. This mirrored Tithr’s development, where AI excelled at logic but struggled with UI polish.
Challenges included prompt limits (e.g., on free tiers), AI “loops” where it repeated errors, and unexpected outputs wasting time. Opportunities lie in better integration: dedicated prompt engineering modules could teach students to harness AI effectively, making education more adaptive.
Traditional methods rely on lectures and manuals, fostering deep knowledge but slower progress. AI-enhanced approaches accelerate learning through interactive, on-demand assistance, improving engagement and retention via hands-on application. However, they risk superficial understanding if not balanced with critical thinking.
AI will likely play a dominant role in software engineering education, with advancements like more context-aware models reducing errors. Challenges include over-reliance; improvements could include AI ethics training. Educators should embrace AI, designing assignments that leverage it for creativity rather than banning it.
AI transformed my ICS 314 experience, making me efficient and skilled in modern tools. While it sometimes hindered deep learning, the benefits of speed, accessibility, and innovation outweigh the costs. I recommend integrating AI training into courses, encouraging students to view it as a partner, not a shortcut. As AI evolves, so should our approach to education.
Small print on AI usage: I used ChatGPT (GPT-5) to generate and structure this essay based on my provided points and the required outline. I reviewed and edited the content to ensure it reflects my personal experiences and voice.