We live in a post-AI world: AI is integrated into many facets of our workflows. It has become part of website chatbots, is used in search algorithms, and sees increasing use in writing code. Even in education, students use AI to learn concepts, complete homework, or write essays. There is a multitude of AI models out there, most notably ChatGPT, Google Gemini, Microsoft Copilot, and Claude. Even so, many classes still disallow these tools, and some go as far as using “AI checkers” whose claims of correctly identifying AI-generated content are dubious. Throughout this software engineering course, I first made use of Copilot, but later switched my AI tool of choice to Claude.
To get used to working with different technologies, including Next.js, HTML, CSS, Bootstrap 5, and TypeScript, the class had “experience WODs”: assignments that had us working on functions, web pages, or other aspects of coding and development to familiarize ourselves with the content we were learning. Each had to be completed within a set amount of time. Because these assignments were entirely self-study, we could reattempt them as many times as we wanted until we finished within that time. I didn’t particularly use AI for these since they were extremely low stakes, and I didn’t benefit from using it when I could always reattempt a WOD if I failed.
In addition to experience WODs, there were also in-class practice WODs, which were, as the name suggests, done in class. These were similar to experience WODs, but came without the ability to redo the WOD and had to be completed within the class period. That also meant that whatever you turned in would be graded. Fortunately, these weren’t weighted too heavily, so they were a semi-low-stakes experience, focused more on timing than on completion and correctness. For these WODs, I did use AI to help with the more time-consuming tasks, like generating test cases and writing code snippets based on the framework I already had. I would ask a prompt like “generate a code snippet that does (task)” or “create edge test cases for the following algorithm.” I also used AI for error correction, making sure my final product had no major errors.
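To give a sense of what those edge-case prompts produced, here is a minimal sketch. The function under test is hypothetical, a stand-in for the WOD algorithms rather than an actual assignment:

```typescript
// Hypothetical WOD-style algorithm: return the index of the smallest value,
// or -1 for an empty array.
function indexOfMin(values: number[]): number {
  let best = -1;
  for (let i = 0; i < values.length; i++) {
    if (best === -1 || values[i] < values[best]) best = i;
  }
  return best;
}

// The kinds of edge cases an AI prompt would typically come back with:
console.assert(indexOfMin([]) === -1, "empty input");
console.assert(indexOfMin([5]) === 0, "single element");
console.assert(indexOfMin([2, 2, 2]) === 0, "ties resolve to first index");
console.assert(indexOfMin([3, -1, 4]) === 1, "negative values");
```

Cases like the empty array and the all-ties input are exactly the ones I tended to forget under time pressure, which is why generating them was worth the prompt.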
The practice WODs were a step below the in-class WODs, which held more weight in the gradebook and were also tested more rigorously. These were higher stakes, as we were unable to redo the WOD, unlike the practice and experience WODs. To account for this, I used AI to check for errors to save time, and also had it create the framework for other parts of the WOD. For example, on the UI design page, which required multiple pages to be interconnected, I used Copilot to help generate a framework for page navigation. This came at a time when we were learning HTML and TypeScript, and I prompted the AI to “generate a code snippet that will connect the current page with (page in question)”. Overall, I feel I relied a little too heavily on AI in these in-class WODs, as I don’t feel I mastered the content as well as I should have.
There were several essays throughout the semester that required me to write about various aspects of software engineering and my experiences with them in this course. I never used AI to fully write my essays, as I felt that would defeat the point of having an essay portion at all. Instead, I used it for grammar checking and outlining, with infrequent use of generative AI for brainstorming ideas.
I used Claude a decent amount throughout the final project. With a group of six programmers, there was a lot of code being written and many changes that needed to be continuously monitored and checked. In connecting pages to a Vercel database especially, I leaned heavily on Claude to ensure that the API routes and everything else on the backend functioned properly. I found myself constantly prompting my Claude agent for fixes to API routes and their connected components, asking it to validate and rewrite these routes in a cleaner and more scalable way. For example, a prompt I used for a messaging component was “verify and fix the route for (component or page) as it’s throwing an internal server error (x).” However, the AI had no awareness of ESLint errors, even when prompted, so a lot of the code it produced had to be manually checked. Overall, the use of AI in this project helped expedite my work and allowed us to put out a product within the short time we had.
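To show the kind of fix those prompts produced, here is a minimal sketch of the defensive pattern the AI tended to apply to failing routes. The handler and its names are hypothetical, loosely modeled on a Next.js-style API route but reduced to plain TypeScript so the idea stands on its own:

```typescript
// A route handler reduced to a plain function: it takes a lookup callback
// (standing in for the real database query) and returns an HTTP-like result
// instead of letting an exception bubble up as an opaque 500.
type ApiResult = { status: number; body: unknown };

function handleGetMessage(
  fetchMessage: (id: string) => string | undefined,
  id: string
): ApiResult {
  try {
    const message = fetchMessage(id);
    if (message === undefined) {
      // Missing data is a 404, not a crash.
      return { status: 404, body: { error: "Message not found" } };
    }
    return { status: 200, body: { message } };
  } catch {
    // Without this catch, any thrown error surfaces as an
    // "internal server error" with no useful detail.
    return { status: 500, body: { error: "Internal server error" } };
  }
}
```

Wrapping the body in try/catch and returning explicit status codes was what turned the opaque internal server errors I was reporting into failures we could actually debug.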
I didn’t really use AI to learn concepts, as I didn’t feel a need for it. A lot of the resources needed to learn concepts were covered throughout the class materials, leaving very little “learning” to be done from any AI I used.
I used AI more in class for answering questions, specifically some of the comparison questions about different concepts learned throughout the class. That said, given the time it took to prompt an AI and find a suitable answer, someone else usually ended up answering the question satisfactorily first.
Similarly, I used AI to answer questions in class that I found particularly difficult. However, while AI wasn’t very useful for formulating smart questions to ask other students, I did find that I got better at asking smart questions of the AI itself, which helped me develop better prompting skills.
Before the final project, I didn’t use AI at all for coding examples or mockups. That was because most of the code I had AI help with was in the WODs, where I had live feedback from the terminal on whether or not something worked. In the final project, however, I found myself needing AI to create boilerplate, mockups, and even frameworks for more components. For example, I used it to generate frameworks for a lot of my comparison components, as I wasn’t sure where to even start with them: I only knew what I wanted them to do.
I used AI to explain code a lot more in the final project. As I relied more heavily on it, I still wanted to learn what went wrong in my attempt and how the AI fixed it. Claude was very helpful here, writing inline comments and providing READMEs on request that explained the fixes and the steps taken during generation.
I used AI a lot for writing code, though in very different capacities between the WODs and the final project. As stated before, in the WODs the AI was used primarily for snippets and minor tweaks, with the occasional framework prompt. In the final project, however, AI took a much larger role: creating more complex frameworks and debugging and fixing my code, in addition to the usual code snippet generation.
Since I used AI heavily in the final project, I also had it handle the documentation of the code it wrote. While some of this was inline comments, a larger portion was delivered separately: a README, produced at the end of generation, covering all the fixes that were made.
I found AI both useful and unhelpful for quality assurance. It helped identify and fix conflicting code, bad syntax, and even merge conflicts within the group’s final project. However, it was generally unhelpful in fixing ESLint errors: its fixes often generated code that was not ESLint-approved, even when I prompted it to follow ESLint rules. In the end, I resorted to adding comments that suppressed the ESLint errors the generated code caused.
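Concretely, the workaround was line-level disable comments like the one below. The rule name shown is just one common example; the actual rules I suppressed varied:

```typescript
// AI fixes often typed API payloads as `any`, which our ESLint config
// rejected. Rather than hand-rewrite every generated type, I suppressed
// the rule on the offending line.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
function describePayload(payload: any): string {
  return typeof payload === "string" ? payload : JSON.stringify(payload);
}
```

A disable comment like this silences ESLint for a single line without changing runtime behavior; it quiets the linter rather than fixing the underlying code, which is exactly why relying on it felt like the unhelpful half of the experience.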
After using AI throughout this entire semester, I think it was beneficial overall. While I feel I didn’t learn as much as I had hoped, that’s on me for getting too used to the AI and relying on it too much. Had I used it more as a tool throughout the WODs, I would have been in a much more stable place with my comprehension of the material. In other words, while I understand and can implement the concepts learned this semester, I’m a lot weaker at writing and implementing code without an AI assistant. On the other hand, I did develop a better sense of problem-solving, as I learned to prompt smarter and use my tools more effectively.
Outside of the software engineering class, I used AI for a variety of things. I found that generative AI works well for outlines and grammar checking, especially on papers where I’m not sure how to organize my thoughts. For one paper in a different class that required compiling and reporting on multiple source materials, I knew what I wanted to talk about, so I used generative AI to help me create an outline that interlinked all the concepts and made the essay flow smoothly. In addition, while I didn’t take part, I heard that the HACC (Hawaii Annual Coding Challenge) prompts were focused on integrating websites with some form of AI. In my opinion, AI is a great tool for this, as it can improve the efficiency and work capacity of software engineers who prompt it smartly.
One of the biggest problems I found with AI usage this semester is how easy it is to fall into the pitfall of overreliance. Throwing assignment prompts into AI can easily produce a functional answer, but that teaches the student nothing. I fell slightly into this trap later in the semester, and it reached a point where I gave up for a bit because I simply didn’t know why the code didn’t work; I didn’t understand what I was doing. AI writes a lot of code, and students usually accept it as good. However, a student needs to know the fundamentals so that when the code isn’t good, they can recognize it and either fix it manually or reprompt the AI to regenerate the code in a way that works. I also found that AI is what I would call a “yes-man”: if you tell it that it’s wrong and the code doesn’t work for some reason, it will agree with you and generate a whole batch of new code that may or may not fix the problem. That code may not follow ESLint standards either, which can further exacerbate the problem.
No matter what instructors and teachers say, AI is not going anywhere anytime soon. It’s becoming more ingrained in the workforce and in education, so now is the time to determine how best to utilize it, much like how students utilize calculators and other tools. Traditionally, a lot of testing in computer science classes is done with handwritten code, which just isn’t very viable anymore. AI is great at writing code, and for long chunks it will do so faster than humans can. However, AI can’t teach the fundamentals, nor can it easily explain why it codes a certain way, which is where instructors come in. Without the basics, students won’t know what to do with bad code, or even where or how to start a prompt with their AI of choice. For things like retention and engagement, though, AI can enhance the classroom experience. While it can’t teach, AI is great at simplifying information and making it easier to understand. By personally tailoring the AI experience, one can absorb material that might otherwise take multiple lectures or slideshows, saving time and allowing more time to practice the content. Both forms of teaching have their own merits, each excelling in some areas and underperforming in others, so the best approach is a hybrid of the two.
Currently, software engineering and AI agents are very much at odds. Software engineers and programmers are finding themselves slowly replaced in the workforce as AI is used to mindlessly generate code without considering the core principles of software engineering. Concepts like use cases and design patterns are ignored in exchange for the sheer volume of code AI can produce. To fix this and improve the quality of both education and code, there needs to be a healthy mix of the two. The fundamentals of software engineering, such as properly designing a use case, setting goals, and recognizing and implementing design patterns, should be taught first. Then, once the basics are covered, students can learn to implement those fundamentals in code. It’s important that they do this without AI at first, as relying on it immediately ruins the point of the exercise; the tasks should be challenging, but not written in a way that requires an AI’s help. The difficulty can then gradually increase as students are given tasks that would benefit from using AI. This gets students accustomed to the concepts to the point where they can responsibly ask AI for help, prompting it smartly with more specific requirements and, as a result, getting better code. Combined with their knowledge of the basics, students can then also recognize bad code more easily and realize that AI gets things wrong, too.
Overall, I think the use of AI in this course was generally useful, although it was also very easy to fall into an overreliance on it. AI was extremely helpful for writing code, including test cases, boilerplate, and documentation, but it was just as easy to fall into the hole of using it for everything. When prompted smartly, it was a boon to an assignment; it was a curse to those who couldn’t recognize faults in the code it created, rendering code and assignments entirely useless. It’s incredibly important to learn problem-solving and the basics before throwing everything to AI. AI is a wonderful tool and resource, but it isn’t thinking for itself; it’s more a reflection of the prompter. Especially now, with AI in such a contentious position among both proponents and opponents in education and the workplace, it’s very important to learn to use it as a helpful tool rather than a crutch that hinders one’s learning and performance in the long run.
AI disclosure: AI was used for grammar checking and mild rewording.