Challenging Myself: Writing Unit Tests with ChatGPT

The science-fiction future is upon us as we find ourselves on the precipice of an AI revolution. As with every technological milestone, from the wheel to the personal computer, what we can do and how fast we can do it shift toward ever-increasing productivity. While the philosophers and keyboard warriors among us debate the philosophical ramifications of AI, here at Big Nerd Ranch, we concern ourselves with more practical matters.
In this series, I plan on investigating ChatGPT’s answer to one of the oldest questions in human history: How can I make my life easier?
Recognizing that a robust suite of unit tests is crucial for maintaining a healthy codebase isn’t rocket surgery. But let’s face it – writing tests can be tedious at times. If you’re anything like me, you’ve probably caught yourself daydreaming about a trusty assistant to either handle it all for you or, at the very least, lend a helping hand. We’re going to put ChatGPT through some tests today, starting with something easy to calibrate ourselves and moving on to testing a React reducer and a React component.
Let’s walk before we run, shall we? We’ll ask it to write some unit tests for a recursive function and see if it gets tripped up. The simplest recursive function I can think of is the factorial, something we should all have seen in CS101:
function factorial(n) {
  if (n === 0 || n === 1) {
    return 1;
  } else {
    return n * factorial(n - 1);
  }
}
Simple enough, right? Now we’ll ask for some unit tests:
Prompt: Can you write me some unit tests for the following JavaScript function:
take a look at("factorial of 0 is 1", () => {
anticipate(factorial(0)).toBe(1)
})
take a look at("factorial of 1 is 1", () => {
anticipate(factorial(1)).toBe(1)
})
take a look at("factorial of 5 is 120", () => {
anticipate(factorial(5)).toBe(120)
})
take a look at("factorial of unfavorable quantity throws an error", () => {
anticipate(() => factorial(-1)).toThrow()
})
Impressive: ChatGPT not only accounted for the special case of 0 factorial but also considered the possibility of negative numbers as input.
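One nuance worth noting: as written, factorial never explicitly rejects negative input. factorial(-1) simply recurses until the call stack overflows, and it is that RangeError that makes toThrow() pass. If we wanted that behavior to be deliberate, a small guard clause would do. This is a sketch on our part, not something ChatGPT suggested:

function factorial(n) {
  // Reject negative input explicitly instead of relying on a stack overflow.
  if (n < 0) {
    throw new Error("factorial is not defined for negative numbers");
  }
  if (n === 0 || n === 1) {
    return 1;
  }
  return n * factorial(n - 1);
}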
That gave us a decent idea of how it handles writing tests for a simple recursive function. Now that we’re calibrated, let’s switch gears to a more practical example.
The following code snippet is a React reducer for an online shopping cart:
const cartReducer = (state, action) => {
  switch (action.type) {
    case "ADD_ITEM":
      return {
        ...state,
        items: [...state.items, action.payload],
        total: state.total + action.payload.price,
      };
    case "REMOVE_ITEM":
      const itemToRemove = state.items.find(
        (item) => item.id === action.payload.id
      );
      return {
        ...state,
        items: state.items.filter((item) => item.id !== action.payload.id),
        total: state.total - itemToRemove.price,
      };
    case "CLEAR_CART":
      return {
        ...state,
        items: [],
        total: 0,
      };
    default:
      throw new Error(`Invalid action type ${action.type}`);
  }
};
We can identify several tests that should be written just by skimming through it. At a minimum, we’d want tests for adding items to the cart, removing them, and emptying the cart. Before we start writing anything ourselves, let’s turn to our trusty assistant, ChatGPT, and see if it agrees with us.
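For reference, the rough checklist we have in mind might look something like this as Jest placeholders (the names are ours, chosen for illustration, not anything ChatGPT produced):

// Placeholder names only – the cases we expect any reasonable suite to cover.
describe("cartReducer", () => {
  it.todo("adds an item and updates the total");
  it.todo("removes an item and updates the total");
  it.todo("clears the cart");
  it.todo("throws on an unknown action type");
});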
Prompt: Can you write me some unit tests for the following React reducer:
This is what ChatGPT generated:
describe("cartReducer", () => { let initialState; beforeEach(() => { initialState = { objects: [], whole: 0 }; }); it("ought to deal with ADD_ITEM motion", () => { const motion = { kind: "ADD_ITEM", payload: { id: 1, title: "Merchandise 1", value: 10 } }; const newState = cartReducer(initialState, motion); anticipate(newState.objects.size).toEqual(1); anticipate(newState.objects[0]).toEqual(motion.payload); anticipate(newState.whole).toEqual(motion.payload.value); }); it("ought to deal with REMOVE_ITEM motion", () => { const itemToRemove = { id: 1, title: "Merchandise 1", value: 10 }; const motion = { kind: "REMOVE_ITEM", payload: itemToRemove }; const state = { objects: [itemToRemove], whole: itemToRemove.value }; const newState = cartReducer(state, motion); anticipate(newState.objects.size).toEqual(0); anticipate(newState.whole).toEqual(0); }); it("ought to deal with CLEAR_CART motion", () => { const state = { objects: [{ id: 1, name: "Item 1", price: 10 }], whole: 10 }; const motion = { kind: "CLEAR_CART" }; const newState = cartReducer(state, motion); anticipate(newState.objects.size).toEqual(0); anticipate(newState.whole).toEqual(0); }); it("ought to throw an error for invalid motion sorts", () => { const motion = { kind: "INVALID_ACTION" }; anticipate(() => cartReducer(initialState, motion)).toThrowError( "Invalid motion kind INVALID_ACTION" ); }); it("ought to throw an error for invalid motion sorts", () => { anticipate(() => cartReducer(initialState)).toThrowError( "Invalid motion kind INVALID_ACTION" ); }); });
Not only did it whip up tests for adding, removing, and emptying the cart, but it also included tests for handling empty and invalid actions. Now all that’s left is to run them. However, keep in mind that since ChatGPT doesn’t have any knowledge of our specific file structure, we may need to make some minor adjustments in order to get everything working smoothly. A small price to pay, I’d say.
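Typically that just means wiring up the import at the top of the generated test file. For example, assuming the reducer is exported from a cartReducer.js module next to the test (your paths and export style will differ):

// cartReducer.test.js – adjust the path to wherever the reducer actually lives
import { cartReducer } from "./cartReducer";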
Let’s run these tests.
Looks like everything passed except for one test. Looking through the cartReducer code, we can see why that test failed: the code we originally wrote wasn’t built to handle the case where an empty action is passed to it, so we get a null reference exception when we attempt to throw the invalid action error. While it may have been a mistake on ChatGPT’s part to create that unit test, it did alert us to a potential unhandled exception in our cartReducer code.
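If we wanted to harden the reducer against that case, one option is to bail out early when no action is supplied. This is only a sketch of the idea, not part of the original code, and the test’s expected error message would need to be adjusted to match:

const cartReducer = (state, action) => {
  // Guard against a missing action so callers get a meaningful error
  // instead of a null reference when action.type is read below.
  if (!action || !action.type) {
    throw new Error("Invalid action");
  }

  switch (action.type) {
    // ...the existing ADD_ITEM, REMOVE_ITEM, and CLEAR_CART cases stay unchanged
  }
};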
Let’s feed it a React component and see how it does. Here we have a simple Banner component:
import React from "react"; import PropTypes from "prop-types"; const Banner = ({ title, subtitle, picture, hidden }) => { return ( <div className="banner"> <img src={picture} alt={title} hidden={hidden} /> <div className="banner__content"> <h1 className="banner__title">{title}</h1> <p className="banner__subtitle">{subtitle}</p> </div> </div> ); }; Banner.propTypes = { title: PropTypes.string.isRequired, subtitle: PropTypes.string.isRequired, picture: PropTypes.string.isRequired, hidden: PropTypes.bool.isRequired, }; export default Banner;
Let’s see how ChatGPT handles it.
Prompt: Can you write me some unit tests for the following React component:
import React from "react"; import { render, display } from "@testing-library/react"; import Banner from "./Banner"; describe("Banner element", () => { it("renders the title", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const titleElement = display.getByText("Check Title"); anticipate(titleElement).toBeInTheDocument(); }); it("renders the subtitle", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const subtitleElement = display.getByText("Check Subtitle"); anticipate(subtitleElement).toBeInTheDocument(); }); it("renders the picture with the right alt textual content", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const imageElement = display.getByAltText("Check Title"); anticipate(imageElement).toBeInTheDocument(); anticipate(imageElement).toHaveAttribute("src", "take a look at.png"); }); it("hides the picture when hidden prop is true", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={true} /> ); const imageElement = display.getByAltText("Check Title"); anticipate(imageElement).not.toBeInTheDocument(); }); });
We’ll go ahead and run these tests.
A single test failed. ChatGPT assumed that toBeInTheDocument would return false when an element is hidden. toBeInTheDocument checks whether the element exists in the document tree, regardless of visibility, so in this case it is actually returning true. ChatGPT should have used toBeVisible instead. Let’s go ahead and make that change and see if we can get that test to pass.
import React from "react"; import { render, display } from "@testing-library/react"; import Banner from "./Alert"; describe("Banner element", () => { it("renders the title", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const titleElement = display.getByText("Check Title"); anticipate(titleElement).toBeInTheDocument(); }); it("renders the subtitle", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const subtitleElement = display.getByText("Check Subtitle"); anticipate(subtitleElement).toBeInTheDocument(); }); it("renders the picture with the right alt textual content", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={false} /> ); const imageElement = display.getByAltText("Check Title"); anticipate(imageElement).toBeInTheDocument(); anticipate(imageElement).toHaveAttribute("src", "take a look at.png"); }); it("hides the picture when hidden prop is true", () => { render( <Banner title="Check Title" subtitle="Check Subtitle" picture="take a look at.png" hidden={true} /> ); const imageElement = display.getByAltText("Check Title"); anticipate(imageElement).not.toBeVisible(); }); });
There we have it: the tests are all passing!
As with all AI-powered chatbots, certain limitations exist.
Let’s identify a few of those weaknesses so we can sleep more soundly at night:
- Lack of Creativity: Chatbots may be capable of mimicking human-like conversation, but they lack true creativity. They are machines and cannot produce genuinely original output or come up with novel design patterns or hyper-efficient algorithms.
- Reliance on Training Data: The accuracy of an AI is heavily dependent on the quality of its training data. If that data is out of date or inaccurate, it will severely affect the performance of the AI and, consequently, its output.
- Need for Human Correction: Although a clever prompt engineer may be able to coax decent output from the AI, this article demonstrated some of the mistakes ChatGPT can make. This presents an interesting trade-off, as I suspect there is a relationship between the complexity of our code and the kinds of mistakes ChatGPT will make. Catching them will require a keen eye. It was fortunate that the mistakes ChatGPT made in this exercise were easy to spot.
While ChatGPT may not be ready to replace humans just yet, it’s clear that AI has the potential to revolutionize the way we live and work. As we continue to develop and use these tools, we can expect AI to become even more intelligent and capable. That presents an opportunity for developers to focus on the more challenging aspects of coding while leaving the repetitive tasks to our AI assistants.