Build An Atom 1.9 Code Review Checklist
Introduction
In this guide, we'll walk through the code review process for the 1.9 release of Build an Atom, a crucial step in ensuring the simulation's quality, accessibility, and adherence to PhET coding standards. The checklist, known as the "CRC" (Code Review Checklist), helps reviewers examine the code thoroughly, focusing on new features like alt-input and core descriptions while also catching general issues and deviations from established guidelines. By following it, we can collectively ensure that Build an Atom 1.9 is robust, user-friendly, and meets the high standards expected of PhET simulations.
The review covers everything from initial setup and build checks to memory leak detection, performance evaluation, usability testing, internationalization, repository structure, coding conventions, and accessibility. Whether you're a seasoned code reviewer or new to the process, this article will equip you with the knowledge and tools needed to contribute effectively to the Build an Atom 1.9 release.
Pre-Review Setup: Essential GitHub Issues
Before diving into the code itself, it's crucial to ensure that several standard GitHub issues are in place and adequately addressed. If any of these issues are missing or incomplete, the code review should be paused until they are resolved by the responsible developer. This step ensures that the foundational aspects of the project are well-documented and aligned with the intended goals.
Model Documentation (model.md)
The first critical document is model.md, which provides a detailed description of the simulation's underlying model. This document should be written in a manner accessible to teachers and should accurately reflect the simulation's behavior. Reviewers should familiarize themselves with the model by thoroughly reading model.md. Key questions to consider include:
- Does the document adequately describe the model in terms appropriate for teachers?
- Has the model documentation been reviewed by the sim designer to ensure accuracy and completeness?
Implementation Notes (implementation-notes.md)
Next, implementation-notes.md offers an overview of the code's implementation, providing valuable insights for future maintainers. This document should highlight key design decisions, architectural patterns, and any potential challenges encountered during development. By reviewing this document, maintainers can gain a deeper understanding of the codebase, facilitating easier maintenance and updates in the future.
- Do the implementation notes provide a useful overview for future maintainers?
- Are there any critical implementation details that are missing or unclear?
Memory Testing Results
Memory testing is crucial for identifying and preventing memory leaks, which can significantly impact the simulation's performance and stability. The results of memory testing for both brands=phet and brands=phet-io (if applicable) should be documented in GitHub issues. Reviewers should carefully examine these results to ensure that the simulation is memory-efficient and does not exhibit any signs of memory leakage.
- Are the memory testing results for brands=phet and brands=phet-io documented?
- Do the results indicate any potential memory leaks or performance issues?
Description Review, Polishing, and Sign-Off
If the simulation includes descriptive elements, a dedicated review process is necessary to ensure that these descriptions are accurate, clear, and effective. This involves polishing the descriptions and obtaining sign-off from relevant stakeholders. This step is particularly important for accessibility, as well-written descriptions can significantly enhance the experience for users with disabilities.
Credits
Finally, the credits for the simulation should be properly documented. This typically occurs after release candidate (RC) testing. Ensuring accurate and complete credits is essential for acknowledging the contributions of all individuals involved in the project.
Essential Build and Run Checks
Before delving into the intricacies of the code, it is paramount to perform a series of basic build and run checks. If any of these checks fail, the code review process should be immediately paused. These checks serve as a fundamental quality gate, ensuring that the simulation can be built, started, and run without encountering critical errors.
Build Process
The first step is to verify that the simulation builds without any warnings or errors. This involves executing the build process and carefully examining the output for any indications of problems. A clean build is a prerequisite for further analysis, as unresolved build issues can mask other underlying problems.
- Does the simulation build without warnings or errors?
File Size Analysis
The size of the HTML file can provide insights into the complexity and efficiency of the simulation. Reviewers should compare the file size to that of other similar simulations to determine if it seems reasonable. To further analyze the file size, tools like profile-file-size.ts can be used to break down the size into its constituent components. Unexpectedly large file sizes may indicate opportunities for optimization.
- Does the HTML file size seem reasonable compared to other similar simulations?
- Does the file size breakdown reveal any unexpected components or areas for optimization?
Startup Verification
Once built, the simulation should be able to start up without any issues. This check should be performed on both the unbuilt and built versions of the simulation to ensure consistency. Failure to start up may indicate critical errors or dependencies that are not being properly resolved.
- Does the simulation start up successfully in both unbuilt and built versions?
Assertion Failures
Assertion failures are runtime errors that indicate unexpected conditions or violations of program invariants. To identify such failures, the simulation should be run with the query parameter ea (enable assertions). Any assertion failures encountered should be thoroughly investigated and addressed, as they can signal underlying bugs or inconsistencies.
- Does the simulation experience any assertion failures when run with the ea query parameter?
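For context on what ea enables: PhET code conventionally guards checks behind a global assert function that is only defined when assertions are enabled, so the checks cost nothing in production. The class below is a hypothetical sketch of that pattern, not code from Build an Atom:

```typescript
// PhET's global `assert` is undefined in normal runs and defined when
// assertions are enabled (e.g. via the ea query parameter), so the
// right-hand call below is skipped entirely in production builds.
declare const assert: ( ( predicate: boolean, message?: string ) => void ) | undefined;

class ParticleCountModel {
  private protonCount = 0;

  public setProtonCount( count: number ): void {
    // Fails fast under ?ea if an invariant is violated.
    assert && assert( Number.isInteger( count ) && count >= 0,
      `protonCount must be a non-negative integer: ${count}` );
    this.protonCount = count;
  }
}
```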
Scenery Fuzz Testing
Scenery fuzz testing involves subjecting the simulation's graphical elements to random inputs and interactions to identify potential rendering issues or crashes. This is achieved by running the simulation with the query parameters fuzz&ea. A successful scenery fuzz test demonstrates the robustness and stability of the simulation's graphical components.
- Does the simulation pass a scenery fuzz test when run with the fuzz&ea query parameters?
Listener Order Shuffling
The order in which listeners are executed can sometimes influence the behavior of a simulation. To ensure that the simulation behaves correctly regardless of listener order, it should be run with the query parameters ea&listenerOrder=random and ea&listenerOrder=random&fuzz. This test helps identify potential dependencies on listener execution order, which can lead to unpredictable behavior.
- Does the simulation behave correctly when listener order is shuffled using the ea&listenerOrder=random and ea&listenerOrder=random&fuzz query parameters?
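To make the failure mode concrete, here is a hypothetical order dependency of the kind this test can expose (the names and import path are illustrative, not from the Build an Atom code):

```typescript
import Property from '../../axon/js/Property.js'; // illustrative import path

const countProperty = new Property<number>( 0 );
let label = '';

// Listener A updates the label text.
countProperty.link( count => { label = `count: ${count}`; } );

// Listener B implicitly assumes listener A has already run. With
// ?ea&listenerOrder=random, B may be notified first and read a stale label.
countProperty.link( () => console.log( label ) );
```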
Deprecation Warnings
Deprecation warnings indicate the use of outdated or obsolete methods or features. While not necessarily indicative of immediate problems, deprecation warnings should be addressed to ensure the simulation's long-term maintainability and compatibility. The simulation should be run with the ?deprecationWarnings query parameter to identify any such warnings. New code should avoid the use of deprecated methods.
- Does the simulation output any deprecation warnings when run with the ?deprecationWarnings query parameter?
Memory Leak Detection: Ensuring Long-Term Stability
Memory leaks are a critical concern in software development, particularly for simulations that may run for extended periods. A memory leak occurs when a program fails to release memory that it has allocated, leading to a gradual increase in memory consumption and potentially impacting performance or even causing crashes. Detecting and addressing memory leaks is therefore a crucial part of the code review process.
Heap Comparison Using Chrome Developer Tools
One effective method for detecting memory leaks is to perform a heap comparison using Chrome Developer Tools. This involves taking snapshots of the heap (the area of memory used for dynamic allocation) at different points in time and comparing them to identify objects that are being retained in memory longer than expected.
The procedure is described in detail in PhET's developer documentation. This test should be performed on a version built using grunt --minify.mangle=false, so that object names are not mangled and the source of a leak is easier to identify. The results should be compared with the testing results reported by the responsible developer.
- Does a heap comparison using Chrome Developer Tools indicate a memory leak?
Common-Code Component Disposal
Many simulations utilize common-code components (e.g., sun, scenery-phet, vegas) that may register observers or listeners internally. It is crucial to ensure that these components are properly disposed of when they are no longer needed, preventing memory leaks. Reviewers should examine the code to verify that the dispose function is called for each common-code component, or that there is a clear explanation of why disposal is not necessary.
- For each common-code component that opaquely registers observers or listeners, is there a call to that component's dispose function, is it obvious why disposal isn't necessary, or is there documentation explaining why dispose isn't called?
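As an illustration of what to look for, the hypothetical view class below creates a sun Checkbox, which registers listeners on the Property it is given, and disposes it in its own dispose method. The class name, strings, and import paths are assumptions for the sketch:

```typescript
import Property from '../../axon/js/Property.js';   // illustrative paths
import { Text } from '../../scenery/js/imports.js';
import Checkbox from '../../sun/js/Checkbox.js';

class SettingsPanel {
  private readonly labelsCheckbox: Checkbox;

  public constructor( showLabelsProperty: Property<boolean> ) {
    // Checkbox links to showLabelsProperty internally, so it must be
    // disposed when this panel goes away.
    this.labelsCheckbox = new Checkbox( showLabelsProperty, new Text( 'Show labels' ) );
  }

  public dispose(): void {
    this.labelsCheckbox.dispose();
  }
}
```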
Observer and Listener Leaks
Registering observers and listeners is a common pattern in event-driven programming, but it can also be a source of memory leaks if not handled carefully. When an observer or listener is registered, a reference to it is stored, preventing it from being garbage collected. If the observer or listener is no longer needed but the reference is not removed, a memory leak can occur.
To prevent observer and listener leaks, the following pairings should be present, unless there is clear documentation (in-line or in implementation-notes.md) explaining why they are not necessary (a sketch of the first pairing follows the list):
- AXON: Property.link or lazyLink should be accompanied by unlink.
- AXON: Multilink.multilink should be accompanied by unmultilink.
- AXON: Creation of Multilink should be accompanied by dispose.
- AXON: Creation of DerivedProperty should be accompanied by dispose.
- AXON: Emitter.addListener should be accompanied by removeListener.
- AXON: ObservableArrayDef.element*Emitter.addListener should be accompanied by ObservableArrayDef.element*Emitter.removeListener.
- SCENERY: Node.addInputListener should be accompanied by removeInputListener.
- TANDEM: Creation of an instrumented PhetioObject should be accompanied by dispose.
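Here is a minimal sketch of the first pairing, Property.link balanced by unlink; the class and property names are hypothetical:

```typescript
import Property from '../../axon/js/Property.js'; // illustrative path

class ElementSymbolNode {
  private readonly protonCountProperty: Property<number>;
  private readonly protonCountListener: ( count: number ) => void;

  public constructor( protonCountProperty: Property<number> ) {
    this.protonCountProperty = protonCountProperty;

    // Keep a reference to the listener so it can be unlinked later.
    this.protonCountListener = count => this.updateSymbol( count );
    protonCountProperty.link( this.protonCountListener );
  }

  private updateSymbol( count: number ): void { /* update the display */ }

  public dispose(): void {
    // Without this unlink, the Property would retain this node forever.
    this.protonCountProperty.unlink( this.protonCountListener );
  }
}
```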
Class Disposal Implementation
Any class that requires a dispose function should have one implemented correctly. There are several acceptable ways to implement dispose, including the following (see the sketch after this list):
- Declaring a private function like this.dispose{{CLASS_NAME}}() in the constructor, where {{CLASS_NAME}} exactly matches the class name.
- Using Disposable.ts / disposeEmitter.
- Calling super.dispose() and myChild.dispose(), etc.
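As a sketch of the first approach, a hypothetical MyPanel class captures its cleanup work in a private disposeMyPanel function created in the constructor, where closure scope makes the constructor's locals available at disposal time:

```typescript
import Property from '../../axon/js/Property.js'; // illustrative path

class MyPanel {
  private readonly disposeMyPanel: () => void;

  public constructor( visibleProperty: Property<boolean> ) {
    const visibilityListener = ( visible: boolean ) => { /* react to changes */ };
    visibleProperty.link( visibilityListener );

    // The function name exactly matches the class name, per the convention above.
    this.disposeMyPanel = () => {
      visibleProperty.unlink( visibilityListener );
    };
  }

  public dispose(): void {
    this.disposeMyPanel();
  }
}
```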
Overriding Dispose Methods
Classes that are not designed to be disposed should either:
- Use isDisposable: false if they inherit a dispose method from Disposable.
- Implement a dispose method that calls Disposable.assertNotDisposable if they inherit a dispose method from something other than Disposable.
This prevents clients from calling dispose on objects that do not properly clean themselves up, which can lead to memory leaks.
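Assuming the axon Disposable API named above, the two cases might be guarded as follows; the class names are hypothetical:

```typescript
import Disposable from '../../axon/js/Disposable.js'; // illustrative paths
import { Node } from '../../scenery/js/imports.js';

// Case 1: inherits dispose from Disposable, so opt out via the option.
class AtomModel extends Disposable {
  public constructor() {
    super( { isDisposable: false } ); // calling dispose will now assert
  }
}

// Case 2: inherits dispose from something other than Disposable (here
// scenery's Node), so override it with an assertion guard.
class BackgroundNode extends Node {
  public override dispose(): void {
    Disposable.assertNotDisposable();
  }
}
```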
Performance Evaluation: Ensuring Smooth and Responsive Interactions
Performance is a critical aspect of any interactive simulation. A simulation that is slow or unresponsive can frustrate users and hinder their learning experience. During the code review process, it is essential to evaluate the simulation's performance and identify any potential bottlenecks or areas for optimization.
Playtesting for Obvious Performance Issues
The first step in performance evaluation is simply to play with the simulation and identify any obvious performance issues. This may involve interacting with various elements, triggering animations, and observing the simulation's responsiveness. Key indicators of performance issues include:
- Animation that slows down with large numbers of objects.
- Animation that pauses or hitches intermittently.