E2606 Final Project Backend PR#344
Conversation
- app/helpers/importable_exportable_helper.rb
- app/helpers/duplicated_action_helper.rb
- app/services/export.rb
- app/services/import.rb
- app/services/field_mapping.rb
- app/controllers/export_controller.rb
- app/controllers/import_controller.rb

Note: I've added some filler code so GitHub doesn't throw a "Cannot upload empty files" error when committed.
Added new folders and files
…stly implemented external classes for creation as well.
…elating to the Questionnaire Item and Question Advice example.
… duplicate records in the try_import_records method. This method should be extracted to the service.
…r, everywhere kept referring to the question_id. This wouldn't work for importing, so I changed the schema to use item_id rather than question_id and changed all the references I found to use item_id as well.
…_item_id Update question id to item
- Added script to export AssignmentTeam records filtered by assignment_id
- Included team members, emails, and member counts in detailed export
- Improved export process explanation and cleaned up runner command usage
- Ensured compatibility even without parent association by resolving assignment names manually
… the export controller to be able to send data to the frontend for export and index
# Conflicts:
#	app/helpers/importable_exportable_helper.rb
Import and Export Functionality
…orted classes. Additionally removed references to Sign_up_topic shim.
…Currently scoped to assignment.
…ests to support this functionality.
…ePackagesController. Added preview functionality for questionnaire imports in QuestionnairePackageImportService. Created QuestionnairePackageTemplateService to generate blank templates. Updated routes to include new template and preview actions.
…ced CSV to only require usernames
…ort-export-for-courses added backend implementation for course import/export
…uestionnaire-import-export Nadkarnik/updating questionnaire import export
…grades-export reworked grade export to live in the controller with minimal fields. …
…-participant-export Nadkarnik/assignment participant export
…-new-teams-topics-methods added comments to newly added methods in the teams and topics files to improve readability.
…-importable-exportable-classes changed import and export controllers to explicitly list out the supported classes. Additionally removed references to Sign_up_topic shim.
…gnmentParticipant with context.
…ge-problems updating import/export functionality to support ProjectTopic and Assi…
…r of course and assignment participant import-export behavior using filename of export for context.
📝 Walkthrough
Adds a comprehensive import/export subsystem: a DSL helper, FieldMapping, Import/Export services, duplicate-resolution strategies, questionnaire package services, new controllers/endpoints, many models opting into import/export, serializer adjustments, guarded migrations/schema updates, seed enhancements, and extensive RSpec coverage.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant ImportCtrl as ImportController
participant ImportSvc as Import
participant Model as Model (e.g., User)
participant DupAction as DuplicateAction
participant DB as Database
Client->>ImportCtrl: POST /import/:class (file, headers, dup_action)
ImportCtrl->>ImportSvc: initialize(klass:, file:, headers:, dup_action:)
ImportSvc->>Model: try_import_records(file, headers, use_headers)
Model->>Model: normalize headers, build FieldMapping
loop per row
Model->>DB: lookup externals / uniqueness checks
alt no duplicate
Model->>DB: create/save record
else duplicate detected
Model-->>ImportSvc: return duplicate info
end
end
ImportSvc->>DupAction: on_duplicate_record(klass, duplicate_groups)
DupAction-->>ImportSvc: resolved records (or nil)
ImportSvc->>DB: save resolved records
ImportSvc-->>ImportCtrl: { imported, duplicates }
ImportCtrl-->>Client: 201 { result } or 422 { error }
sequenceDiagram
participant Client
participant ExportCtrl as ExportController
participant ExportSvc as Export
participant Model as Model (e.g., Team)
participant FieldMap as FieldMapping
participant DB as Database
Client->>ExportCtrl: POST /export/:class (ordered_fields?)
ExportCtrl->>ExportSvc: perform(export_class, ordered_headers)
ExportSvc->>Model: records = export_class.filter.call
Model->>DB: fetch records
ExportSvc->>FieldMap: FieldMapping.new(export_class, ordered_headers)
loop per record
FieldMap->>Model: values_for(record)
ExportSvc->>ExportSvc: append CSV row
end
ExportSvc-->>ExportCtrl: [{ name: ..., file: csv }]
ExportCtrl-->>Client: 200 { file } or 422 { error }
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 7
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (26)
config/database.yml-18-20 (1)
18-20: ⚠️ Potential issue | 🟠 Major
Production database config ignores `DATABASE_URL`, breaking Docker-based deployments.
The `production` config now relies entirely on inherited hardcoded credentials and socket path, ignoring the `DATABASE_URL` environment variable that docker-compose.yml (and likely other infra) provides. The socket `/var/run/mysqld/mysqld.sock` is host-specific and won't work in containerized deployments; the connection will fail to reach the MySQL service in the `db` container.
Suggested change

```diff
 production:
   <<: *default
-  database: reimplementation_production
+  url: <%= ENV["DATABASE_URL"] %>
```

Restore environment-driven database configuration to support containerized and flexible deployment paths.
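The env-first fallback this comment calls for can be sketched in plain Ruby; the key names here are illustrative, not the project's actual config keys:

```ruby
# Hypothetical sketch: prefer DATABASE_URL when present, otherwise fall back
# to the inherited static keys (key names are assumed for illustration).
def production_db_config(env, defaults)
  url = env["DATABASE_URL"]
  return defaults unless url && !url.empty?
  defaults.merge("url" => url) # ActiveRecord accepts a "url" key alongside others
end
```

With `DATABASE_URL` set, the merged config carries the container's TCP connection string; without it, the socket-based defaults still apply.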
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/database.yml` around lines 18 - 20, The production DB config currently hardcodes inherited values and ignores the DATABASE_URL env var, which breaks containerized deployments; update the production section (the production mapping that inherits from *default) to prefer reading ENV['DATABASE_URL'] (or parse it) when present and fall back to the existing keys only when DATABASE_URL is absent, and remove reliance on the socket path for production so containers can use TCP connection info from DATABASE_URL; ensure the change preserves compatibility with the default anchor and Rails' expected config keys so ActiveRecord uses the env-provided URL in production.
app/controllers/import_controller.rb-72-86 (1)
72-86: ⚠️ Potential issue | 🟠 Major
`Team` imports are missing the assignment context used by the metadata path.
`import_metadata_for` wraps `Team` in `Team.with_assignment_context(params[:assignment_id])`, but `import` falls through to the generic branch. That means the same request can receive assignment-scoped import metadata and then execute the import against an unscoped team context.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/import_controller.rb` around lines 72 - 86, The Team class is missing assignment scoping in the import path causing a mismatch with import_metadata_for; add an elsif branch for klass == Team that wraps the Import invocation in Team.with_assignment_context(params[:assignment_id]) (same pattern used for ProjectTopic and AssignmentParticipant) so the call to Import.new(klass: klass, file: uploaded_file, headers: ordered_fields, dup_action: dup_action&.new, defaults: defaults).perform(use_headers) runs inside the assignment-scoped block using params[:assignment_id].
app/services/field_mapping.rb-65-77 (1)
65-77: ⚠️ Potential issue | 🟠 Major
Header normalization is too weak for the format this API advertises.
`casecmp?` only ignores case; it will not map "First Name" to `first_name`. Any header-based import that uses spaced labels instead of exact underscored field names will be dropped instead of matched.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/field_mapping.rb` around lines 65 - 77, The current FieldMapping.from_header only uses casecmp? and therefore misses headers like "First Name" vs "first_name"; update from_header to compare normalized forms: normalize header_row entries (strip, downcase, replace spaces/dashes with underscores and remove non-word chars) and also build a normalized map of model_class.internal_and_external_fields (map each field string to its normalized form), then match by comparing normalized header -> normalized field and return the original field names (so new(model_class, matched) still receives actual field names). Reference: FieldMapping.from_header, header_row, valid_fields, matched.
app/services/export.rb-39-55 (1)
39-55: ⚠️ Potential issue | 🟠 Major
Row values are emitted in a different order than `ordered_headers`.
This code builds rows as “all internal fields first, then external fields grouped by association”. If the requested header order mixes internal and external columns, the exported values will land under the wrong headers.
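The order-preserving fix can be sketched in plain Ruby (the record shape and field lists are stand-ins, not the real Export internals): iterate the requested field order once and pick each value from the internal or external source as you go.

```ruby
# Sketch: emit values in the requested header order, mixing internal and
# external fields freely (record layout is assumed for illustration).
def row_in_header_order(record, ordered_fields, internal_fields)
  ordered_fields.map do |field|
    if internal_fields.include?(field)
      record[field]
    else
      record.fetch(:external, {})[field] # nil-safe external lookup
    end
  end
end

row_in_header_order(
  { name: "Team 1", external: { assignment_name: "HW1" } },
  [:assignment_name, :name],
  [:name]
)
# => ["HW1", "Team 1"]
```

Because each cell is resolved at its header's position, internal and external columns can be interleaved without misalignment.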
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/export.rb` around lines 39 - 55, Rows are built by concatenating internal fields then external fields which breaks the intended column order (ordered_headers); change row construction to iterate mapping.ordered_fields (the same order used for ordered_headers) and for each field emit the correct value: if the field is included in export_class.internal_fields call record.send(field); otherwise locate the corresponding external_class from Array(export_class.external_classes), fetch found_record = record.send(external_class.ref_class.name.underscore) and call found_record&.send(ExternalClass.unappended_class_name(external_class.ref_class, field)); ensure nil-safe calls so missing external associations produce nils and keep using export_class.filter to iterate records.
app/models/user.rb-45-52 (1)
45-52: ⚠️ Potential issue | 🟠 Major
Use `record.role_id` instead of `record.role` in the case statement.
Accessing `record.role.id` triggers an unnecessary association lookup for every instantiation. Since `role_id` is already present on the record as a foreign key, use that directly. This avoids the extra query and prevents failures if the foreign key references a non-existent role row.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/user.rb` around lines 45 - 52, The case statement is triggering an association lookup by using record.role.id; change it to use the foreign key directly (record.role_id) to avoid extra queries and failures when the Role row is missing—update the case expression that checks Role::TEACHING_ASSISTANT_ID, Role::INSTRUCTOR_ID, Role::ADMINISTRATOR_ID, Role::SUPER_ADMINISTRATOR_ID so it switches on record.role_id and leaves the branch bodies (e.g., record.becomes(Ta), record.becomes(Instructor), record.becomes(Administrator)) unchanged.
app/controllers/teams_controller.rb-121-124 (1)
121-124: ⚠️ Potential issue | 🟠 Major
`normalized_team_params` can silently drop a provided `parent_id`.
`team_params.to_h` is string-keyed, so `permitted[:parent_id]` is always `nil` here. This method then adds a second `:parent_id` entry, which can overwrite the original `"parent_id"` during mass-assignment and clear the parent on create/update.
Suggested fix

```diff
 def normalized_team_params
-  permitted = team_params.to_h
-  permitted[:parent_id] ||= permitted.delete('assignment_id') || permitted.delete(:assignment_id)
+  permitted = team_params.to_h.symbolize_keys
+  permitted[:parent_id] ||= permitted.delete(:assignment_id)
   permitted
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/teams_controller.rb` around lines 121 - 124, normalized_team_params currently uses team_params.to_h which yields string keys so the check permitted[:parent_id] is always nil and your later assignment can overwrite the original "parent_id" string entry; fix by normalizing keys consistently before reading/writing (e.g. convert team_params to symbolized keys or use with_indifferent_access) and then read/write the same key type; update normalized_team_params to call team_params.to_unsafe_h.symbolize_keys (or team_params.to_h.with_indifferent_access) and then set parent_id via the same key type (reference normalized_team_params and team_params).
app/controllers/teams_controller.rb-34-42 (1)
34-42: ⚠️ Potential issue | 🟠 Major
`update` bypasses STI type validation.
`team_params` permits `:type` for both create and update operations, but `validate_team_type` only runs on `create`. A PATCH/PUT request can therefore change an existing team's STI type to any value, circumventing the allowlist validation added for creation (lines 117-118).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/teams_controller.rb` around lines 34 - 42, The update action currently allows changing the STI `type` because `:type` is permitted; either stop permitting `:type` on updates or ensure the same allowlist runs for updates. Modify the params/normalization logic (e.g., in normalized_team_params or team_params) so that `:type` is only permitted for create, or add the existing validate_team_type callback to run on update as well; reference the update method, normalized_team_params/team_params, and validate_team_type to locate where to remove `:type` or add the validation.
app/services/change_offending_field_action.rb-55-83 (1)
55-83: ⚠️ Potential issue | 🟠 Major
Scoped uniqueness validators are not handled in this implementation.
The code extracts bare field names from validators and performs a global `klass.exists?(field => candidate)` check. For validators configured with `scope:` (e.g., `uniqueness: { scope: :team_id }`), this will incorrectly rename values that exist in other scopes and does not account for the scoped context that caused the actual uniqueness collision.
While currently importable models in the codebase don't use scoped uniqueness validators, this is a latent bug. If a model with scoped uniqueness validators were made importable, the strategy would break. The fix requires extracting scope information from the validator's `options[:scope]` and incorporating it into the uniqueness check logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/change_offending_field_action.rb` around lines 55 - 83, The uniqueness handling must include validator scopes: update unique_constraint_fields to extract each UniquenessValidator's options[:scope] (normalize to an array) and return the scoped information (e.g., field => scopes), then change generate_unique_value to accept a scope_values hash (or look up scope column names on the klass via the returned scopes) and include those in the existence check instead of a global klass.exists?(field => candidate); build the exists? query as klass.exists?(field => candidate, scope_col1 => record[scope_col1], ...) so scoped uniqueness collisions are checked within the same scope(s) extracted from the validator options[:scope].
db/seeds.rb-355-357 (1)
355-357: ⚠️ Potential issue | 🟠 Major
Make the new seed data idempotent on reruns.
Once any earlier `create!` raises `ActiveRecord::RecordInvalid`, control jumps here and only `seed_assignment_grades` runs. On a non-empty database, the new project topics, course participants, questionnaires, items, and answers added in this PR will never be backfilled, so `db:seed` can leave the dataset in a half-upgraded state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/seeds.rb` around lines 355 - 357, The rescue around ActiveRecord::RecordInvalid causes early exit so only seed_assignment_grades runs; make the new seed data idempotent by replacing destructive create! calls with idempotent upserts (e.g., find_or_create_by / find_or_initialize_by + save! or upsert_all) for the project topics, course participants, questionnaires, items, and answers added in this PR, and/or move the rescue so it does not abort the remainder of the seeding flow; ensure the code paths that currently call create! (identify those create! invocations) are converted to safe idempotent helpers and that seed_assignment_grades and other seed functions are always invoked even when records already exist.
app/models/answer.rb-7-10 (1)
7-10: ⚠️ Potential issue | 🟠 Major
Use stable association keys for answer import/export.
`Item.seq` is only unique within a questionnaire, and `Response.additional_comment` is free text. With `filter nil`, the importer has no deterministic way to resolve either association across the full table, so answers can attach to the wrong item or response.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/answer.rb` around lines 7 - 10, The importer is using unstable association keys (ExternalClass.new(Item, true, false, :seq) and ExternalClass.new(Response, true, false, :additional_comment)) with filter nil, so associations can resolve incorrectly; update these to use stable, unique keys (for example replace Item's key with a composite or global identifier such as [:questionnaire_id, :seq] or a persistent Item.uuid, and replace Response's key with a stable field like response_uuid or id instead of additional_comment) and remove filter nil by adding an appropriate deterministic filter/scoping (e.g., scope by questionnaire_id or response_uuid) so ExternalClass and filter use those stable identifiers when importing/exporting answers.
spec/integration/import_controller_spec.rb-95-101 (1)
95-101: ⚠️ Potential issue | 🟠 Major
Don’t bake raw constantization failures into the API contract.
An invalid `:class` value is client input, not a server fault. Expecting `500` plus "uninitialized constant" locks in an internal exception path and leaks implementation details; this should be a 4xx validation error, ideally consistent with the export endpoint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/integration/import_controller_spec.rb` around lines 95 - 101, Test is asserting a 500 and leaking "uninitialized constant" from raw Ruby constantization; change ImportController to treat invalid :class input as a client validation error (4xx) and not raise internal exceptions. Replace any direct constantize calls with safe_constantize (or rescue NameError) when resolving params[:class] in the ImportController action (e.g., the controller method handling GET "/import/:class"), return the same 4xx status used by the export endpoint (e.g., 400 or 422) and a generic validation error message, and ensure the response body does not include the raw exception text.
spec/integration/import_controller_spec.rb-103-109 (1)
103-109: ⚠️ Potential issue | 🟠 Major
A missing upload should not count as a successful import.
This expectation makes the API return `201` even when no `csv_file` was provided, so callers can get a success response with nothing imported. The controller and this spec should reject that request with a validation error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/integration/import_controller_spec.rb` around lines 103 - 109, The spec and controller currently allow a nil csv_file and treat it as a successful import; update the ImportController#create (POST "/import/:model") to validate presence of params[:csv_file] and return a 422 with a descriptive validation error when missing, and change the spec example ("returns 201 even if csv_file is missing (controller allows nil file)") to post without csv_file and assert response.status == 422 and that JSON error message indicates csv_file is required; ensure the controller uses the same error shape the spec expects.
app/models/questionnaire.rb-5-7 (1)
5-7: ⚠️ Potential issue | 🟠 Major
Use a unique instructor identifier for questionnaire import/export.
`Instructor.name` is not a stable key, so `:instructor_name` can resolve to the wrong record when two instructors share the same name. A unique field such as email is much safer for round-tripping these records.
Possible fix

```diff
-  mandatory_fields :name, :min_question_score, :max_question_score, :questionnaire_type, :display_type, :instruction_loc, :instructor_name
+  mandatory_fields :name, :min_question_score, :max_question_score, :questionnaire_type, :display_type, :instruction_loc, :instructor_email
   hidden_fields :id, :created_at, :updated_at
-  external_classes ExternalClass.new(Instructor, true, false, :name)
+  external_classes ExternalClass.new(Instructor, true, false, :email)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/questionnaire.rb` around lines 5 - 7, The import/export currently uses Instructor.name as the external key (see external_classes ExternalClass.new(Instructor, true, false, :name) and the :instructor_name mandatory field), which is not stable; change the questionnaire export/import to use a unique identifier such as instructor email by: add/rename the field to :instructor_email in the mandatory_fields list, update external_classes to ExternalClass.new(Instructor, true, false, :email), and ensure any import/export mapping logic in the Questionnaire model uses Instructor.email to find/create the associated Instructor record instead of Instructor.name.
app/serializers/team_serializer.rb-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major
`sign_up_topic` and `signed_up_team` will not be serialized without being declared in attributes.
ActiveModel::Serializer (0.10.0) only emits attributes and associations that are explicitly declared. Both `sign_up_topic` (line 25) and its helper method `signed_up_team` (line 30) are defined but missing from the `attributes` declaration on line 4, so neither will appear in the API response. Add them to the attributes list:

```ruby
attributes :id, :name, :type, :team_size, :parent_id, :assignment_id, :sign_up_topic
```

Also applies to: 24-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/serializers/team_serializer.rb` at line 4, The serializer defines methods sign_up_topic and signed_up_team but only declared attributes :id, :name, :type, :team_size, :parent_id, :assignment_id, so sign_up_topic (and its helper signed_up_team) will not be emitted; update the attributes declaration in TeamSerializer to include :sign_up_topic (i.e. add sign_up_topic to the attributes list) so ActiveModel::Serializer will serialize the sign_up_topic method's output (ensure the method name sign_up_topic remains defined and used as the attribute).
db/migrate/20260328170000_rename_sign_up_topics_to_project_topics.rb-14-19 (1)
14-19: ⚠️ Potential issue | 🟠 Major
Normalize the `project_topics.assignment_id` index in both directions.
If `index_project_topics_on_assignment_id` already exists, these guards skip the rename and leave `fk_sign_up_categories_sign_up_topics` behind. `db/schema.rb` already shows both indexes on the same column, so this migration is not converging and adds duplicate index write/storage cost.
🛠️ One way to make the migration converge

```diff
-    if index_name_exists?(:project_topics, "fk_sign_up_categories_sign_up_topics") &&
-       !index_name_exists?(:project_topics, "index_project_topics_on_assignment_id")
-      rename_index :project_topics,
-                   "fk_sign_up_categories_sign_up_topics",
-                   "index_project_topics_on_assignment_id"
-    end
+    if index_name_exists?(:project_topics, "fk_sign_up_categories_sign_up_topics")
+      if index_name_exists?(:project_topics, "index_project_topics_on_assignment_id")
+        remove_index :project_topics, name: "fk_sign_up_categories_sign_up_topics"
+      else
+        rename_index :project_topics,
+                     "fk_sign_up_categories_sign_up_topics",
+                     "index_project_topics_on_assignment_id"
+      end
+    end
```

Apply the inverse cleanup in `down` as well.
Also applies to: 33-38
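The convergence rule can be checked in isolation. This pure-Ruby sketch models only the set of index names (the index names come from the migration, but the `Set` stand-in is illustrative, not the Rails migration API):

```ruby
require "set"

OLD_INDEX = "fk_sign_up_categories_sign_up_topics"
NEW_INDEX = "index_project_topics_on_assignment_id"

# Sketch of the up-direction rule: from any starting state that still has the
# old name, exactly one index (the new name) remains afterwards.
def converge_up(index_names)
  return index_names unless index_names.include?(OLD_INDEX)
  if index_names.include?(NEW_INDEX)
    index_names.delete(OLD_INDEX)   # both present: drop the stale duplicate
  else
    index_names.delete(OLD_INDEX)   # only old present: rename old -> new
    index_names.add(NEW_INDEX)
  end
  index_names
end
```

Running this over the three possible starting states (old only, new only, both) always ends with just the new name, which is the idempotence the review asks for.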
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/migrate/20260328170000_rename_sign_up_topics_to_project_topics.rb` around lines 14 - 19, The migration leaves both indexes on project_topics.assignment_id when one already exists; update the up and down methods in 20260328170000_rename_sign_up_topics_to_project_topics.rb to ensure convergence by explicitly removing the old index name or the duplicate index before/after renaming: in the up path, if "index_project_topics_on_assignment_id" exists then remove "fk_sign_up_categories_sign_up_topics" (or skip creating a duplicate), otherwise perform rename_index(:project_topics, "fk_sign_up_categories_sign_up_topics", "index_project_topics_on_assignment_id"); mirror the inverse in the down path (rename back or remove the "index_project_topics_on_assignment_id" to restore "fk_sign_up_categories_sign_up_topics") so the migration is idempotent and doesn’t leave both indexes present.
app/services/import.rb-56-79 (1)
56-79: ⚠️ Potential issue | 🟠 Major
`imported` never reflects successful inserts.
`successful_inserts` is initialized but never incremented, so this method always returns `imported: 0` even when rows were created or duplicate resolutions were saved.
🧮 Minimal fix if a count delta is acceptable here

```diff
 def perform(use_headers)
+  before_count = @klass.count
   duplicate_groups = [] # Will hold duplicate row sets
-  successful_inserts = 0 # Counter for successful saves

   # Call the model-level importer (defined in each model using the import mixin)
   dups = @klass.try_import_records(
@@
   # Return summary of import results
   {
-    imported: successful_inserts,
+    imported: @klass.count - before_count,
     duplicates: duplicate_groups.length
   }
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/import.rb` around lines 56 - 79, The perform method initializes successful_inserts but never updates it, so update the call to `@klass.try_import_records` (and/or the try_import_records implementation) to return both duplicates and the number of records inserted (e.g., return a tuple or hash like {duplicates: dups, inserted: n}), then in perform destructure that result (capture inserted_count) and add it to successful_inserts before building the return hash; reference perform, `@klass.try_import_records`, successful_inserts, duplicate_groups, normalize_duplicate and process_duplicates when making the change so the final returned imported value reflects actual inserted rows.
app/controllers/grades_controller.rb-267-283 (1)
267-283: ⚠️ Potential issue | 🟠 Major
Escape spreadsheet formulas before writing user data to CSV.
`username`, `comment`, and the optional email values beginning with `=`, `+`, `-`, or `@` will be evaluated as formulas in Excel/Sheets, so this export endpoint is vulnerable to CSV injection.
🔐 Suggested hardening

```diff
 def grades_csv_for(assignment, include_email: false)
   headers = GRADES_EXPORT_HEADERS + (include_email ? GRADES_EXPORT_OPTIONAL_HEADERS : [])
   CSV.generate(headers: true) do |csv|
     csv << headers
     assignment.participants.includes(:user).find_each do |participant|
       team = participant.team
       row = [
-        participant.user_name,
-        participant.grade || team&.grade_for_submission,
-        team&.comment_for_submission
+        csv_safe(participant.user_name),
+        csv_safe(participant.grade || team&.grade_for_submission),
+        csv_safe(team&.comment_for_submission)
       ]
-      row << participant.user&.email if include_email
+      row << csv_safe(participant.user&.email) if include_email
       csv << row
     end
   end
 end
```

Add a small helper near the other private methods:

```ruby
def csv_safe(value)
  return value if value.nil?
  string = value.to_s
  string.match?(/\A[=\-+@]/) ? "'#{string}" : string
end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/grades_controller.rb` around lines 267 - 283, The CSV export is vulnerable to Excel/Sheets formula injection: wrap all user-controlled values (participant.user_name, participant.user&.email, team&.comment_for_submission and any other string fields used in grades_csv_for) with a sanitizer helper (e.g., add a private method csv_safe(value) that returns nil for nil, converts to string and prefixes with a single-quote when the string begins with =, +, -, or @); then call csv_safe(...) for each value inserted into the CSV rows inside grades_csv_for so exported cells are escaped before writing.
app/controllers/export_controller.rb-20-27 (1)
20-27: ⚠️ Potential issue | 🟠 Major
Don’t return raw exception text from controller rescues.
These `rescue StandardError` branches surface `e.message` directly to API callers. Unexpected failures here can leak SQL, class names, and validation internals. Rescue only expected input errors as `422`, and return a generic `500` for everything else.
Also applies to: 71-72
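The rescue split can be sketched without Rails (the `ArgumentError` choice and the tuple return are assumptions; the real controller would render responses instead):

```ruby
# Sketch: expected input errors become 422 with a specific message;
# anything else is logged and collapsed into a generic 500.
def with_error_handling
  [200, yield]
rescue ArgumentError => e
  [422, { error: e.message }]        # known input error: safe to echo
rescue StandardError => e
  warn "export failed: #{e.class}"   # log details server-side only
  [500, { error: "Internal server error" }]
end

with_error_handling { raise ArgumentError, "Unsupported export class" }
# => [422, { error: "Unsupported export class" }]
```

The ordering matters: the more specific `ArgumentError` rescue must come before the `StandardError` catch-all, or the generic branch would swallow it.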
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/export_controller.rb` around lines 20 - 27, In the index action, stop returning raw e.message for all exceptions: rescue only expected input/argument errors (e.g. ArgumentError raised by resolve_export_class) and return a 422 with the specific message, but for any other StandardError return a generic 500 response with a non-sensitive message; update the rescue block in index (and the similar rescue at lines ~71-72) to explicitly rescue ArgumentError (or other known input errors) and add a second rescue for StandardError that renders a generic "Internal server error" without exposing exception details while still logging the original exception server-side.
app/models/team.rb-292-299 (1)
292-299: ⚠️ Potential issue | 🟠 Major
Mentored teams lose their STI type on import.
`export_scope` includes `MentoredTeam`, but `find_or_build_import_team` always writes `type: 'AssignmentTeam'`. Exporting then re-importing mentored teams will silently change their subtype and any subtype-specific behavior. Either exclude `MentoredTeam` from this path or carry the type through the CSV/import defaults.
Safe short-term fix if mentored-team import is not supported

```diff
 def export_scope
-  scope = where(type: %w[AssignmentTeam MentoredTeam])
+  scope = where(type: 'AssignmentTeam')
   import_export_assignment_id.present? ? scope.where(parent_id: import_export_assignment_id) : scope
 end
```

Also applies to: 338-340
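Carrying the type through safely could be sketched like this (the allowlist constant, helper name, and row shape are assumptions drawn from the comment, not the real importer):

```ruby
# Sketch: accept a CSV-provided STI type only if it is on the allowlist,
# defaulting to AssignmentTeam when the column is blank.
ALLOWED_TEAM_TYPES = %w[AssignmentTeam MentoredTeam].freeze

def import_team_type(row)
  candidate = row["type"].to_s.strip
  return "AssignmentTeam" if candidate.empty?
  unless ALLOWED_TEAM_TYPES.include?(candidate)
    raise ArgumentError, "Unsupported team type: #{candidate}"
  end
  candidate
end
```

Validating against an explicit allowlist keeps round-tripping of `MentoredTeam` intact while blocking arbitrary class injection through the CSV `type` column.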
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/team.rb` around lines 292 - 299, find_or_build_import_team currently hardcodes type: 'AssignmentTeam', which strips MentoredTeam STI; change it to read a safe type from the import (e.g. type_value = row_hash['type'].presence || 'AssignmentTeam') and pass type: type_value to find_or_initialize_by, but validate/type-check type_value against the allowed STI subclasses (e.g. 'AssignmentTeam' and 'MentoredTeam') to prevent arbitrary class injection; apply the same pattern to the similar code referenced around the other occurrence (the block at the other import/location mentioned).
app/controllers/questionnaire_packages_controller.rb-30-40 (1)
30-40: ⚠️ Potential issue | 🟠 Major
Avoid echoing internal exception messages from package endpoints.
All four actions rescue `StandardError` and return `e.message` to the client. That will expose implementation details for unexpected failures; keep `422` responses for known input errors and return a generic `500` (logging the details server-side) for everything else.
Also applies to: 43-63, 67-80, 83-96
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/questionnaire_packages_controller.rb` around lines 30 - 40, The template action (and the other package endpoints in QuestionnairePackagesController that currently rescue StandardError and render e.message) must stop echoing internal exception text to clients: replace the broad rescue StandardError => e with specific rescues for known input/validation errors (e.g., ActionController::ParameterMissing, ActiveRecord::RecordInvalid, your service's custom validation error) that render status :unprocessable_entity and may include safe, minimal error info; for all other exceptions log the full error server-side (e.g., Rails.logger.error or use logger.error with e.full_message) and render a generic JSON error like { error: 'Internal server error' } with status :internal_server_error; make this change in the template action and the other analogous actions that currently render e.message.
app/services/questionnaire_package_import_service.rb-415-470 (1)
415-470: ⚠️ Potential issue | 🟠 Major
Update imports currently duplicate child rows on every re-run.
When `dup_action` reuses an existing questionnaire via `UpdateExistingRecordAction`, `import_items` and `import_question_advices` still always insert new rows. Re-importing the same package will keep cloning items/advice into that questionnaire, and preview currently reports those rows as `create`. Resolve existing children by a natural key before inserting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/questionnaire_package_import_service.rb` around lines 415 - 470, import_items and import_question_advices always create new child rows causing duplicates on re-import; change both to first try resolving existing records by their natural keys before creating: in import_items, try questionnaire.items.find_by(seq: row['seq'], txt: row['txt']) and if found update its attributes (txt, weight, question_type, size, alternatives, break_before, min_label, max_label, seq handling) and reuse that item instead of Item.new, only increment imported_counts[:items] when a new record is created; in import_question_advices, try item.question_advices.find_by(score: row['score'], advice: row['advice']) (or a suitable unique key) before building a new QuestionAdvice, update existing advice if found, and only increment imported_counts[:question_advices] when creating a new record; keep using imported_items map for lookups when applicable.

app/models/assignment_participant.rb-109-110 (1)
109-110: ⚠️ Potential issue | 🟠 Major
Fail fast when the assignment context is invalid.
This path only checks that `assignment_id` is present. A stale or mistyped ID will either create orphaned participants or fail later with a less clear database error. Validate that the assignment exists before reading the CSV.

Suggested guard

  assignment_id = import_export_assignment_id
  raise StandardError, 'assignment_id is required for participant import' if assignment_id.blank?
+ raise StandardError, "Assignment '#{assignment_id}' was not found." unless Assignment.exists?(assignment_id)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/assignment_participant.rb` around lines 109 - 110, The code only checks assignment_id presence but not that the Assignment exists, so add a guard that looks up the Assignment (e.g., assignment = Assignment.find_by(id: assignment_id) or similar using the Assignment model) after computing import_export_assignment_id and before proceeding, and raise a StandardError with a clear message if the lookup returns nil to fail fast and avoid creating orphaned participants; update any subsequent logic to use the found assignment variable instead of relying solely on assignment_id.

app/controllers/export_controller.rb-39-64 (1)
39-64: ⚠️ Potential issue | 🟠 Major
Require assignment/course scope for scoped exports.
For `Team`, `AssignmentParticipant`, `CourseParticipant`, and `ProjectTopic`, a blank `assignment_id`/`course_id` makes the model filter fall back to `all`, so one missing client param exports data across every assignment/course instead of failing closed. Validate the required scope id before entering the `with_*_context` block.

Suggested guard

  klass = resolve_export_class(params[:class])
  raise ArgumentError, "Unsupported export class: #{params[:class]}" if klass.nil?
+
+ if [Team, AssignmentParticipant, ProjectTopic].include?(klass) && params[:assignment_id].blank?
+   render json: { error: 'assignment_id is required for this export' }, status: :unprocessable_entity
+   return
+ end
+
+ if klass == CourseParticipant && params[:course_id].blank?
+   render json: { error: 'course_id is required for this export' }, status: :unprocessable_entity
+   return
+ end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/export_controller.rb` around lines 39 - 64, The scoped exports for Team, AssignmentParticipant, CourseParticipant, and ProjectTopic call with_assignment_context/with_course_context but do not validate params[:assignment_id] or params[:course_id], so a blank param falls back to exporting all records; add a guard before each scoped branch in export_controller (where resolve_export_class, Team, AssignmentParticipant, CourseParticipant, ProjectTopic, with_assignment_context, with_course_context, and named_export_files are used) to require a non-blank assignment_id or course_id as appropriate and raise ArgumentError (or return a 4xx response) when the required id is missing, so the code never enters the with_*_context block without a valid scope id.

app/helpers/importable_exportable_helper.rb-355-359 (1)
355-359: ⚠️ Potential issue | 🟠 Major
Don't create an external row after lookup already found one.

`import_row` first calls `look_up_external_class`, then unconditionally calls `create_external_class` for every `should_create` external class. For configs like `ExternalClass.new(User, true, true, :email)`, a successful lookup still triggers a second create attempt, which can duplicate related rows or turn a valid import into a uniqueness failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/helpers/importable_exportable_helper.rb` around lines 355 - 359, import_row currently calls create_external_class for every external_class unconditionally even after look_up_external_class found an existing external row; modify import_row to skip calling create_external_class when look_up_external_class returned a non-nil/existing external record for that external_class (i.e., only call create_external_class when the lookup result is nil and external_class.should_create is true). Locate the import_row method and its use of look_up_external_class and create_external_class, check the external_classes iteration, and add a conditional around create_external_class to only create when no existing external was returned by look_up_external_class.

app/helpers/importable_exportable_helper.rb-225-227 (1)
225-227: ⚠️ Potential issue | 🟠 Major
`hidden_fields` never actually hides anything here.

`internal_fields` is built from raw `column_names`, so fields marked with `hidden_fields` still flow into mapping, import, and export through `internal_fields`/`internal_and_external_fields`. That defeats the DSL and can leak or overwrite columns that callers expected to be excluded.

Suggested fix

  def internal_fields
-   (column_names + (mandatory_fields || [])).uniq - external_fields
+   (column_names + (mandatory_fields || [])).uniq - hidden_fields - external_fields
  end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/helpers/importable_exportable_helper.rb` around lines 225 - 227, internal_fields currently builds from raw column_names causing fields listed in hidden_fields to still be included; update internal_fields (and any dependent method like internal_and_external_fields) to exclude hidden_fields by subtracting (hidden_fields || []) from column_names before merging with mandatory_fields and removing external_fields, and ensure hidden_fields defaults to an array if nil so hidden entries are reliably filtered out.

app/helpers/importable_exportable_helper.rb-371-375 (1)
371-375: ⚠️ Potential issue | 🟠 Major
Fix association setter derivation for multiword model names.

Using `.name.downcase` on CamelCase model names like `QuizItem` produces `quizitem` instead of `quiz_item`, causing the setter to fail. This affects any multiword model (e.g., `AssignmentParticipant`).

The same issue exists in `app/services/questionnaire_package_import_service.rb` and should be fixed there as well.

Use `model_name.element` to properly convert to snake_case attribute names:

Fix

  def look_up_external_class(row_hash, external_class, parent_obj)
    if external_class.should_look_up && (found = external_class.look_up(row_hash))
-     parent_obj.send("#{external_class.ref_class.name.downcase}=", found)
+     association_name = external_class.ref_class.model_name.element
+     parent_obj.public_send("#{association_name}=", found)
      nil
    end
  end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/helpers/importable_exportable_helper.rb` around lines 371 - 375, The association setter derivation uses CamelCase.name.downcase which breaks multiword models (e.g., QuizItem -> quizitem); update look_up_external_class in ImportableExportableHelper (method look_up_external_class) to derive the attribute name via the model's ActiveModel name element (use external_class.ref_class.model_name.element) instead of external_class.ref_class.name.downcase, and make the identical change in questionnaire_package_import_service.rb where multiword model names are converted to setter names so the send("#{...}=") call receives the correct snake_case attribute.
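The breakage can be shown without Rails. The `underscore` helper below is an illustrative stand-in that mirrors ActiveSupport's inflection for plain CamelCase names; it is not the Rails implementation:

```ruby
# Why `.name.downcase` fails for multiword models: it collapses the words,
# while Rails derives snake_case association/setter names.
def underscore(camel_name)
  # Insert an underscore at each lower-to-upper boundary, then downcase.
  camel_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

'QuizItem'.downcase    # => "quizitem"  (no such setter exists)
underscore('QuizItem') # => "quiz_item" (matches the association name)
```

In the real code, `external_class.ref_class.model_name.element` performs this conversion via ActiveModel, so no hand-rolled inflector is needed there.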
🟡 Minor comments (7)
db/migrate/20250321222753_rename_sign_up_topic_to_project_topic_in_signed_up_teams.rb-7-10 (1)
7-10: ⚠️ Potential issue | 🟡 Minor
Fix `index_exists?` call to check for index by name, not column.

Line 7 passes the index name as the second positional argument, but that parameter is interpreted as `column_name`. This causes the guard to fail and skip the rename.

Use the `name:` keyword option to check for the index:

Proposed fix

- if index_exists?(:signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id)
+ if index_exists?(:signed_up_teams, nil, name: :index_signed_up_teams_on_sign_up_topic_id)
    rename_index :signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id, :index_signed_up_teams_on_project_topic_id
  end

Alternatively, use `index_name_exists?(:signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id)` for clearer intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/migrate/20250321222753_rename_sign_up_topic_to_project_topic_in_signed_up_teams.rb` around lines 7 - 10, The guard using index_exists?(:signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id) is passing the index name as a column name so the rename is skipped; update the check to look up the index by name — either use index_exists?(:signed_up_teams, name: :index_signed_up_teams_on_sign_up_topic_id) or replace it with index_name_exists?(:signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id) so the subsequent rename_index :signed_up_teams, :index_signed_up_teams_on_sign_up_topic_id, :index_signed_up_teams_on_project_topic_id executes when the index actually exists.

spec/requests/api/v1/import_spec.rb-40-40 (1)
40-40: ⚠️ Potential issue | 🟡 Minor
Remove debug output from request spec.

Line 40 (`pp data`) is leftover debug noise and should be removed to keep CI logs clean.

Suggested fix

- pp data

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/import_spec.rb` at line 40, Remove the leftover debug statement "pp data" from the request spec (delete the pp data line); ensure no other debug printing (pp/puts) remains in the spec so test output stays clean and CI logs are not noisy.

app/controllers/participants_controller.rb-163-166 (1)
163-166: ⚠️ Potential issue | 🟡 Minor
Guard nil `course` in `filter_course_participants` to avoid 500s.

When `course` is nil, Line 165 calls `.order` on nil and raises. Return an empty relation instead.

Suggested fix

  def filter_course_participants(course)
-   participants = Participant.where(parent_id: course.id, type: 'CourseParticipant') if course
-   participants.order(:id)
+   return Participant.none unless course
+
+   Participant.where(parent_id: course.id, type: 'CourseParticipant').order(:id)
  end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/participants_controller.rb` around lines 163 - 166, The method filter_course_participants currently calls .order on participants even when course is nil; change it so participants is an ActiveRecord relation in all cases (e.g. set participants = course ? Participant.where(parent_id: course.id, type: 'CourseParticipant') : Participant.none) and then call participants.order(:id); update the method (filter_course_participants) to return Participant.none when course is nil instead of letting .order be invoked on nil.

spec/requests/api/v1/export_spec.rb-43-43 (1)
43-43: ⚠️ Potential issue | 🟡 Minor
Remove debug output from request spec.

Line 43 (`pp data`) should be removed to avoid noisy test output.

Suggested fix

- pp data

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/export_spec.rb` at line 43, Remove the stray debug print statement `pp data` from the request spec (the test that prints response `data`) so tests aren't noisy; find and delete the `pp data` line in the spec (spec/requests/api/v1/export_spec.rb) and run the test suite to confirm no leftover debugging output remains.

spec/helpers/import_export_spec.rb-10-33 (1)
10-33: ⚠️ Potential issue | 🟡 Minor
Use `before(:each)` instead of `before(:all)` for shared test setup.

While DatabaseCleaner's transaction strategy provides isolation for changes made within each example, using `before(:all)` with count-based assertions makes the tests harder to reason about and maintain. If the transaction cleanup ever fails or changes are made outside transaction scope, records can leak between examples. Switch the setup to `before(:each)` or `let!` to make test isolation explicit and avoid relying on transaction cleanup for correctness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/helpers/import_export_spec.rb` around lines 10 - 33, The test setup currently runs in a before(:all) block creating `@roles` (via create_roles_hierarchy), `@institution` (Institution.create!), `@instructor` (Instructor.create!), and `@questionnaire` (Questionnaire.create!), which can leak state between examples; change the hook to before(:each) (or replace with let!/let blocks) so each example gets a fresh setup: move the creations into a before(:each) block or convert `@roles`/`@institution`/`@instructor`/`@questionnaire` into let! helpers to ensure per-example isolation and avoid relying on transaction cleanup.

app/models/assignment_participant.rb-159-160 (1)
159-160: ⚠️ Potential issue | 🟡 Minor
Use the same handle fallback as the normal participant path.

`handle` is not part of this CSV contract, so `row_hash['handle']` is always blank and imports always fall back to `user.name`. That diverges from `set_handle` and from `CourseParticipant` imports, so the same user can get different handles depending on creation path.

Suggested fix

- participant.handle = row_hash['handle'].presence || participant.handle || user.name
+ participant.handle = participant.handle.presence || user.handle.presence || user.name

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/assignment_participant.rb` around lines 159 - 160, The CSV import currently assigns participant.handle using row_hash['handle'].presence which is always blank and falls back to user.name, causing inconsistency with the normal participant path; instead invoke the same handle fallback logic by calling the existing set_handle helper (or reusing the CourseParticipant import logic) so the handle is computed the same way — replace the direct assignment to participant.handle with using set_handle (or the CourseParticipant import routine) before participant.save! to ensure consistent handle selection.

app/helpers/importable_exportable_helper.rb-262-264 (1)
262-264: ⚠️ Potential issue | 🟡 Minor
`to_hash` is defined at module level and will be a class method when models `extend` this helper, making it unsuitable for instance use.

The method body calls `send(f)` expecting instance context, but with `extend`, it becomes a class method. The default parameter `self.class.internal_fields` would resolve to `Class.internal_fields` when invoked.

Note: The method is currently unused in the codebase; no calls to `to_hash` were found. However, the scoping issue should be corrected before the method is used.

Suggested fix

  module ImportableExportableHelper
+   module InstanceMethods
+     def to_hash(fields = self.class.internal_fields)
+       fields.to_h { |f| [f, public_send(f)] }
+     end
+   end
+
    def self.extended(base)
+     base.include InstanceMethods
      if base.superclass.respond_to?(:mandatory_fields)
        base.instance_variable_set(:@mandatory_fields, base.superclass.mandatory_fields)
        base.instance_variable_set(:@external_classes, base.superclass.external_classes)
        base.instance_variable_set(:@class_name, base.superclass.name)
        base.instance_variable_set(:@available_actions_on_duplicate, base.superclass.available_actions_on_duplicate)
@@
-   def to_hash(fields = self.class.internal_fields)
-     fields.to_h { |f| [f, send(f)] }
-   end
  end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/helpers/importable_exportable_helper.rb` around lines 262 - 264, to_hash is currently defined in a way that forces its default param to resolve in class context (breaking send(f) when the module is extended); change the signature to accept a nil default (def to_hash(fields = nil)) and inside the method set fields ||= self.class.internal_fields and then return fields.to_h { |f| [f, send(f)] } so the method works correctly as an instance method regardless of whether the module is included or extended; update callers or module usage if the module was intentionally extended so that to_hash is invoked on instances, not on the class.
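The scoping issue can be demonstrated without Rails. This sketch uses hypothetical `ImportExportSketch`/`SketchTeam` names and the nil-default form of `to_hash` suggested in the prompt above:

```ruby
# Demonstrates why to_hash must run in instance context: with `include`
# it is an instance method and can read record state via send(f); with
# `extend` it would land on the class and have no instance data to read.
module ImportExportSketch
  def to_hash(fields = nil)
    fields ||= self.class.internal_fields
    fields.to_h { |f| [f, send(f)] }
  end
end

class SketchTeam
  include ImportExportSketch # instance method, not a class method
  attr_accessor :name, :size

  def self.internal_fields
    %i[name size]
  end
end

team = SketchTeam.new
team.name = 'alpha'
team.size = 3
team.to_hash # => { name: "alpha", size: 3 }
```

Had `SketchTeam` used `extend ImportExportSketch` instead, `to_hash` would only exist on the class, exactly the mismatch the finding describes.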
🧹 Nitpick comments (4)
app/services/skip_record_action.rb (1)
14-15: Mark intentionally-unused parameters explicitly.

Prefixing these args with `_` improves clarity and avoids unused-argument lint noise.

💡 Proposed tweak

- def on_duplicate_record(klass,
-                         records)
+ def on_duplicate_record(_klass,
+                         _records)
    nil
  end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/skip_record_action.rb` around lines 14 - 15, The method on_duplicate_record currently declares parameters klass and records but they're intentionally unused; rename them to _klass and _records (or prefix only the unused ones with an underscore) in the on_duplicate_record method signature to mark them explicitly as intentionally-unused and suppress unused-argument lint warnings while leaving behavior unchanged.

app/models/quiz_item.rb (1)
8-8: Wrap Line 8 for readability and RuboCop compliance.

Line 8 exceeds the configured max length; splitting the association options across lines will keep this clean.

Suggested cleanup

- has_many :quiz_question_choices, class_name: 'QuizQuestionChoice', foreign_key: 'item_id', inverse_of: false, dependent: :nullify
+ has_many :quiz_question_choices,
+          class_name: 'QuizQuestionChoice',
+          foreign_key: 'item_id',
+          inverse_of: false,
+          dependent: :nullify

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/quiz_item.rb` at line 8, The has_many association declaration for :quiz_question_choices is too long; break the options hash onto multiple lines for readability and RuboCop compliance by placing each option (class_name: 'QuizQuestionChoice', foreign_key: 'item_id', inverse_of: false, dependent: :nullify) on its own line aligned under the has_many call (i.e., refactor the has_many :quiz_question_choices line into a multi-line argument list).

app/serializers/user_serializer.rb (1)
35-37: Avoid hardcoding `date_format_pref` to nil.

This field will always serialize as `null` regardless of model state. Either delegate to the persisted preference when it exists, or remove the attribute until the backend can supply a real value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/serializers/user_serializer.rb` around lines 35 - 37, Replace the hardcoded nil in the UserSerializer#date_format_pref method so it either returns the persisted preference from the serialized model (e.g., delegate to object.date_format_pref or a safe accessor like object&.date_format_pref) or remove the attribute from the serializer entirely; update the date_format_pref method (or remove its attribute declaration) so the serializer no longer always emits null.

app/services/questionnaire_package_export_service.rb (1)
56-58: Preload `:instructor` in the export scope.

`questionnaire_rows`, `item_rows`, and `question_advice_rows` all read `questionnaire.instructor&.name`, but this relation only eager-loads items/advice. Large exports will still issue one instructor query per questionnaire.

♻️ Suggested change

- exportable_questionnaires = questionnaire_scope
-                             .includes(items: :question_advices)
+ exportable_questionnaires = questionnaire_scope
+                             .includes(:instructor, items: :question_advices)
                              .order(:id)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/questionnaire_package_export_service.rb` around lines 56 - 58, The export scope currently eager-loads items and question_advices via exportable_questionnaires = questionnaire_scope.includes(items: :question_advices).order(:id) but code paths questionnaire_rows, item_rows, and question_advice_rows call questionnaire.instructor&.name and will trigger an N+1; update the includes on exportable_questionnaires (or questionnaire_scope) to also preload the instructor association (e.g., include :instructor alongside items: :question_advices) so instructor data is fetched in the same query and eliminates per-questionnaire queries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 0a42be59-aa2e-40b1-9a83-cb7112899ea7
⛔ Files ignored due to path filters (12)
- `spec/fixtures/files/empty.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/empty_with_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/import_test.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/multiple_users_no_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/multiple_users_with_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/questionnaire_item_with_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/single_user_email_invalid.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/single_user_no_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/single_user_role_doe_not_exist.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/single_user_with_headers.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/single_user_with_headers_changed.csv` is excluded by `!**/*.csv`
- `spec/fixtures/files/users_duplicate_records.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (58)
- `app/controllers/export_controller.rb`
- `app/controllers/grades_controller.rb`
- `app/controllers/import_controller.rb`
- `app/controllers/participants_controller.rb`
- `app/controllers/project_topics_controller.rb`
- `app/controllers/questionnaire_packages_controller.rb`
- `app/controllers/questionnaires_controller.rb`
- `app/controllers/teams_controller.rb`
- `app/helpers/importable_exportable_helper.rb`
- `app/models/Item.rb`
- `app/models/answer.rb`
- `app/models/assignment_participant.rb`
- `app/models/course.rb`
- `app/models/course_participant.rb`
- `app/models/multiple_choice_checkbox.rb`
- `app/models/multiple_choice_radio.rb`
- `app/models/project_topic.rb`
- `app/models/question_advice.rb`
- `app/models/questionnaire.rb`
- `app/models/quiz_item.rb`
- `app/models/role.rb`
- `app/models/team.rb`
- `app/models/user.rb`
- `app/serializers/project_topic_serializer.rb`
- `app/serializers/team_serializer.rb`
- `app/serializers/user_serializer.rb`
- `app/services/change_offending_field_action.rb`
- `app/services/export.rb`
- `app/services/field_mapping.rb`
- `app/services/import.rb`
- `app/services/questionnaire_package_export_service.rb`
- `app/services/questionnaire_package_import_service.rb`
- `app/services/questionnaire_package_template_service.rb`
- `app/services/skip_record_action.rb`
- `app/services/update_existing_record_action.rb`
- `config/database.yml`
- `config/routes.rb`
- `db/migrate/20250321222753_rename_sign_up_topic_to_project_topic_in_signed_up_teams.rb`
- `db/migrate/20250418004442_change_to_polymorphic_association_in_teams.rb`
- `db/migrate/20250629190818_add_comment_for_submission_to_team.rb`
- `db/migrate/20251022160053_change_invitation_from_id_foreign_key.rb`
- `db/migrate/20251126161701_remove_participant_ref_from_invitations.rb`
- `db/migrate/20251129040855_rename_item_id_in_question_tables.rb`
- `db/migrate/20260313064334_add_submission_fields_to_teams.rb`
- `db/migrate/20260328170000_rename_sign_up_topics_to_project_topics.rb`
- `db/migrate/20260424000000_add_comment_for_submission_to_teams_if_missing.rb`
- `db/schema.rb`
- `db/seeds.rb`
- `spec/helpers/import_export_spec.rb`
- `spec/integration/export_controller_spec.rb`
- `spec/integration/import_controller_spec.rb`
- `spec/models/team_import_export_spec.rb`
- `spec/requests/api/v1/export_spec.rb`
- `spec/requests/api/v1/grades_controller_spec.rb`
- `spec/requests/api/v1/import_spec.rb`
- `spec/requests/api/v1/questionnaires_controller_spec.rb`
- `spec/requests/import_export_requests_spec.rb`
- `spec/requests/questionnaire_packages_spec.rb`
# Load the chosen duplicate action (Skip, Update, Change)
dup_action = params[:dup_action]&.constantize
Stop constantizing `dup_action` from request data.

`params[:dup_action]&.constantize` lets the client choose an arbitrary constant and then instantiate it in the import path. Restrict this to `klass.available_actions_on_duplicate` and instantiate only the matched allowlisted strategy.
🧰 Tools
🪛 Brakeman (8.0.3)
[high] 68-68: Unsafe reflection method constantize called on parameter value
Type: Remote Code Execution
Confidence: High
More info: https://brakemanscanner.org/docs/warning_types/remote_code_execution/
(Remote Code Execution)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/import_controller.rb` around lines 67 - 68, Do not
constantize params[:dup_action]; instead validate it against the allowlist
returned by klass.available_actions_on_duplicate and only instantiate the
matching strategy. Replace the constantize usage in dup_action with a lookup
that finds an entry in klass.available_actions_on_duplicate whose name or
identifier matches params[:dup_action] (or use a safe mapping), and instantiate
that allowlisted class/strategy; if no match is found, fall back to a safe
default or raise a controlled error. Ensure you reference dup_action,
params[:dup_action], and klass.available_actions_on_duplicate when locating and
implementing the change.
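A minimal allowlist sketch of that change (the action class names match the services in this PR, but the registry and `resolve_dup_action` helper are illustrative, not existing code):

```ruby
# Resolve the duplicate-action strategy from a fixed allowlist instead of
# constantizing client input; unknown values raise a controlled error.
class SkipRecordAction; end
class UpdateExistingRecordAction; end
class ChangeOffendingFieldAction; end

ALLOWED_DUP_ACTIONS = {
  'SkipRecordAction'           => SkipRecordAction,
  'UpdateExistingRecordAction' => UpdateExistingRecordAction,
  'ChangeOffendingFieldAction' => ChangeOffendingFieldAction
}.freeze

def resolve_dup_action(param)
  klass = ALLOWED_DUP_ACTIONS[param]
  raise ArgumentError, "Unsupported dup_action: #{param.inspect}" unless klass

  klass.new
end
```

With something like this in place, `params[:dup_action]&.constantize` becomes `resolve_dup_action(params[:dup_action])`, and arbitrary constants such as `Kernel` can no longer be instantiated from request data.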
def try_import_records(file, headers, use_header, defaults = {})
  temp_file = 'output.csv'
  csv_file = CSV.read(file)

  mapping = []

  # ---- Normalize header row ----
  CSV.open(temp_file, "w") do |csv|
    if use_header
      headers = csv_file.shift.map { |h| h.parameterize.underscore }
    else
      headers = headers.map { |header| header.parameterize.underscore }
    end

    mapping = FieldMapping.from_header(self, headers)

    csv << headers
    csv_file.each { |row| csv << row }
  end

  temp_contents = CSV.read(temp_file)
  temp_contents.shift # drop header

  duplicate_records = []

  ActiveRecord::Base.transaction do
    temp_contents.each do |row|
      dup = import_row(row, mapping, defaults)
      duplicate_records << dup if dup && dup != true
    end

  rescue StandardError => e
    puts e.message
    raise ActiveRecord::Rollback
  end

  File.delete(temp_file)
  duplicate_records
end
Replace the shared `output.csv` with a real tempfile.

Every import writes the same filename in the app working directory. Two requests running at once can overwrite each other's normalized CSV, delete the other request's file, or mix rows across users.
Suggested fix
def try_import_records(file, headers, use_header, defaults = {})
- temp_file = 'output.csv'
csv_file = CSV.read(file)
-
- mapping = []
-
- # ---- Normalize header row ----
- CSV.open(temp_file, "w") do |csv|
- if use_header
- headers = csv_file.shift.map { |h| h.parameterize.underscore }
- else
- headers = headers.map { |header| header.parameterize.underscore }
- end
-
- mapping = FieldMapping.from_header(self, headers)
-
- csv << headers
- csv_file.each { |row| csv << row }
- end
-
- temp_contents = CSV.read(temp_file)
- temp_contents.shift # drop header
-
- duplicate_records = []
-
- ActiveRecord::Base.transaction do
- temp_contents.each do |row|
- dup = import_row(row, mapping, defaults)
- duplicate_records << dup if dup && dup != true
- end
-
- rescue StandardError => e
- puts e.message
- raise ActiveRecord::Rollback
+ mapping = []
+ duplicate_records = []
+
+ Tempfile.create(['import', '.csv']) do |temp_file|
+ CSV.open(temp_file.path, "w") do |csv|
+ headers =
+ if use_header
+ csv_file.shift.map { |h| h.parameterize.underscore }
+ else
+ headers.map { |header| header.parameterize.underscore }
+ end
+
+ mapping = FieldMapping.from_header(self, headers)
+ csv << headers
+ csv_file.each { |row| csv << row }
+ end
+
+ temp_contents = CSV.read(temp_file.path)
+ temp_contents.shift
+
+ ActiveRecord::Base.transaction do
+ temp_contents.each do |row|
+ dup = import_row(row, mapping, defaults)
+ duplicate_records << dup if dup && dup != true
+ end
+
+ rescue StandardError => e
+ puts e.message
+ raise ActiveRecord::Rollback
+ end
end
-
- File.delete(temp_file)
duplicate_records
  end

🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 275-275: Expected 1 empty line between method definitions; found 2.
(Layout/EmptyLineBetweenDefs)
[convention] 275-313: Assignment Branch Condition size for try_import_records is too high. [<15, 23, 10> 29.22/20]
(Metrics/AbcSize)
[convention] 275-313: Cyclomatic complexity for try_import_records is too high. [9/7]
(Metrics/CyclomaticComplexity)
[convention] 275-313: Method has too many lines. [27/20]
(Metrics/MethodLength)
[convention] 275-313: Perceived complexity for try_import_records is too high. [10/8]
(Metrics/PerceivedComplexity)
[convention] 283-287: Use the return of the conditional for variable assignment and comparison.
(Style/ConditionalAssignment)
[convention] 296-296: Unnecessary spacing detected.
(Layout/ExtraSpacing)
[convention] 305-306: Extra empty line detected before the rescue.
(Layout/EmptyLinesAroundExceptionHandlingKeywords)
[convention] 306-306: rescue at 306, 6 is not aligned with ActiveRecord::Base.transaction do at 300, 4.
(Layout/RescueEnsureAlignment)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/helpers/importable_exportable_helper.rb` around lines 275 - 313, The
method try_import_records currently uses a fixed filename 'output.csv' causing
race conditions; replace that with a Tempfile (e.g., Tempfile.new or
Tempfile.create) and use its path (temp_file.path) when reading/writing CSV in
CSV.open and CSV.read, ensure you close and unlink the Tempfile in an ensure
block instead of File.delete(temp_file), and adjust any code that treats
temp_file as a String (temp_contents = CSV.read(temp_file) ->
CSV.read(temp_file.path)) so CSV operations and cleanup are safe for concurrent
requests.
def on_duplicate_record(klass, records)
  merged = {}

  existing = records[:existing]

  klass.mandatory_fields.each do |field|
    value = {}
    value[field] = records[:incoming].send(field)
    existing.send(:assign_attributes, value)
  end
Nil values are currently overwriting existing data.
The method comment says updates should skip nil incoming values, but the current assignment always writes them.
🔧 Proposed fix
def on_duplicate_record(klass, records)
- merged = {}
-
existing = records[:existing]
klass.mandatory_fields.each do |field|
- value = {}
- value[field] = records[:incoming].send(field)
- existing.send(:assign_attributes, value)
+ incoming_value = records[:incoming].send(field)
+ next if incoming_value.nil?
+
+ existing.send(:assign_attributes, field => incoming_value)
end
existing
  end

🧰 Tools
🪛 RuboCop (1.86.1)
[warning] 25-25: Useless assignment to variable - merged.
(Lint/UselessAssignment)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/update_existing_record_action.rb` around lines 24 - 33,
on_duplicate_record is overwriting existing attributes with nils because you
always build value[field] = records[:incoming].send(field) and then
assign_attributes; instead, only set keys whose incoming value is non-nil (e.g.
check incoming_val = records[:incoming].send(field) and only add field =>
incoming_val to the value hash if incoming_val != nil) so klass.mandatory_fields
iteration skips nils before calling existing.send(:assign_attributes, value).
username: root
password: expertiza
Do not commit shared database credentials here.
These defaults are inherited by development, test, and production, so this PR checks a plaintext password into the repo and points every environment at the MySQL root account. Move credentials to env/secret management and use a least-privileged app user instead.
Suggested change
default: &default
adapter: mysql2
encoding: utf8mb4
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
port: 3306
- username: root
- password: expertiza
+ username: <%= ENV.fetch("DB_USERNAME") %>
+ password: <%= ENV.fetch("DB_PASSWORD") %>
socket: /var/run/mysqld/mysqld.sock
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/database.yml` around lines 6 - 7, The config file currently hardcodes
shared DB credentials via the username and password entries (affecting
development/test/production); remove these plaintext values and instead load
credentials from environment/secret management (e.g. use environment variables
or a secrets provider for DB username/password) and configure the app to use a
least-privileged app-specific DB user rather than root; update the inherited
environment blocks (development, test, production) to reference the
secret-backed credentials and ensure any deployment/config templates set those
secrets rather than committing them here.
if column_exists?(:teams, :assignment_id)
  remove_reference :teams, :assignment, foreign_key: true
end

# Add polymorphic association fields (type column already exists)
unless column_exists?(:teams, :parent_id)
  add_column :teams, :parent_id, :integer, null: false
end
Migration order can fail and/or drop association data.
parent_id is introduced as null: false without backfill, and assignment_id is removed first. On populated tables this is risky (migration failure or lost mapping).
💡 Safer migration sequence
- if column_exists?(:teams, :assignment_id)
- remove_reference :teams, :assignment, foreign_key: true
- end
-
- unless column_exists?(:teams, :parent_id)
- add_column :teams, :parent_id, :integer, null: false
- end
+ add_column :teams, :parent_id, :integer unless column_exists?(:teams, :parent_id)
+
+ if column_exists?(:teams, :assignment_id)
+ execute <<~SQL
+ UPDATE teams
+ SET parent_id = assignment_id
+ WHERE parent_id IS NULL
+ SQL
+ remove_reference :teams, :assignment, foreign_key: true
+ end
+
+ change_column_null :teams, :parent_id, false if column_exists?(:teams, :parent_id)
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 6-6: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
(Style/IfUnlessModifier)
[convention] 11-11: Use a guard clause (return if column_exists?(:teams, :parent_id)) instead of wrapping the code inside a conditional expression.
(Style/GuardClause)
[convention] 11-11: Favor modifier unless usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
(Style/IfUnlessModifier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/migrate/20250418004442_change_to_polymorphic_association_in_teams.rb`
around lines 6 - 13, The migration currently removes :assignment_id before
backfilling the new polymorphic fields and creates :parent_id as null: false
which will fail on populated rows; instead, first add :parent_id as nullable
(use add_column :teams, :parent_id, :integer, null: true), backfill it from
assignment_id and set the polymorphic type (set parent_type = 'Assignment' and
parent_id = assignment_id) using an UPDATE or Team.reset_column_information +
update_all, then change_column_null :teams, :parent_id, false to enforce NOT
NULL, and only after backfill remove_reference :teams, :assignment, foreign_key:
true; use column_exists? guards around add_column and remove_reference as
already present.
unless column_exists?(:teams, :name)
  add_column :teams, :comment_for_submission, :string
Wrong guard column prevents this migration from adding comment_for_submission.
Line 3 checks :name instead of :comment_for_submission, so this migration will usually no-op.
💡 Proposed fix
- unless column_exists?(:teams, :name)
+ unless column_exists?(:teams, :comment_for_submission)
add_column :teams, :comment_for_submission, :string
end
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 3-3: Use a guard clause (return if column_exists?(:teams, :name)) instead of wrapping the code inside a conditional expression.
(Style/GuardClause)
[convention] 3-3: Favor modifier unless usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
(Style/IfUnlessModifier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/migrate/20250629190818_add_comment_for_submission_to_team.rb` around lines
3 - 4, The guard currently checks column_exists?(:teams, :name) which prevents
the add_column :teams, :comment_for_submission, :string from running; update the
guard to check for :comment_for_submission (i.e., replace the :name symbol with
:comment_for_submission) so the migration will add the comment_for_submission
column when it is missing (or remove the guard altogether if you prefer
unconditional add_column).
change_column :invitations, :from_id, :bigint

# Remove old foreign key to participants
if foreign_key_exists?(:invitations, column: :from_id)
  remove_foreign_key :invitations, column: :from_id
end

# Add new foreign key to teams
unless foreign_key_exists?(:invitations, column: :from_id)
  add_foreign_key :invitations, :teams, column: :from_id
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "*change_invitation_from_id_foreign_key.rb" 2>/dev/null
Repository: expertiza/reimplementation-back-end
Length of output: 148
🏁 Script executed:
git ls-files | grep -i migration | head -20
Repository: expertiza/reimplementation-back-end
Length of output: 61
🏁 Script executed:
cat -n ./db/migrate/20251022160053_change_invitation_from_id_foreign_key.rb
Repository: expertiza/reimplementation-back-end
Length of output: 656
🌐 Web query:
Rails migration change_column foreign key constraint PostgreSQL type change order
💡 Result:
To change a column type in a Rails migration with PostgreSQL while handling foreign key constraints, follow this order: first remove the foreign key constraint(s) referencing the column, then change the column type using change_column (with a USING clause if needed for casting), then re-add the foreign key constraint(s). Example migration:

class ChangeColumnType < ActiveRecord::Migration[7.0]
  def change
    # Remove FKs referencing :posts on :comments table
    remove_foreign_key :comments, :posts if foreign_key_exists?(:comments, :posts)
    # Change type, e.g., string to integer with casting
    change_column :posts, :status_id, :integer, using: 'status_id::integer'
    # Re-add FK
    add_foreign_key :comments, :posts
  end
end

For simple nullability changes on FK columns (change_column_null), no special order is needed beyond ensuring no NULL violations when setting null: false; PostgreSQL validates existing data, and FK constraints remain valid because they reference types, not nullability. If changing the referenced column (e.g., posts.status_id), update all dependent FK columns first or drop and re-add constraints as above. This prevents PostgreSQL errors like "foreign key constraint cannot be implemented" caused by type mismatches (sources 10, 15, 19, 24, 64, 66).
Citations:
- 1: ActiveRecord::Migration rename_column should rename the foreign key constraint as well rails/rails#42259
- 2: https://radanskoric.com/articles/zero-downtime-migration
- 3: https://stackoverflow.com/questions/10753409/how-to-change-column-type-and-all-foreign-keys-in-postgresql
- 4: https://stackoverflow.com/questions/67832841/rails-changing-primary-key-foreign-key-data-type
- 5: https://api.rubyonrails.org/v6.0/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html
- 6: https://www.postgresql.org/docs/current/sql-altertable.htm
- 7: https://stackoverflow.com/questions/10343383/rails-migrations-tried-to-change-the-type-of-column-from-string-to-integer/10343444
- 8: https://guides.rubyonrails.org/migrations.html
- 9: https://guides.rubyonrails.org/v7.1.0/active_record_migrations.html
- 10: https://stackoverflow.com/questions/5966840/how-to-change-a-nullable-column-to-not-nullable-in-a-rails-migration
- 11: https://stackoverflow.com/questions/8954804/change-column-null-for-existing-column
🌐 Web query:
Rails 7.0 remove_foreign_key add_foreign_key if foreign_key_exists syntax validity
💡 Result:
No, the syntax remove_foreign_key ... if foreign_key_exists? ...; add_foreign_key ... is not valid in Rails 7.0 migrations. remove_foreign_key and add_foreign_key are methods called directly on the migration class (via method_missing to the connection), and they do not accept block syntax or conditional if statements as modifiers like that. Attempting remove_foreign_key :table, :ref_table if foreign_key_exists?(:table, :ref_table) works only if placed in up/down methods, not in a reversible change block, as change executes statements sequentially without conditionals. The correct, valid approach in Rails 7.0+ uses the built-in options:

- remove_foreign_key :from_table, :to_table, if_exists: true (skips if the FK does not exist)
- add_foreign_key :from_table, :to_table, if_not_exists: true (skips if the FK already exists)

These options leverage the existing foreign_key_exists? method internally (as shown in the Rails source code). For a reversible migration in change:

def change
  remove_foreign_key :orders, :products, if_exists: true
  add_foreign_key :orders, :users, if_not_exists: true
end

foreign_key_exists?(:from_table, :to_table) is a valid method to check existence manually if needed outside migrations (e.g., in a console or rake task).
Citations:
- 1: https://blog.saeloun.com/2021/07/14/rails-7-if-exists-if-not-exists-on-remove-add-foreign-key/
- 2: https://stackoverflow.com/questions/40467851/how-do-i-drop-a-foreign-key-if-it-exists-in-ruby-on-rails
- 3: https://apidock.com/rails/v7.0.0/ActiveRecord/ConnectionAdapters/SchemaStatements/remove_foreign_key
- 4: https://apidock.com/rails/v7.0.0/ActiveRecord/ConnectionAdapters/SchemaStatements/add_foreign_key
- 5: https://api.rubyonrails.org/v7.0.0/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html
Fix the foreign key ordering and use Rails 7.0 native options.
The migration will fail because change_column runs before FK removal—PostgreSQL prevents type changes on FK-constrained columns. Additionally, the conditional syntax with if/unless modifiers is invalid in change blocks under Rails 7.0; use the built-in if_exists: true and if_not_exists: true options instead.
Correct order:
- Remove foreign key
- Change column type
- Re-add foreign key
Use proper Rails 7.0 syntax:
🔧 Proposed fix
def change
+ # Remove old foreign key first so type change is safe.
+ remove_foreign_key :invitations, :from_id, if_exists: true
+
change_column :invitations, :from_id, :bigint
-
- # Remove old foreign key to participants
- if foreign_key_exists?(:invitations, column: :from_id)
- remove_foreign_key :invitations, column: :from_id
- end
+
# Add new foreign key to teams
- unless foreign_key_exists?(:invitations, column: :from_id)
- add_foreign_key :invitations, :teams, column: :from_id
- end
+ add_foreign_key :invitations, :teams, column: :from_id, if_not_exists: true
end
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 6-6: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
(Style/IfUnlessModifier)
[convention] 10-10: Use a guard clause (return if foreign_key_exists?(:invitations, column: :from_id)) instead of wrapping the code inside a conditional expression.
(Style/GuardClause)
[convention] 10-10: Favor modifier unless usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
(Style/IfUnlessModifier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/migrate/20251022160053_change_invitation_from_id_foreign_key.rb` around
lines 3 - 11, The migration currently calls change_column(:invitations,
:from_id, :bigint) before removing the existing FK and uses conditional
if/unless modifiers; reorder and use Rails 7 options: first call
remove_foreign_key :invitations, column: :from_id with if_exists: true, then
change_column :invitations, :from_id, :bigint, then add_foreign_key
:invitations, :teams, column: :from_id with if_not_exists: true (replace the
foreign_key_exists? checks and the conditional modifiers with these native
options).
Actionable comments posted: 5
🧹 Nitpick comments (3)
app/controllers/grades_controller.rb (1)
45-57: Consider extracting export/CSV construction into a dedicated service. This controller is already large (static analysis flags class length), and adding export formatting logic here increases coupling. Moving export assembly to a service keeps controller responsibilities thinner.
Also applies to: 267-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/grades_controller.rb` around lines 45 - 57, Extract the CSV construction out of the GradesController by creating a dedicated service class (e.g., AssignmentGradesExporter or GradesExportService) that encapsulates grades_csv_for behavior and filename generation; move the logic that builds the CSV and uses include_email_in_grades_export? into that service, expose a simple API like .to_csv(assignment, include_email:), and update the export action to call the service and send_data the returned CSV/filename. Also refactor the other export usage referenced around the other block (lines 267-291) to use the same service, and update or add unit tests for the new service while removing CSV-specific logic from the controller.
app/controllers/export_controller.rb (1)
42-64: Context dispatch is duplicated across export and metadata paths. Both methods repeat class-based branching for context wrappers. Extract a shared helper (e.g., with_export_context(klass) { ... }) to cut complexity and reduce drift.
Also applies to: 81-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/export_controller.rb` around lines 42 - 64, Duplicate class-based context dispatch appears in export_controller for CSV and metadata paths; extract a helper method (e.g., with_export_context(klass, params, current_user) { |blk| ... }) that centralizes wrapping logic for Team.with_assignment_context, AssignmentParticipant.with_assignment_context, CourseParticipant.with_course_context, ProjectTopic.with_assignment_context, and the default Export.perform path, and call Export.perform or named_export_files inside that block; ensure the helper accepts a block so existing calls to Export.perform(klass, ordered_fields) and named_export_files(Export.perform(...), assignment_participant_export_name(...)/course_participant_export_name(...)) can be invoked within the same context wrapper to remove the duplicated branching used in the current csv_file and metadata generation code.
app/services/export.rb (1)
28-43: Repeated field filtering inside the record loop is avoidable. Lines 28 and 38 repeatedly scan mapping.ordered_fields / external_class.fields for every record. Precompute field routing once before iterating records to reduce per-row overhead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/export.rb` around lines 28 - 43, Precompute the field selections and external-class field mappings before iterating records to avoid repeated scanning of mapping.ordered_fields and external_class.fields per row: compute class_fields = mapping.ordered_fields.select { |ele| export_class.internal_fields.include?(ele) } (already present) and build an array like external_mappings = Array(export_class.external_classes).map { |ext| [ext, mapping.ordered_fields.select { |ele| ext.fields.include?(ele) }, ext.ref_class.name.underscore] } then iterate export_class.filter.call and for each record use the precomputed external_mappings and ExternalClass.unappended_class_name to append external values; update references to ExternalClass.unappended_class_name(external_class.ref_class, f) to use the precomputed mapping so no selects happen inside the record loop.
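A generic sketch of the hoisting this nitpick suggests, using plain Structs in place of the real models — every name here is illustrative, not the service's actual API:

```ruby
# Illustrative only: decide once, before the loop, whether each ordered
# field resolves on the record itself or on an associated object.
Record  = Struct.new(:name, :email, :profile, keyword_init: true)
Profile = Struct.new(:bio, keyword_init: true)

ordered_fields  = %w[name bio email]
internal_fields = %w[name email]

# Precomputed routing: one scan of the field lists, not one per row.
routing = ordered_fields.map do |field|
  internal_fields.include?(field) ? [:internal, field] : [:external, field]
end

def row_for(record, routing)
  routing.map do |kind, field|
    kind == :internal ? record.send(field) : record.profile&.send(field)
  end
end

records = [
  Record.new(name: "ada", email: "ada@example.com", profile: Profile.new(bio: "math")),
  Record.new(name: "bob", email: "bob@example.com", profile: nil)
]
rows = records.map { |r| row_for(r, routing) }
```

The per-row work is now a single map over the precomputed routing, so the cost of selecting fields is paid once regardless of record count.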
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/export_controller.rb`:
- Around line 25-27: The rescue blocks that currently do `rescue StandardError
=> e` and `render json: { error: e.message }, status: :unprocessable_entity`
should not return raw exception messages to clients; update these rescue
handlers (both occurrences of `rescue StandardError => e` in ExportController)
to render a generic error JSON (e.g., `{ error: "An error occurred" }`) with the
same status, and instead log the full exception and backtrace server-side using
Rails.logger.error or logger.error (include e.class, e.message, and
e.backtrace.join("\n")). Ensure both the rescue at the top (lines shown around
the first diff) and the second rescue (around lines 71-73) are changed
consistently.
- Around line 20-24: The export endpoint currently allows branches for scoped
classes to run with blank IDs; update the index flow to validate required scope
params before calling export_metadata_for: after resolving klass via
resolve_export_class(params[:class]) add explicit presence checks that raise
ArgumentError when missing — require params[:assignment_id] for klasses Team,
AssignmentParticipant, and ProjectTopic, and require params[:course_id] for
klass CourseParticipant; ensure similar validations are added to the other
export entry points/branches mentioned (lines covering the other ranges) so the
same classes cannot proceed without their required IDs.
- Around line 31-37: After parsing params[:ordered_fields] with JSON.parse into
ordered_fields, ensure the result is an Array before continuing: if
ordered_fields is nil skip, but if present and not an Array (use
ordered_fields.is_a?(Array)), render json: { error: "ordered_fields must be a
JSON array" }, status: :unprocessable_entity and return; adjust the rescue block
around JSON.parse (the existing rescue JSON::ParserError) to also validate type
after parse so only a proper Array flows into the export mapping logic.
In `@app/controllers/grades_controller.rb`:
- Around line 277-284: The CSV export writes user-controlled fields
(participant.user_name, participant.grade or team&.grade_for_submission,
team&.comment_for_submission, and participant.user&.email) directly into CSV
rows, which risks spreadsheet formula injection; fix by sanitizing each cell
before appending to row: add a helper (e.g., sanitize_csv_cell) used where the
row is constructed to return nil for nil values, leave non-string values
untouched, and for strings that begin with "=", "+", "-", or "@" prefix the
string with a single quote (') or otherwise escape the leading char; apply this
sanitizer to participant.user_name, the grade/comment values, and
participant.user&.email before csv << row.
In `@app/services/export.rb`:
- Around line 31-45: The CSV header/value misalignment occurs because
ordered_headers is written as a single ordered list but rows are built by
concatenating all internal class_fields first then each external_class's fields;
instead iterate mapping.ordered_fields (which corresponds to ordered_headers) to
build each row in that exact order: for each field in mapping.ordered_fields, if
the field exists in class_fields call record.send(field), otherwise find the
external_class that includes the field, fetch
record.send(external_class.ref_class.name.underscore) and call
ExternalClass.unappended_class_name(external_class.ref_class, field) on that
found_record (safe-nil checks as currently used) so each CSV row cell aligns
with ordered_headers.
---
Nitpick comments:
In `@app/controllers/export_controller.rb`:
- Around line 42-64: Duplicate class-based context dispatch appears in
export_controller for CSV and metadata paths; extract a helper method (e.g.,
with_export_context(klass, params, current_user) { |blk| ... }) that centralizes
wrapping logic for Team.with_assignment_context,
AssignmentParticipant.with_assignment_context,
CourseParticipant.with_course_context, ProjectTopic.with_assignment_context, and
the default Export.perform path, and call Export.perform or named_export_files
inside that block; ensure the helper accepts a block so existing calls to
Export.perform(klass, ordered_fields) and
named_export_files(Export.perform(...),
assignment_participant_export_name(...)/course_participant_export_name(...)) can
be invoked within the same context wrapper to remove the duplicated branching
used in the current csv_file and metadata generation code.
In `@app/controllers/grades_controller.rb`:
- Around line 45-57: Extract the CSV construction out of the GradesController by
creating a dedicated service class (e.g., AssignmentGradesExporter or
GradesExportService) that encapsulates grades_csv_for behavior and filename
generation; move the logic that builds the CSV and uses
include_email_in_grades_export? into that service, expose a simple API like
.to_csv(assignment, include_email:), and update the export action to call the
service and send_data the returned CSV/filename. Also refactor the other export
usage referenced around the other block (lines 267-291) to use the same service,
and update or add unit tests for the new service while removing CSV-specific
logic from the controller.
In `@app/services/export.rb`:
- Around line 28-43: Precompute the field selections and external-class field
mappings before iterating records to avoid repeated scanning of
mapping.ordered_fields and external_class.fields per row: compute class_fields =
mapping.ordered_fields.select { |ele| export_class.internal_fields.include?(ele)
} (already present) and build an array like external_mappings =
Array(export_class.external_classes).map { |ext| [ext,
mapping.ordered_fields.select { |ele| ext.fields.include?(ele) },
ext.ref_class.name.underscore] } then iterate export_class.filter.call and for
each record use the precomputed external_mappings and
ExternalClass.unappended_class_name to append external values; update references
to ExternalClass.unappended_class_name(external_class.ref_class, f) to use the
precomputed mapping so no selects happen inside the record loop.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 665b6f55-68d9-446b-be80-5839ba2f559c
📒 Files selected for processing (3)
app/controllers/export_controller.rb
app/controllers/grades_controller.rb
app/services/export.rb
def index
  klass = resolve_export_class(params[:class])
  raise ArgumentError, "Unsupported export class: #{params[:class]}" if klass.nil?

  render json: export_metadata_for(klass), status: :ok
Missing required scope IDs can lead to unintended export scope.
For Team, AssignmentParticipant, and ProjectTopic, assignment_id should be required; for CourseParticipant, course_id should be required. Right now these branches execute even when IDs are blank.
Suggested fix
def index
klass = resolve_export_class(params[:class])
raise ArgumentError, "Unsupported export class: #{params[:class]}" if klass.nil?
+ validate_scope_params!(klass)
render json: export_metadata_for(klass), status: :ok
rescue StandardError => e
render json: { error: e.message }, status: :unprocessable_entity
end
@@
def export
@@
klass = resolve_export_class(params[:class])
raise ArgumentError, "Unsupported export class: #{params[:class]}" if klass.nil?
+ validate_scope_params!(klass)
@@
private
+
+ def validate_scope_params!(klass)
+ if [Team, AssignmentParticipant, ProjectTopic].include?(klass) && params[:assignment_id].blank?
+ raise ArgumentError, "assignment_id is required for #{klass.name}"
+ end
+ if klass == CourseParticipant && params[:course_id].blank?
+ raise ArgumentError, "course_id is required for CourseParticipant"
+ end
+ endAlso applies to: 39-64, 84-122
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 23-23: Trailing whitespace detected.
(Layout/TrailingWhitespace)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/export_controller.rb` around lines 20 - 24, The export
endpoint currently allows branches for scoped classes to run with blank IDs;
update the index flow to validate required scope params before calling
export_metadata_for: after resolving klass via
resolve_export_class(params[:class]) add explicit presence checks that raise
ArgumentError when missing — require params[:assignment_id] for klasses Team,
AssignmentParticipant, and ProjectTopic, and require params[:course_id] for
klass CourseParticipant; ensure similar validations are added to the other
export entry points/branches mentioned (lines covering the other ranges) so the
same classes cannot proceed without their required IDs.
rescue StandardError => e
  render json: { error: e.message }, status: :unprocessable_entity
end
Raw exception messages are returned to clients.
These rescues expose internal error details via e.message. Return a generic message and log the exception server-side.
Also applies to: 71-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/export_controller.rb` around lines 25 - 27, The rescue blocks
that currently do `rescue StandardError => e` and `render json: { error:
e.message }, status: :unprocessable_entity` should not return raw exception
messages to clients; update these rescue handlers (both occurrences of `rescue
StandardError => e` in ExportController) to render a generic error JSON (e.g.,
`{ error: "An error occurred" }`) with the same status, and instead log the full
exception and backtrace server-side using Rails.logger.error or logger.error
(include e.class, e.message, and e.backtrace.join("\n")). Ensure both the rescue
at the top (lines shown around the first diff) and the second rescue (around
lines 71-73) are changed consistently.
ordered_fields =
  begin
    JSON.parse(params[:ordered_fields]) if params[:ordered_fields]
  rescue JSON::ParserError
    render json: { error: "Invalid JSON for ordered_fields" }, status: :unprocessable_entity
    return
  end
Validate ordered_fields type after JSON parse.
Line 33 accepts any JSON value. Non-array input (object/string/number) can flow into export mapping and produce invalid headers/rows or runtime errors.
Suggested fix
ordered_fields =
begin
JSON.parse(params[:ordered_fields]) if params[:ordered_fields]
rescue JSON::ParserError
render json: { error: "Invalid JSON for ordered_fields" }, status: :unprocessable_entity
return
end
+
+ if ordered_fields && (!ordered_fields.is_a?(Array) || ordered_fields.any? { |f| !f.is_a?(String) })
+ render json: { error: "ordered_fields must be a JSON array of strings" }, status: :unprocessable_entity
+ return
+ end🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/export_controller.rb` around lines 31 - 37, After parsing
params[:ordered_fields] with JSON.parse into ordered_fields, ensure the result
is an Array before continuing: if ordered_fields is nil skip, but if present and
not an Array (use ordered_fields.is_a?(Array)), render json: { error:
"ordered_fields must be a JSON array" }, status: :unprocessable_entity and
return; adjust the rescue block around JSON.parse (the existing rescue
JSON::ParserError) to also validate type after parse so only a proper Array
flows into the export mapping logic.
row = [
  participant.user_name,
  participant.grade || team&.grade_for_submission,
  team&.comment_for_submission
]
row << participant.user&.email if include_email

csv << row
Prevent CSV formula injection in export rows.
Line 278/280/282 write user-controlled text directly into CSV cells. Spreadsheet clients can evaluate cells starting with =, +, -, or @.
🔒 Proposed fix
def grades_csv_for(assignment, include_email: false)
headers = GRADES_EXPORT_HEADERS + (include_email ? GRADES_EXPORT_OPTIONAL_HEADERS : [])
CSV.generate(headers: true) do |csv|
csv << headers
assignment.participants.includes(:user).find_each do |participant|
team = participant.team
row = [
- participant.user_name,
+ sanitize_csv_cell(participant.user_name),
participant.grade || team&.grade_for_submission,
- team&.comment_for_submission
+ sanitize_csv_cell(team&.comment_for_submission)
]
- row << participant.user&.email if include_email
+ row << sanitize_csv_cell(participant.user&.email) if include_email
csv << row
end
end
end
+
+def sanitize_csv_cell(value)
+ text = value.to_s
+ text.match?(/\A[=\-+@]/) ? "'#{text}" : text
+end
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 278-278: Use 2 spaces for indentation in an array, relative to the start of the line where the left square bracket is.
(Layout/FirstArrayElementIndentation)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/grades_controller.rb` around lines 277 - 284, The CSV export
writes user-controlled fields (participant.user_name, participant.grade or
team&.grade_for_submission, team&.comment_for_submission, and
participant.user&.email) directly into CSV rows, which risks spreadsheet formula
injection; fix by sanitizing each cell before appending to row: add a helper
(e.g., sanitize_csv_cell) used where the row is constructed to return nil for
nil values, leave non-string values untouched, and for strings that begin with
"=", "+", "-", or "@" prefix the string with a single quote (') or otherwise
escape the leading char; apply this sanitizer to participant.user_name, the
grade/comment values, and participant.user&.email before csv << row.
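The escaping rule can be exercised standalone; this sketch mirrors the hypothetical sanitize_csv_cell helper named in the proposed fix, under the same assumptions:

```ruby
# Sketch of the proposed guard: prefix cells a spreadsheet client would
# evaluate as formulas (leading =, +, -, or @) with a single quote.
def sanitize_csv_cell(value)
  text = value.to_s
  text.match?(/\A[=\-+@]/) ? "'#{text}" : text
end
```

Note that nil values become empty strings here; if nil must survive round-trips, return nil early instead.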
csv << ordered_headers

# Insert each scoped model record in the same selected field order.
export_class.filter.call.each do |record|
  row = class_fields.map { |f| record.send(f) }

  Array(export_class.external_classes).each do |external_class|
    ext_class_fields = mapping.ordered_fields.select { |ele| external_class.fields.include?(ele) }
    found_record = record.send(external_class.ref_class.name.underscore)
    row += ext_class_fields.map do |f|
      found_record&.send(ExternalClass.unappended_class_name(external_class.ref_class, f)) if f
    end
  end

  csv << row
Header/value alignment can break for mixed field order.
Line 31 writes raw ordered_headers, but Lines 35-43 build rows as “all internal first, then external by class.” If frontend order is mixed (e.g., external/internal/external), CSV columns no longer match headers.
Suggested fix
- # Preserve the selected frontend field order in the CSV header.
- csv << ordered_headers
+ # Use validated/mapped header order to keep header/value alignment.
+ csv << mapping.ordered_fields
# Insert each scoped model record in the same selected field order.
export_class.filter.call.each do |record|
- row = class_fields.map { |f| record.send(f) }
-
- Array(export_class.external_classes).each do |external_class|
- ext_class_fields = mapping.ordered_fields.select { |ele| external_class.fields.include?(ele) }
- found_record = record.send(external_class.ref_class.name.underscore)
- row += ext_class_fields.map do |f|
- found_record&.send(ExternalClass.unappended_class_name(external_class.ref_class, f)) if f
- end
- end
+ row = mapping.ordered_fields.map do |field|
+ if export_class.internal_fields.include?(field)
+ record.send(field)
+ else
+ external_class = Array(export_class.external_classes).find { |ec| ec.fields.include?(field) }
+ next nil unless external_class
+
+ association_name = external_class.ref_class.name.underscore
+ found_record = record.respond_to?(association_name) ? record.send(association_name) : nil
+ found_record&.send(ExternalClass.unappended_class_name(external_class.ref_class, field))
+ end
+ end
csv << row
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/export.rb` around lines 31 - 45, The CSV header/value
misalignment occurs because ordered_headers is written as a single ordered list
but rows are built by concatenating all internal class_fields first then each
external_class's fields; instead iterate mapping.ordered_fields (which
corresponds to ordered_headers) to build each row in that exact order: for each
field in mapping.ordered_fields, if the field exists in class_fields call
record.send(field), otherwise find the external_class that includes the field,
fetch record.send(external_class.ref_class.name.underscore) and call
ExternalClass.unappended_class_name(external_class.ref_class, field) on that
found_record (safe-nil checks as currently used) so each CSV row cell aligns
with ordered_headers.
…ent ID as mandatory for topics import and export
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/models/team.rb`:
- Around line 24-30: method_missing currently converts any non-numeric suffix
into -1 via to_i - 1, which can return the last participant for inputs like
"participant_0" or "participant_foo"; change the parsing in method_missing to
only accept strictly positive numeric suffixes by extracting the suffix with
delete_prefix(TEAM_PARTICIPANT_COLUMN_PREFIX), validating it matches /^\d+$/ and
converting to integer only on match (then subtract 1 and check bounds) before
returning participants[index]&.user&.name; apply the same numeric-only
validation and bounds check to the corresponding code at lines 33-35 (the other
dynamic participant accessor).
- Around line 249-253: The CSV-to-field alignment is broken when unknown/ignored
headers exist because rows are currently built from csv_table.map(&:fields) and
later zipped with mapping.ordered_fields in import_team_row; update the
row-building logic so that when use_header is true you first map CSV headers to
mapping.ordered_fields (filtering and reordering fields to exactly match
mapping.ordered_fields) and then build each row as an array of values in that
mapped order before calling import_team_row; also handle the no-header case by
ensuring rows are padded/truncated to the length of mapping.ordered_fields so
the existing zip in import_team_row or wherever
mapping.ordered_fields.zip(row_values) always matches the intended keys. Ensure
you reference csv_table, use_header, mapping.ordered_fields, and import_team_row
when making the changes.
- Around line 232-234: The export currently triggers an extra DB query per team
by calling team.participants.order(:id); instead, use the already-preloaded
association to avoid N+1s — replace that call with a non-querying in-memory sort
like team.participants.to_a.sort_by(&:id) (or alter export_scope to include
ordered participants via includes(participants: -> { order(:id)
}).includes(participants: :user)) and pass the resulting array into
TeamExportRow.new(team, ...).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: fd0ee972-aea2-43ea-a0e0-d44e2b35ba30
📒 Files selected for processing (4)
app/models/project_topic.rb
app/models/team.rb
spec/models/team_import_export_spec.rb
spec/requests/import_export_requests_spec.rb
def method_missing(method_name, *_args)
  method = method_name.to_s
  return super unless method.start_with?(TEAM_PARTICIPANT_COLUMN_PREFIX)

  index = method.delete_prefix(TEAM_PARTICIPANT_COLUMN_PREFIX).to_i - 1
  participants[index]&.user&.name
end
participant_0 or non-numeric dynamic fields can return the wrong participant.
Right now, to_i - 1 turns invalid suffixes into -1, which can read the last participant unexpectedly. Tighten parsing to numeric-only positive suffixes.
Suggested fix
def method_missing(method_name, *_args)
method = method_name.to_s
- return super unless method.start_with?(TEAM_PARTICIPANT_COLUMN_PREFIX)
+ return super unless method.match?(/\A#{TEAM_PARTICIPANT_COLUMN_PREFIX}\d+\z/)
index = method.delete_prefix(TEAM_PARTICIPANT_COLUMN_PREFIX).to_i - 1
+ return nil if index.negative?
participants[index]&.user&.name
end
def respond_to_missing?(method_name, include_private = false)
- method_name.to_s.start_with?(TEAM_PARTICIPANT_COLUMN_PREFIX) || super
+ method_name.to_s.match?(/\A#{TEAM_PARTICIPANT_COLUMN_PREFIX}\d+\z/) || super
end

Also applies to: 33-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/team.rb` around lines 24 - 30, method_missing currently converts
any non-numeric suffix into -1 via to_i - 1, which can return the last
participant for inputs like "participant_0" or "participant_foo"; change the
parsing in method_missing to only accept strictly positive numeric suffixes by
extracting the suffix with delete_prefix(TEAM_PARTICIPANT_COLUMN_PREFIX),
validating it matches /^\d+$/ and converting to integer only on match (then
subtract 1 and check bounds) before returning participants[index]&.user&.name;
apply the same numeric-only validation and bounds check to the corresponding
code at lines 33-35 (the other dynamic participant accessor).
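The failure mode is pure string/integer coercion and can be demonstrated without ActiveRecord. In this sketch `PREFIX` and the participant list are stand-ins for `TEAM_PARTICIPANT_COLUMN_PREFIX` and the preloaded association; `index_for` implements the numeric-suffix guard the review asks for:

```ruby
PREFIX = 'participant_'
participants = %w[alice bob carol]

# Accept only strictly positive numeric suffixes; anything else returns nil.
def index_for(method, prefix)
  suffix = method.delete_prefix(prefix)
  return nil unless suffix.match?(/\A\d+\z/) && suffix.to_i.positive?

  suffix.to_i - 1
end

# Unguarded: 'participant_foo'.delete_prefix(PREFIX).to_i - 1 == -1,
# i.e. Array indexing silently reads the LAST participant ('carol').
unguarded = participants['participant_foo'.delete_prefix(PREFIX).to_i - 1]
```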
export_scope.includes(participants: :user).map do |team|
  TeamExportRow.new(team, team.participants.order(:id).to_a)
end
Export currently reintroduces an N+1 query on participants.
export_scope.includes(participants: :user) is good, but team.participants.order(:id) issues an extra query per team. This can degrade large exports.
Suggested fix
def export_rows
export_scope.includes(participants: :user).map do |team|
- TeamExportRow.new(team, team.participants.order(:id).to_a)
+ TeamExportRow.new(team, team.participants.sort_by(&:id))
end
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/team.rb` around lines 232 - 234, The export currently triggers an
extra DB query per team by calling team.participants.order(:id); instead, use
the already-preloaded association to avoid N+1s — replace that call with a
non-querying in-memory sort like team.participants.to_a.sort_by(&:id) (or alter
export_scope to include ordered participants via includes(participants: -> {
order(:id) }).includes(participants: :user)) and pass the resulting array into
TeamExportRow.new(team, ...).
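The in-memory replacement is an ordinary `Enumerable#sort_by`, which never touches the database. A tiny sketch with a `Struct` standing in for a loaded `Participant`:

```ruby
# Struct stands in for an eager-loaded Participant record; sorting the
# already-materialized array replaces the per-team ORDER BY query.
Participant = Struct.new(:id, :name)
loaded  = [Participant.new(3, 'c'), Participant.new(1, 'a'), Participant.new(2, 'b')]
ordered = loaded.sort_by(&:id)
```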
rows = use_header ? csv_table.map(&:fields) : csv_table

ActiveRecord::Base.transaction do
  rows.each do |row|
    import_team_row(row, mapping, defaults)
CSV value mapping can shift into wrong fields when unknown headers are present.
Line 249 builds row arrays, then Lines 273-275 zip mapping.ordered_fields with full row values. If a CSV has ignored columns (especially before participant_1), values can be assigned to the wrong keys.
Suggested fix
def try_import_records(file, headers, use_header, defaults = {})
csv_table = CSV.read(file, headers: use_header)
normalized_headers =
if use_header
csv_table.headers.map { |header| header.to_s.parameterize.underscore }
else
Array(headers).map { |header| header.to_s.parameterize.underscore }
- end
+ end
mapping = FieldMapping.from_header(self, normalized_headers)
validate_import_mapping!(mapping)
- rows = use_header ? csv_table.map(&:fields) : csv_table
+ rows =
+ if use_header
+ csv_table.map { |csv_row| normalized_headers.zip(csv_row.fields).to_h }
+ else
+ csv_table.map { |row| mapping.ordered_fields.zip(row).to_h }
+ end
ActiveRecord::Base.transaction do
rows.each do |row|
import_team_row(row, mapping, defaults)
end
end
@@
def import_team_row(row, mapping, defaults)
- row_hash = {}
- mapping.ordered_fields.zip(row).each do |key, value|
- row_hash[key] = value
- end
+ row_hash = row.slice(*mapping.ordered_fields)
    validate_import_row!(row_hash)

Also applies to: 271-275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/team.rb` around lines 249 - 253, The CSV-to-field alignment is
broken when unknown/ignored headers exist because rows are currently built from
csv_table.map(&:fields) and later zipped with mapping.ordered_fields in
import_team_row; update the row-building logic so that when use_header is true
you first map CSV headers to mapping.ordered_fields (filtering and reordering
fields to exactly match mapping.ordered_fields) and then build each row as an
array of values in that mapped order before calling import_team_row; also handle
the no-header case by ensuring rows are padded/truncated to the length of
mapping.ordered_fields so the existing zip in import_team_row or wherever
mapping.ordered_fields.zip(row_values) always matches the intended keys. Ensure
you reference csv_table, use_header, mapping.ordered_fields, and import_team_row
when making the changes.
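The column-shift can be shown with the stdlib `CSV` class alone. Below, `notes` plays the role of a header the mapping drops (all names are hypothetical): zipping the kept fields against the full row pairs values by position, while looking values up by header name keeps them aligned.

```ruby
require 'csv'

# 'notes' is an ignored column sitting before a kept one.
csv_text = "team_name,notes,participant_1\nalpha,ignore me,alice\n"
ordered_fields = %w[team_name participant_1] # what the mapping kept

table = CSV.parse(csv_text, headers: true)

# Buggy: positional zip against the FULL row shifts 'ignore me'
# into participant_1.
buggy = ordered_fields.zip(table.first.fields).to_h

# Fixed: fetch each kept field by header name via CSV::Row#[].
fixed = ordered_fields.to_h { |f| [f, table.first[f]] }
```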
…on advices and update related metadata; adjust import/export specs to reflect changes in required fields.
Actionable comments posted: 8
♻️ Duplicate comments (4)
app/models/team.rb (3)
239-249: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Keep row values aligned with the filtered header mapping.
`FieldMapping.from_header` can drop unknown CSV headers, but `rows = csv_table.map(&:fields)` still keeps every column and `mapping.ordered_fields.zip(row)` later assumes the row has already been filtered/reordered. Extra columns before `participant_1` will shift usernames into the wrong fields.
Also applies to: 271-275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/team.rb` around lines 239 - 249, The rows are not being filtered/reordered to match the header mapping returned by FieldMapping.from_header, so mapping.ordered_fields.zip(row) can pair wrong values; update the rows construction (where rows = csv_table.map(&:fields)) to map each csv_table row into an array of values selected and ordered according to the mapping (use mapping.ordered_fields or mapping.ordered_field_names/headers to drive selection), ensuring unknown columns are dropped and remaining columns are in the same order as mapping.ordered_fields; make the same change in the second occurrence around the 271-275 block so all imports use the mapping-aligned rows.
24-35: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Reject non-numeric `participant_` accessors.
`participant_0` and `participant_foo` both become index `-1` because `to_i` returns `0` on invalid suffixes. That can expose the last participant instead of returning `nil`/`super`.
Suggested fix
def method_missing(method_name, *_args)
  method = method_name.to_s
- return super unless method.start_with?(TEAM_PARTICIPANT_COLUMN_PREFIX)
+ return super unless method.match?(/\A#{TEAM_PARTICIPANT_COLUMN_PREFIX}\d+\z/)
  index = method.delete_prefix(TEAM_PARTICIPANT_COLUMN_PREFIX).to_i - 1
+ return nil if index.negative?
  participants[index]&.user&.name
end

def respond_to_missing?(method_name, include_private = false)
- method_name.to_s.start_with?(TEAM_PARTICIPANT_COLUMN_PREFIX) || super
+ method_name.to_s.match?(/\A#{TEAM_PARTICIPANT_COLUMN_PREFIX}\d+\z/) || super
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/team.rb` around lines 24 - 35, method_missing currently converts any non-numeric suffix to 0 so calls like participant_0 or participant_foo map to index -1; change method_missing to only handle accessors whose suffix is a positive integer: extract the suffix with a regex (e.g. method_name.to_s.match(/^#{TEAM_PARTICIPANT_COLUMN_PREFIX}(\d+)$/)), return super unless there is a match and the captured number > 0, then compute index = captured_number.to_i - 1 and return participants[index]&.user&.name; likewise update respond_to_missing? to only return true when the method_name matches that same numeric-pattern check instead of any name starting with TEAM_PARTICIPANT_COLUMN_PREFIX.
231-234: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Avoid re-querying `participants` after preloading them.
`export_scope.includes(participants: :user)` is already eager-loading the association, but `team.participants.order(:id)` issues a fresh query per team. That turns large exports back into an N+1.
Suggested fix
def export_rows
  export_scope.includes(participants: :user).map do |team|
-   TeamExportRow.new(team, team.participants.order(:id).to_a)
+   TeamExportRow.new(team, team.participants.to_a.sort_by(&:id))
  end
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/team.rb` around lines 231 - 234, The export_rows method is triggering N+1 queries by calling team.participants.order(:id) after export_scope.includes(participants: :user) preloaded participants; replace that re-query with the already-loaded association (e.g. use team.participants.to_a.sort_by(&:id) or team.participants.to_a.sort_by { |p| p.id }) when constructing TeamExportRow so you rely on the eager-loaded participants and preserve ID ordering without issuing new SQL per team.

app/helpers/importable_exportable_helper.rb (1)
275-312: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win
Replace the shared `output.csv` with a real tempfile.
Every import writes the same filename in the app working directory. Concurrent requests can overwrite each other's normalized CSV, read the wrong rows, or delete another request's file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/helpers/importable_exportable_helper.rb` around lines 275 - 312, The method try_import_records uses a fixed filename 'output.csv' which causes race conditions; replace that with a Tempfile (e.g., via Tempfile.new or Tempfile.create) and use its path when opening/writing/reading the normalized CSV in try_import_records, ensure you close/flush the Tempfile before re-reading (use temp.path for CSV.read), and ensure it is unlinked/closed in an ensure block (or rely on Tempfile's automatic cleanup) instead of File.delete('output.csv').
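A per-request tempfile removes the shared-filename race entirely. The sketch below is a simplified stand-in for the helper's normalize-write-reread cycle (the method name and row shape are invented), built on the stdlib `Tempfile.create`, which hands each caller a unique path and unlinks it when the block exits:

```ruby
require 'tempfile'
require 'csv'

# Each call gets its own temp path, so concurrent imports cannot clobber
# one another the way a fixed 'output.csv' in the working directory can.
def with_normalized_csv(rows)
  Tempfile.create(['normalized', '.csv']) do |temp|
    CSV.open(temp.path, 'w') { |csv| rows.each { |row| csv << row } }
    yield CSV.read(temp.path)
  end # Tempfile.create deletes the file when the block returns
end
```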
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/questionnaire_packages_controller.rb`:
- Around line 38-39: In questionnaire_packages_controller.rb replace the broad
"rescue StandardError => e" handlers that render json: { error: e.message } with
targeted exception handling: catch specific domain/validation exceptions (e.g.,
ActiveRecord::RecordNotFound, ActiveRecord::RecordInvalid or your custom errors)
and map them to stable public messages, log the original exception and backtrace
server-side (using Rails.logger.error or similar) inside the rescue, and for any
remaining unexpected errors use a generic public message like "An internal error
occurred" while still logging details; update the rescue blocks located in the
controller actions (the existing rescue lines) to follow this pattern so clients
never receive raw exception strings.
- Around line 129-137: In parse_questionnaire_ids, ensure the JSON.parse(ids)
result is an Array before returning it: after calling JSON.parse(ids) (in the
parse_questionnaire_ids method) check that the parsed value is an Array and
return it only if so; otherwise return an empty array (or convert/filter values
to an array of IDs as appropriate) and keep the existing rescue
JSON::ParserError to return []. This guarantees parse_questionnaire_ids always
returns an array of IDs rather than objects/strings/numbers.
- Around line 109-118: The params permit list in questionnaire_package_params is
missing template_name, causing valid template requests to be treated as
unpermitted; update the params.permit call in questionnaire_packages_controller
(the method questionnaire_package_params) to include :template_name so
params[:template_name] is allowed alongside :package_file, :questionnaire_file,
etc.
In `@app/helpers/importable_exportable_helper.rb`:
- Around line 289-295: FieldMapping.from_header is dropping unknown headers but
the import loop still zips mapping.ordered_fields with the full CSV row which
shifts values when ignored columns exist; update the import path so rows are
filtered to only the columns that FieldMapping kept before zipping — e.g.,
derive the kept column indexes from FieldMapping.from_header (or from
mapping.headers / mapping.ordered_fields) and map each csv_file row to
row.values_at(*kept_indexes) (or rebuild csv rows using the filtered headers
written earlier) before zipping with mapping.ordered_fields in the import loop
and for the block around csv << headers / csv_file.each to ensure header/value
alignment.
- Around line 347-359: The second loop currently always calls
create_external_class even when look_up_external_class already attached an
existing record; update the logic so after the first pass
(external_classes&.each { |external_class| look_up_external_class(row_hash,
external_class, created_object) }) you only call create_external_class for
external_class entries whose association is still unset on created_object. In
practice, inside the external_classes.each loop replace the unconditional
create_external_class call with a guard that checks the corresponding
association on created_object (or a flag set by look_up_external_class) and only
invokes create_external_class(row_hash, external_class, created_object) when the
lookup did not resolve that association; keep the existing duplicate/save_object
checks unchanged.
In `@app/models/team.rb`:
- Line 46: The has_many :sent_invitations on Team is using foreign_key:
'from_id' which is wrong because Invitation#from_id points to
AssignmentParticipant, not Team; change Team's association to fetch invitations
through its participants (e.g. use has_many :participants or has_many
:assignment_participants then has_many :sent_invitations, through:
:assignment_participants, source: :sent_invitations) so Team.id is not matched
against participant IDs, and remove or relocate dependent: :destroy from this
incorrect association to the participant-level association if you need cascade
deletion; update references to the association name (sent_invitations)
accordingly.
In `@app/services/questionnaire_package_import_service.rb`:
- Around line 168-182: The validate_package! method currently only checks
package_type and version but must also verify that any files advertised by the
manifest actually exist in the ZIP entries; update validate_package! to inspect
the manifest for expected file names (e.g., any CSVs like "question_advices.csv"
or a manifest 'files' list) and ensure each advertised filename is present in
entries.keys, raising a StandardError (with a clear message like "Questionnaire
package missing advertised file: <name>") when a manifest-listed file is missing
so resolve_csv_sources/parse_csv never receive nil.
- Around line 415-470: The import path always constructs new child records
(Item.new in import_items and QuestionAdvice.new in import_question_advices)
which causes duplicates during update flows; modify import_items to first try to
find an existing Item (e.g., questionnaire.items.find_by(seq: row['seq'], txt:
row['txt']) or Item.find_by(questionnaire_id: questionnaire.id, seq: ..., txt:
...)) and then update its attributes (or use find_or_initialize_by) instead of
blindly creating a new one, preserving the seq handling logic and saved record;
likewise change import_question_advices to locate an existing QuestionAdvice for
the resolved item (e.g., item.question_advices.find_by(score: row['score'],
advice: row['advice']) or find_or_initialize_by(item_id: item.id, score: ...,
advice: ...)) and update/save rather than always calling QuestionAdvice.new so
re-imports update/replace child rows instead of appending duplicates.
---
Duplicate comments:
In `@app/helpers/importable_exportable_helper.rb`:
- Around line 275-312: The method try_import_records uses a fixed filename
'output.csv' which causes race conditions; replace that with a Tempfile (e.g.,
via Tempfile.new or Tempfile.create) and use its path when
opening/writing/reading the normalized CSV in try_import_records, ensure you
close/flush the Tempfile before re-reading (use temp.path for CSV.read), and
ensure it is unlinked/closed in an ensure block (or rely on Tempfile's automatic
cleanup) instead of File.delete('output.csv').
In `@app/models/team.rb`:
- Around line 239-249: The rows are not being filtered/reordered to match the
header mapping returned by FieldMapping.from_header, so
mapping.ordered_fields.zip(row) can pair wrong values; update the rows
construction (where rows = csv_table.map(&:fields)) to map each csv_table row
into an array of values selected and ordered according to the mapping (use
mapping.ordered_fields or mapping.ordered_field_names/headers to drive
selection), ensuring unknown columns are dropped and remaining columns are in
the same order as mapping.ordered_fields; make the same change in the second
occurrence around the 271-275 block so all imports use the mapping-aligned rows.
- Around line 24-35: method_missing currently converts any non-numeric suffix to
0 so calls like participant_0 or participant_foo map to index -1; change
method_missing to only handle accessors whose suffix is a positive integer:
extract the suffix with a regex (e.g.
method_name.to_s.match(/^#{TEAM_PARTICIPANT_COLUMN_PREFIX}(\d+)$/)), return
super unless there is a match and the captured number > 0, then compute index =
captured_number.to_i - 1 and return participants[index]&.user&.name; likewise
update respond_to_missing? to only return true when the method_name matches that
same numeric-pattern check instead of any name starting with
TEAM_PARTICIPANT_COLUMN_PREFIX.
- Around line 231-234: The export_rows method is triggering N+1 queries by
calling team.participants.order(:id) after export_scope.includes(participants:
:user) preloaded participants; replace that re-query with the already-loaded
association (e.g. use team.participants.to_a.sort_by(&:id) or
team.participants.to_a.sort_by { |p| p.id }) when constructing TeamExportRow so
you rely on the eager-loaded participants and preserve ID ordering without
issuing new SQL per team.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: a66e00ee-615b-4e8d-9c35-57583ce07860
📒 Files selected for processing (9)
app/controllers/questionnaire_packages_controller.rb
app/helpers/importable_exportable_helper.rb
app/models/questionnaire.rb
app/models/team.rb
app/services/questionnaire_package_export_service.rb
app/services/questionnaire_package_import_service.rb
app/services/questionnaire_package_template_service.rb
spec/requests/import_export_requests_spec.rb
spec/requests/questionnaire_packages_spec.rb
rescue StandardError => e
  render json: { error: e.message }, status: :unprocessable_entity
Avoid returning raw exception messages to clients.
These rescues expose internal error strings via e.message and also catch all StandardError, which can mask unexpected bugs. Return a stable public message and log internals server-side; reserve detailed client errors for known validation/domain exceptions.
Suggested fix pattern
- rescue StandardError => e
- render json: { error: e.message }, status: :unprocessable_entity
+ rescue QuestionnairePackageImportService::ImportError => e
+ render json: { error: e.message }, status: :unprocessable_entity
+ rescue StandardError => e
+ Rails.logger.error("[QuestionnairePackagesController] #{e.class}: #{e.message}")
+ render json: { error: 'Unable to process questionnaire package request.' }, status: :unprocessable_entity
end

Also applies to: 64-65, 81-82, 97-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/questionnaire_packages_controller.rb` around lines 38 - 39,
In questionnaire_packages_controller.rb replace the broad "rescue StandardError
=> e" handlers that render json: { error: e.message } with targeted exception
handling: catch specific domain/validation exceptions (e.g.,
ActiveRecord::RecordNotFound, ActiveRecord::RecordInvalid or your custom errors)
and map them to stable public messages, log the original exception and backtrace
server-side (using Rails.logger.error or similar) inside the rescue, and for any
remaining unexpected errors use a generic public message like "An internal error
occurred" while still logging details; update the rescue blocks located in the
controller actions (the existing rescue lines) to follow this pattern so clients
never receive raw exception strings.
params.permit(
  :package_file,
  :questionnaire_file,
  :items_file,
  :question_advices_file,
  :dup_action,
  :export_all,
  :include_question_advices,
  questionnaire_ids: []
)
Permit template_name in package params allowlist.
template reads params[:template_name], but questionnaire_package_params does not permit it. In stricter environments this can raise unpermitted-parameter errors for valid template requests.
Suggested fix
def questionnaire_package_params
params.permit(
+ :template_name,
:package_file,
:questionnaire_file,
:items_file,
:question_advices_file,
:dup_action,
:export_all,
:include_question_advices,
questionnaire_ids: []
)
end📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
params.permit(
  :template_name,
  :package_file,
  :questionnaire_file,
  :items_file,
  :question_advices_file,
  :dup_action,
  :export_all,
  :include_question_advices,
  questionnaire_ids: []
)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/questionnaire_packages_controller.rb` around lines 109 - 118,
The params permit list in questionnaire_package_params is missing template_name,
causing valid template requests to be treated as unpermitted; update the
params.permit call in questionnaire_packages_controller (the method
questionnaire_package_params) to include :template_name so
params[:template_name] is allowed alongside :package_file, :questionnaire_file,
etc.
def parse_questionnaire_ids
  ids = params[:questionnaire_ids]
  return ids if ids.is_a?(Array)
  return [] if ids.blank?

  JSON.parse(ids)
rescue JSON::ParserError
  []
end
Validate parsed questionnaire_ids is an array of IDs.
JSON.parse(ids) can return non-array JSON (object/number/string). That can produce inconsistent query behavior downstream. Enforce array shape before returning.
Suggested fix
def parse_questionnaire_ids
ids = params[:questionnaire_ids]
return ids if ids.is_a?(Array)
return [] if ids.blank?
- JSON.parse(ids)
+ parsed = JSON.parse(ids)
+ return [] unless parsed.is_a?(Array)
+
+ parsed
rescue JSON::ParserError
[]
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/questionnaire_packages_controller.rb` around lines 129 - 137,
In parse_questionnaire_ids, ensure the JSON.parse(ids) result is an Array before
returning it: after calling JSON.parse(ids) (in the parse_questionnaire_ids
method) check that the parsed value is an Array and return it only if so;
otherwise return an empty array (or convert/filter values to an array of IDs as
appropriate) and keep the existing rescue JSON::ParserError to return []. This
guarantees parse_questionnaire_ids always returns an array of IDs rather than
objects/strings/numbers.
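The shape check matters because `JSON.parse` happily returns objects, numbers, and strings. The sketch below is a standalone version of the stricter parser (renamed `parse_id_list` and taking the raw value directly, since it runs outside a controller and without Rails' `blank?`):

```ruby
require 'json'

# Anything that does not decode to a JSON array comes back as [].
def parse_id_list(raw)
  return raw if raw.is_a?(Array)
  return [] if raw.nil? || raw.to_s.strip.empty?

  parsed = JSON.parse(raw)
  parsed.is_a?(Array) ? parsed : []
rescue JSON::ParserError
  []
end
```

Without the `is_a?(Array)` guard, a payload like `'{"id":1}'` or `'42'` would flow downstream as a Hash or Integer and produce inconsistent query behavior.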
mapping = FieldMapping.from_header(self, headers)

csv << headers
csv_file.each { |row| csv << row }
end

temp_contents = CSV.read(temp_file)
Preserve header/value alignment after FieldMapping drops unknown columns.
FieldMapping.from_header only keeps headers that match internal_and_external_fields, but the later import loop still zips mapping.ordered_fields with the full CSV row. If the upload contains an ignored column before a valid one, values shift into the wrong attributes silently.
Also applies to: 300-303
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/helpers/importable_exportable_helper.rb` around lines 289 - 295,
FieldMapping.from_header drops unknown headers, but the import loop still zips
mapping.ordered_fields with the full CSV row, which silently shifts values when
ignored columns exist. Update the import path so each row is filtered to only
the columns FieldMapping kept before zipping: derive the kept column indexes
from FieldMapping.from_header (or from mapping.headers / mapping.ordered_fields)
and map each csv_file row to row.values_at(*kept_indexes), or rebuild the CSV
rows using the filtered headers written earlier, so that the block around
csv << headers / csv_file.each preserves header/value alignment.
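A sketch of the index-filtering approach, assuming FieldMapping keeps only headers found in internal_and_external_fields (represented here by a plain `known_fields` array):

```ruby
# Select the indexes of columns FieldMapping would keep, so row values can be
# realigned with mapping.ordered_fields before zipping.
def kept_indexes(headers, known_fields)
  headers.each_index.select { |i| known_fields.include?(headers[i]) }
end

# Filter a CSV row down to the kept columns only.
def aligned_row(row, indexes)
  row.values_at(*indexes)
end
```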
```ruby
# for each external class, try to look them up
external_classes&.each do |external_class|
  look_up_external_class(row_hash, external_class, created_object)
end
# ...
duplicate = save_object(created_object)
return duplicate if duplicate && duplicate != true
# ...
return true if external_classes.empty?
# ...
external_classes.each do |external_class|
  create_external_class(row_hash, external_class, created_object)
end
```
Skip external creation when lookup already resolved the association.
The first pass attaches an existing external record, but the second pass still calls create_external_class unconditionally. Any ExternalClass configured with both should_look_up and should_create will create a new external row even when lookup succeeded.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/helpers/importable_exportable_helper.rb` around lines 347 - 359, The
second loop currently always calls create_external_class even when
look_up_external_class already attached an existing record; update the logic so
after the first pass (external_classes&.each { |external_class|
look_up_external_class(row_hash, external_class, created_object) }) you only
call create_external_class for external_class entries whose association is still
unset on created_object. In practice, inside the external_classes.each loop
replace the unconditional create_external_class call with a guard that checks
the corresponding association on created_object (or a flag set by
look_up_external_class) and only invokes create_external_class(row_hash,
external_class, created_object) when the lookup did not resolve that
association; keep the existing duplicate/save_object checks unchanged.
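A sketch of the suggested guard. The hash-based record and the stub create_external_class are stand-ins for the real helper and ActiveRecord associations; the shape of each external_class entry is hypothetical:

```ruby
# Stub standing in for the real create_external_class helper.
def create_external_class(row_hash, external_class, created_object)
  created_object[external_class[:association]] = :newly_created
end

def resolve_external_classes(row_hash, external_classes, created_object)
  external_classes.each do |external_class|
    # Skip creation when look_up_external_class already attached a record.
    next if created_object[external_class[:association]]

    create_external_class(row_hash, external_class, created_object)
  end
end
```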
```ruby
has_many :users, through: :teams_participants
has_many :participants, through: :teams_participants
has_many :join_team_requests, dependent: :destroy
has_many :sent_invitations, class_name: 'Invitation', foreign_key: 'from_id', dependent: :destroy
```
sent_invitations is wired to the wrong foreign key.
Invitation#from_id belongs to AssignmentParticipant, not Team (app/models/invitation.rb:1-8). With this association, team.sent_invitations matches Team.id against participant IDs, and dependent: :destroy can delete unrelated invitations whenever those IDs collide.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/team.rb` at line 46, The has_many :sent_invitations on Team is
using foreign_key: 'from_id' which is wrong because Invitation#from_id points to
AssignmentParticipant, not Team; change Team's association to fetch invitations
through its participants (e.g. use has_many :participants or has_many
:assignment_participants then has_many :sent_invitations, through:
:assignment_participants, source: :sent_invitations) so Team.id is not matched
against participant IDs, and remove or relocate dependent: :destroy from this
incorrect association to the participant-level association if you need cascade
deletion; update references to the association name (sent_invitations)
accordingly.
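A sketch of the participant-routed association (not runnable outside Rails; the `source` name assumes the participant model declares its own `has_many :sent_invitations, foreign_key: 'from_id'`):

```ruby
class Team < ApplicationRecord
  has_many :participants, through: :teams_participants
  # Route through participants so Team.id is never matched against
  # participant IDs; cascade deletion, if needed, belongs on the
  # participant-level association instead.
  has_many :sent_invitations, through: :participants, source: :sent_invitations
end
```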
```ruby
def validate_package!(entries)
  missing_files = REQUIRED_FILES - entries.keys
  raise StandardError, "Questionnaire package is missing required files: #{missing_files.join(', ')}" if missing_files.any?

  manifest = JSON.parse(entries['manifest.json'])
  unless manifest['package_type'] == PACKAGE_TYPE
    raise StandardError, "Unsupported questionnaire package type: #{manifest['package_type']}"
  end

  return if manifest['version'].to_i == VERSION

  raise StandardError, "Unsupported questionnaire package version: #{manifest['version']}"
rescue JSON::ParserError => e
  raise StandardError, "Invalid questionnaire package manifest: #{e.message}"
end
```
Validate that manifest.json agrees with the actual ZIP contents.
Right now this only checks package_type and version. If the manifest advertises question_advices.csv but the archive omits it, resolve_csv_sources passes nil through to parse_csv and the advice rows are silently treated as empty instead of rejecting the package.
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 170-170: Modifier form of if makes the line too long.
(Style/IfUnlessModifier)
[convention] 170-170: Line is too long. [125/120]
(Layout/LineLength)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/questionnaire_package_import_service.rb` around lines 168 - 182,
The validate_package! method currently only checks package_type and version but
must also verify that any files advertised by the manifest actually exist in the
ZIP entries; update validate_package! to inspect the manifest for expected file
names (e.g., any CSVs like "question_advices.csv" or a manifest 'files' list)
and ensure each advertised filename is present in entries.keys, raising a
StandardError (with a clear message like "Questionnaire package missing
advertised file: <name>") when a manifest-listed file is missing so
resolve_csv_sources/parse_csv never receive nil.
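A sketch of the extra check, assuming the manifest carries a 'files' array listing the CSVs it advertises (that key name is hypothetical; adapt it to the real manifest schema):

```ruby
require 'json'

# Raise when the manifest advertises a file that the ZIP entries lack, so
# resolve_csv_sources/parse_csv never receive nil for an expected CSV.
def validate_manifest_files!(entries)
  manifest = JSON.parse(entries['manifest.json'])
  missing = Array(manifest['files']) - entries.keys
  return if missing.empty?

  raise StandardError, "Questionnaire package missing advertised file: #{missing.join(', ')}"
end
```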
```ruby
def import_items(rows, imported_questionnaires, skipped_questionnaire_keys, imported_counts)
  imported_items = {}

  rows.each do |row|
    source_key = questionnaire_source_key(row['questionnaire_name'], row['questionnaire_instructor_name'])
    next if skipped_questionnaire_keys.include?(source_key)

    questionnaire = imported_questionnaires[source_key]
    raise StandardError, "Unable to resolve questionnaire for item '#{row['txt']}'." if questionnaire.nil?

    item = Item.new(
      txt: row['txt'],
      weight: row['weight'],
      seq: row['seq'],
      question_type: row['question_type'],
      size: row['size'],
      alternatives: row['alternatives'],
      break_before: normalize_boolean(row['break_before']),
      min_label: row['min_label'],
      max_label: row['max_label']
    )
    item.questionnaire = questionnaire

    imported_seq = item.seq
    item.save!
    item.update_column(:seq, imported_seq) if imported_seq.present? && item.seq.to_s != imported_seq.to_s

    imported_items[item_source_key(row['questionnaire_name'], row['questionnaire_instructor_name'], row['seq'], row['txt'])] = item
    imported_counts[:items] += 1
  end

  imported_items
end

# Prefer package item keys, with a DB fallback for update flows.
def import_question_advices(rows, imported_questionnaires, imported_items, skipped_questionnaire_keys, imported_counts)
  rows.each do |row|
    source_key = questionnaire_source_key(row['questionnaire_name'], row['questionnaire_instructor_name'])
    next if skipped_questionnaire_keys.include?(source_key)

    questionnaire = imported_questionnaires[source_key]
    raise StandardError, "Unable to resolve questionnaire for advice '#{row['advice']}'." if questionnaire.nil?

    item = imported_items[item_source_key(row['questionnaire_name'], row['questionnaire_instructor_name'], row['item_seq'], row['item_txt'])] ||
           questionnaire.items.find_by(seq: row['item_seq'], txt: row['item_txt'])
    raise StandardError, "Unable to resolve item for advice '#{row['advice']}'." if item.nil?

    question_advice = QuestionAdvice.new(
      score: row['score'],
      advice: row['advice']
    )
    question_advice.item = item
    question_advice.save!

    imported_counts[:question_advices] += 1
  end
```
UpdateExistingRecordAction still duplicates items and advice on re-import.
Once a duplicate questionnaire resolves to the existing record, the child import path always does Item.new and QuestionAdvice.new. Re-importing the same package with update semantics appends a second copy of every item/advice instead of updating or replacing the existing dependent rows.
🧰 Tools
🪛 RuboCop (1.86.1)
[convention] 415-447: Assignment Branch Condition size for import_items is too high. [<9, 35, 6> 36.63/20]
(Metrics/AbcSize)
[convention] 415-447: Method has too many lines. [25/20]
(Metrics/MethodLength)
[convention] 442-442: Line is too long. [133/120]
(Layout/LineLength)
[convention] 450-450: Line is too long. [121/120]
(Layout/LineLength)
[convention] 458-458: Line is too long. [146/120]
(Layout/LineLength)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/questionnaire_package_import_service.rb` around lines 415 - 470,
The import path always constructs new child records (Item.new in import_items
and QuestionAdvice.new in import_question_advices) which causes duplicates
during update flows; modify import_items to first try to find an existing Item
(e.g., questionnaire.items.find_by(seq: row['seq'], txt: row['txt']) or
Item.find_by(questionnaire_id: questionnaire.id, seq: ..., txt: ...)) and then
update its attributes (or use find_or_initialize_by) instead of blindly creating
a new one, preserving the seq handling logic and saved record; likewise change
import_question_advices to locate an existing QuestionAdvice for the resolved
item (e.g., item.question_advices.find_by(score: row['score'], advice:
row['advice']) or find_or_initialize_by(item_id: item.id, score: ..., advice:
...)) and update/save rather than always calling QuestionAdvice.new so
re-imports update/replace child rows instead of appending duplicates.
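A sketch of the idempotent child import, keyed the way the review suggests (seq plus txt). An in-memory hash stands in here for the database; in the real service this would map to questionnaire.items.find_or_initialize_by(seq: row['seq'], txt: row['txt']):

```ruby
# Find-or-initialize an item by its (seq, txt) key and update its attributes,
# so re-importing the same row updates the record instead of appending a copy.
def upsert_item(items_by_key, row)
  key = [row['seq'].to_s, row['txt']]
  item = items_by_key[key] ||= {}
  item.merge!(row) # update attributes rather than creating a duplicate
  item
end
```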