
[Volume - 9] Redis ZSET 기반 랭킹 시스템 구현 - 김동환#355

Merged
Hwan0518 merged 21 commits into Loopers-dev-lab:Hwan0518 from Hwan0518:round9/ranking
Apr 13, 2026

Conversation


@Hwan0518 Hwan0518 commented Apr 9, 2026

✅ Review Point

멘토링에서 언급해주셨던 "언제든 정책을 교체할 수 있는 scorer" 구조를 구현했습니다.

정책이 바뀌어도 scorer의 입력 파라미터(view/like/order count)는 동일하므로, 해당 카운터를 DB에 SoT로 영속화해두면 언제든 새 scorer로 재집계할 수 있겠다고 판단했습니다.

그런데 막상 재집계 방식을 설계하려다 보니, 정책 교체 시점에 하루치 이벤트 원본을 DB에서 전부 꺼내 다시 합산하는 건 비용이 너무 크다는 문제가 생겼습니다.

그래서 consumer가 이벤트를 처리할 때마다 ranking_daily_counter 테이블에 (stat_date, product_id) 단위로 카운터를 누적(upsert)해두는 방식을 택했습니다. rebuild 시에는 이미 합산된 이 테이블을 배치로 읽어 새 scorer에 바로 넣어 재계산하면 되므로, 상품 수만큼의 row만 처리하면 됩니다. 최종 점수는 ranking_daily_score에 별도로 스냅샷으로 저장합니다.

이렇게 재료(counter)와 결과(score)를 분리해 영속화하고, rebuild는 counter 테이블 기반으로 배치 수행하는 구조가 올바른 방향인지 여쭤보고 싶습니다. 혹시 이 접근에서 놓친 트레이드오프나 더 나은 패턴이 있을까요?

📌 Summary

  • 배경: 일별 인기 상품 랭킹을 노출해야 하지만, 좋아요/조회/주문 신호가 catalog/engagement/ordering에 흩어져 있고 실시간 업데이트가 필요했습니다.
  • 목표: Redis ZSET 기반 일별 랭킹을 실시간으로 갱신하고, 자정 전후에도 끊김 없는 랭킹을 제공합니다. 또한 Scorer 교체 시 언제든 재집계할 수 있는 rebuild-ready 구조를 갖춥니다.
  • 결과: ranking BC를 신규 생성해 commerce-streamer가 Kafka 이벤트를 DB(SoT)와 Redis(projection)에 dual-write하고, commerce-api가 ZREVRANGE로 페이지 조회를 제공합니다. 매일 23:50 carry-over 스케줄러로 다음 날 cold start를 방지하며, commerce-batch의 rebuild/reconcile job으로 scorer 교체 후 재집계와 Redis 복구를 수행합니다.

🧭 Context & Decision

문제 정의

  • 현재 동작/제약: 사용자에게 노출할 "오늘의 인기 상품" 데이터가 없습니다. 좋아요/조회/주문 신호가 각 BC에 흩어져 있습니다.
  • 문제(또는 리스크):
    • 집계 결과를 어느 BC에 두는지 불명확합니다. engagement에 두면 변경 원인이 충돌하고, catalog에 두면 성격이 맞지 않습니다.
    • 실시간 ZINCRBY 구조에서 정규화 방식을 선택해야 합니다. min-max는 이벤트 1건마다 전체 상품 재계산이 필요해 쓰기 증폭이 발생합니다.
    • 자정 직후 새 키가 비어 있으면 cold start가 발생합니다.
    • Cross-BC 통신 시 catalog ↔ ranking 양방향 의존으로 순환 참조 리스크가 있습니다.
    • Redis만 SoT로 사용하면 TTL 만료 후 재료가 사라져 Scorer 교체 시 과거 재집계가 불가능합니다.
  • 성공 기준(완료 정의):
    • GET /api/v1/rankings로 일별 페이지 조회가 가능해야 합니다.
    • GET /api/v1/products/{productId} 응답에 오늘의 rank가 노출되어야 합니다.
    • 자정 직후에도 빈 랭킹이 노출되지 않아야 합니다.
    • ranking 장애가 catalog/engagement에 전파되지 않아야 합니다.
    • Scorer 교체 후 POST /api-admin/v1/rankings/rebuild로 재집계가 가능해야 합니다.
    • Redis 장애 복구 후 ranking_projection_dirty 기반으로 자동 reconcile이 가능해야 합니다.

선택지와 결정

1. BC/패키지 구조 — ranking BC 신규 생성

  • 고려한 대안:

    • A: engagement BC에 통합합니다. 이미 좋아요 신호를 보유하고 있습니다.
    • B: catalog BC에 통합합니다. 상품 노출 경로와 가장 가깝습니다.
    • C: ranking BC를 신규 생성합니다.
  • 최종 결정: C — ranking BC를 신규 생성합니다.

  • 근거:

    • 변경 원인 분리: 가중치 파라미터와 집계 로직의 변경 주기가 engagement와 무관합니다.
    • 생명주기 분리: 일간 집계 주기와 engagement 도메인의 생명주기가 다릅니다.
    • 장애 전파 방지: 랭킹 장애가 engagement 본 기능에 영향을 주어서는 안 됩니다.
    • 집계 성격: 여러 BC(catalog/engagement/ordering) 신호를 집계한 결과이므로 어느 한 BC에 귀속시키기 부적절합니다.
  • 앱별 역할:

    | 앱 | 역할 |
    | --- | --- |
    | commerce-api | Ranking 조회 API 제공, ZSET 조회, 상품 메타데이터 조립, carry-over 스케줄러, Admin rebuild API |
    | commerce-streamer | Kafka 이벤트 수신, 가중치 계산, DB dual-write (SoT), Redis projection 업데이트 |
    | commerce-batch | Scorer 교체 후 재집계 (rebuild job), Redis 복구 (reconcile job) |
  • 트레이드오프: 새 BC 추가로 Cross-BC ACL이 늘어나지만, 변경 원인이 다른 도메인을 강제로 묶지 않는 편이 낫다고 판단했습니다.

2. ZSET Score 모델 — Saturation 채택

  • Key 설계: ranking:all:{yyyyMMdd}를 사용합니다. all은 종합 랭킹을 의미하며 TTL은 2일입니다.

  • 고려한 대안:

    • A: min-max 정규화 — 정규화의 기본 후보입니다. 다만 이벤트 1건마다 전체 상품을 재계산해야 해서(쓰기 증폭) ZINCRBY 구조에 부적합합니다.
    • B: log/revenue 기반 — "오늘의 인기 상품"은 거래 빈도(quantity) 기준이므로 매출 기반은 부합하지 않습니다. 매출 기여 랭킹은 별도 admin 집계로 분리하는 편이 맞습니다.
    • C: Saturation sat(x, k) = x / (x + k) — 각 상품이 독립적으로 계산되므로 온라인 업데이트가 가능합니다 (O(1) per event).
  • 최종 결정: C — Saturation을 채택합니다.

    dailyScore(product) =
      0.15 * sat(viewCountDaily, 100) +
      0.35 * sat(likeCountDaily, 10) +
      0.50 * sat(orderQtyDaily, 3)
    
  • 가중치(0.15 / 0.35 / 0.50) 설계 근거:

    • 행동 비용과 구매 의도 순서에 맞춰 order > like > view 순으로 배분합니다.
    • view 0.15: 비용이 없고 비회원도 가능해 노이즈가 가장 큽니다. 가장 낮은 가중치를 부여합니다.
    • like 0.35: 로그인이 필요하고 의식적인 선호 표현입니다. view보다는 강하지만 실제 구매보다는 약합니다.
    • order 0.50: 실제 금전 비용이 발생하는 가장 강한 구매 신호입니다. 절반을 order에 배정해 "인기 상품"의 정의가 "많이 팔린 상품"에 수렴하도록 합니다.
    • 합계는 1.0으로 고정해 각 카운터가 Saturation 상한(1.0)에 도달해도 점수 총합의 상한이 1.0이 되도록 했습니다.
  • k값 설계 근거:

    • view k=100: 비용 0원 + 비회원 가능으로 발생 빈도가 압도적이기 때문에 100회를 "꽤 관심받는" 임계로 잡았습니다.
    • like k=10: 로그인 + 의식적 행동이 필요하므로 10개면 강한 참여 신호입니다.
    • order k=3: 실제 비용이 발생하고 depth가 가장 깊으므로 3건이면 매우 강한 구매 신호입니다.
  • order 지표: orderQtyDaily (수량 기반)을 사용합니다. "인기"는 얼마나 많이 팔렸는지 기준이며, 매출 기여 랭킹과 분리했습니다.

  • 카운터 보조 키: Saturation delta 계산을 위해 일간 카운터를 Redis HASH로 별도 유지합니다.

    ranking:counter:view:{yyyyMMdd}   // HASH, productId → 일간 조회수
    ranking:counter:like:{yyyyMMdd}   // HASH, productId → 일간 좋아요수
    ranking:counter:order:{yyyyMMdd}  // HASH, productId → 일간 주문수량
    
    • TTL은 ZSET과 동일하게 2일로 유지합니다.
    • 실시간 업데이트: HINCRBY로 카운터를 갱신해 old/new 값을 얻은 뒤 deltaScore = Scorer(newCounts) - Scorer(oldCounts)을 계산해 ZINCRBY로 반영합니다. (점수는 항목별 차이가 아니라 Scorer가 계산한 가중합 전체의 차이입니다.)
  • 트레이드오프: Saturation은 상위 상품 사이의 변별력이 낮아질 수 있지만, k값과 가중치로 보정합니다.
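위 Saturation 공식과 delta 계산 방식을 코드로 표현하면 다음과 같습니다. 아래는 이해를 돕기 위한 가설적 스케치이며, 클래스/메서드 이름은 실제 저장소의 SaturationScorer 구현이 아닌 예시용 가정입니다.

```java
// 가설적 스케치: Saturation 점수와 ZINCRBY용 deltaScore 계산.
// 클래스/메서드 이름은 예시이며 실제 구현과 다를 수 있습니다.
public class SaturationScoreSketch {

    // sat(x, k) = x / (x + k): 단조 증가하며 1.0으로 포화됩니다.
    static double sat(long x, long k) {
        return (double) x / (x + k);
    }

    // dailyScore = 0.15*sat(view,100) + 0.35*sat(like,10) + 0.50*sat(order,3)
    static double score(long view, long like, long orderQty) {
        return 0.15 * sat(view, 100)
             + 0.35 * sat(like, 10)
             + 0.50 * sat(orderQty, 3);
    }

    // deltaScore는 항목별 차이가 아니라 가중합 전체의 차이입니다.
    static double deltaScore(long oldView, long oldLike, long oldOrder,
                             long newView, long newLike, long newOrder) {
        return score(newView, newLike, newOrder) - score(oldView, oldLike, oldOrder);
    }

    public static void main(String[] args) {
        // 주문 1건(0→1)이 조회 1건(0→1)보다 점수를 더 크게 움직입니다.
        double orderDelta = deltaScore(0, 0, 0, 0, 0, 1); // 0.50 * 1/4 = 0.125
        double viewDelta  = deltaScore(0, 0, 0, 1, 0, 0); // 0.15 * 1/101 ≈ 0.0015
        System.out.println(orderDelta > viewDelta); // true
        // 카운터가 아무리 커져도 총점 상한은 1.0입니다.
        System.out.println(score(1_000_000, 1_000_000, 1_000_000) < 1.0); // true
    }
}
```

각 상품의 점수가 자기 카운터만으로 결정되므로 이벤트당 O(1) 갱신이 가능하다는 본문의 주장을 그대로 반영한 형태입니다.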

3. Kafka Consumer — 배치 리스너 + 상품별 delta 합산

  • 구독 토픽: catalog-events, order-events (consumer group: ranking-collector).

  • 소비 이벤트: PRODUCT_VIEWED → viewDelta +1, PRODUCT_LIKED → likeDelta +1, PRODUCT_UNLIKED → likeDelta -1, ORDER_PAID → 주문 항목별 orderDelta += quantity.

  • 고려한 대안:

    • A: 단건 처리 — 이벤트 1건마다 카운터 갱신, ZSET 갱신, event_handled INSERT, ack을 모두 수행합니다.
    • B: 배치 리스너 — 배치 단위로 수신해 event_handled 멱등 필터, 상품별 delta 합산, ack을 한 번에 처리합니다.
  • 최종 결정: B — 배치 리스너를 채택합니다.

  • 처리 흐름:

    1. batch listener로 이벤트를 수신합니다.
    2. event_handled로 멱등 필터를 적용합니다. 이미 처리된 eventId는 제거합니다.
    3. 배치 내부에서 (stat_date, product_id) 단위로 (viewDelta, likeDelta, orderDelta)를 합산합니다 (Map<DailyKey, long[]> deltas).
    4. DB 단일 TX: ranking_daily_counter upsert + ranking_daily_score organic_score 재계산 upsert + event_handled bulk insert.
    5. Redis best-effort (TX 밖): HINCRBY로 카운터 갱신 → deltaScore 계산 → ZINCRBY 반영. 실패 시 ranking_projection_dirty mark.
    6. ack (Redis 실패해도 ack — DB가 SoT).
  • 배치 처리 이점: Redis 점수 반영은 배치 내 고유 상품 수 단위로 줄어들고, DB와 Kafka는 배치 단위로 TX와 round-trip을 묶어 호출 횟수를 줄입니다.

    | 항목 | 단건 처리 | 배치 처리 |
    | --- | --- | --- |
    | 상품별 점수 반영 횟수 (HINCRBY + ZINCRBY) | 이벤트 수만큼 | 배치 내 고유 상품 수만큼 |
    | DB 쓰기 (event_handled) | 이벤트당 1 TX | 배치당 1 TX (bulk insert) |
    | Kafka ack | 이벤트당 1회 | 배치당 1회 |
  • 한계 및 실제 효과:

    • worst case에서는 배치 내 모든 이벤트가 다른 상품이라 Redis 쓰기 절감이 없습니다.
    • 실제 이커머스 이벤트는 멱법칙 분포를 따라 상위 상품에 집중되므로 절감 효과가 큽니다.
    • 분포와 무관하게 안정적으로 얻는 절감은 DB TX 수(event_handled bulk insert)와 ack 수에서 옵니다.
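배치 내부에서 (stat_date, product_id) 단위로 delta를 합산하는 과정은 다음과 같이 표현할 수 있습니다. record/메서드 이름은 실제 consumer 코드가 아닌 예시용 가정입니다.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// 가설적 스케치: 배치 내 (stat_date, product_id) 단위 delta 합산.
// record/메서드 이름은 예시이며 실제 RankingCollectorConsumer와 다를 수 있습니다.
public class DeltaAggregationSketch {

    record DailyKey(String statDate, long productId) {}
    record RankingEvent(String statDate, long productId, String type, long quantity) {}

    // delta 배열 인덱스: [viewDelta, likeDelta, orderDelta]
    static Map<DailyKey, long[]> aggregate(List<RankingEvent> batch) {
        Map<DailyKey, long[]> deltas = new LinkedHashMap<>();
        for (RankingEvent e : batch) {
            long[] d = deltas.computeIfAbsent(
                    new DailyKey(e.statDate(), e.productId()), k -> new long[3]);
            switch (e.type()) {
                case "PRODUCT_VIEWED"  -> d[0] += 1;
                case "PRODUCT_LIKED"   -> d[1] += 1;
                case "PRODUCT_UNLIKED" -> d[1] -= 1;
                case "ORDER_PAID"      -> d[2] += e.quantity();
            }
        }
        return deltas;
    }

    public static void main(String[] args) {
        List<RankingEvent> batch = List.of(
                new RankingEvent("20260409", 1L, "PRODUCT_VIEWED", 0),
                new RankingEvent("20260409", 1L, "PRODUCT_VIEWED", 0),
                new RankingEvent("20260409", 1L, "ORDER_PAID", 3),
                new RankingEvent("20260409", 2L, "PRODUCT_LIKED", 0),
                new RankingEvent("20260409", 2L, "PRODUCT_UNLIKED", 0));
        // 이벤트 5건이 상품 2개로 합산되어 Redis 점수 반영이 5회에서 2회로 줄어듭니다.
        System.out.println(aggregate(batch).size()); // 2
    }
}
```

이벤트가 상위 상품에 집중되는 멱법칙 분포일수록 Map의 크기가 배치 크기보다 훨씬 작아져 절감 효과가 커집니다.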

4. Ranking API & Cross-BC 통신 — 양방향 ACL + Facade 분리로 순환 차단

  • 엔드포인트:

    | API | 엔드포인트 |
    | --- | --- |
    | 랭킹 목록 | GET /api/v1/rankings?date=yyyyMMdd&page=0&size=20 |
    | 상품 상세 (랭킹 포함) | GET /api/v1/products/{productId} |
    | 랭킹 재집계 (Admin) | POST /api-admin/v1/rankings/rebuild |
  • 날짜 파라미터 의미: date는 선택 파라미터입니다. 미지정 시 오늘(ranking:all:{오늘})을 기준으로 조회하며, 특정 날짜를 명시하면 해당 키를 조회합니다. carry-over 덕분에 자정 직후에도 오늘 키가 비어있지 않아 "데이터 없음" 문제가 발생하지 않습니다.

  • 고려한 대안:

    • A: 단방향 통신 — 한쪽만 ACL로 두는 구조.
    • B: 양방향 ACL — ProductRankingReader (catalog→ranking) + RankingProductReader (ranking→catalog).
  • 최종 결정: B를 채택하되, ranking 측에 RankingRankFacade를 별도로 두어 순환 참조를 차단합니다.

    • A. catalog → ranking (GET /api/v1/products/{productId} rank 보강)
      • Port: catalog/application/port/out/client/ranking/ProductRankingReader
      • 구현체: catalog/infrastructure/acl/ranking/ProductRankingReaderImpl
      • 진입점: RankingRankFacade. RankingRankService에만 의존하므로 catalog 의존이 섞이지 않습니다.
      • 조회: ZREVRANK ranking:all:{오늘} productId. 결과가 null이면 rank=null로 반환합니다.
    • B. ranking → catalog (GET /api/v1/rankings 상품 정보 조합)
      • Port: ranking/application/port/out/client/catalog/RankingProductReader
      • 구현체: ranking/infrastructure/acl/catalog/RankingProductReaderImpl
      • 조회: idIn(idList) batch 조회로 N+1을 방지합니다.
  • 순환 차단 설계:

    • RankingRankFacade (rank 단건 조회) — RankingRankService에만 의존하므로 catalog ACL이 호출해도 안전합니다.
    • RankingQueryFacade (페이지 조회) — catalog ACL에 의존하지만 catalog 쪽에서는 호출하지 않습니다.
    • 두 Facade를 분리하여 의존 그래프가 단방향으로 정리됩니다.
  • 장애 격리: ProductQueryService.getProductRank는 ranking 예외를 catch하여 rank=null을 반환합니다. ranking 장애가 상품 상세 조회로 전파되지 않도록 보조 정보 수준에서 격리합니다.

  • 랭킹 목록 응답: 아이템(상품 id, 이름, 가격, brand 등 + rank)과 페이지 메타(page, size, totalElements)로 구성합니다.
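page/size 파라미터를 ZREVRANGE 인덱스와 1-based rank로 변환하는 계산은 다음과 같이 스케치할 수 있습니다. 실제 RankingQueryService 구현과 이름이 다를 수 있는 예시입니다.

```java
// 가설적 스케치: 페이지 파라미터 → ZREVRANGE 구간 및 1-based rank 변환.
public class RankingPageSketch {

    record Window(long start, long end) {}

    // ZREVRANGE key start end는 0-based이며 end를 포함(inclusive)합니다.
    static Window window(int page, int size) {
        long start = (long) page * size;
        return new Window(start, start + size - 1);
    }

    // 페이지 내 i번째 항목의 1-based 순위
    static long rankOf(int page, int size, int indexInPage) {
        return (long) page * size + indexInPage + 1;
    }

    public static void main(String[] args) {
        Window w = window(1, 20); // 두 번째 페이지 (page=1)
        System.out.println(w.start() + ".." + w.end()); // 20..39
        System.out.println(rankOf(1, 20, 0)); // 21 (두 번째 페이지 첫 항목)
    }
}
```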

5. Cold Start — 23:50 Score Carry-Over 스케줄러 (DB 기반)

  • 고려한 대안:

    | 방식 | 트레이드오프 |
    | --- | --- |
    | 23:50 스케줄러 | 자정 직후 빈 키 노출이 없습니다. 다만 23:50 이후 10분간 발생한 이벤트는 다음 날 키로 carry-over되지 않습니다. |
    | 00:00:05 스케줄러 | 데이터는 정확하지만 자정 직후 5초 + carry-over 실행 중 빈 키 노출 구간이 발생합니다. |
    | Lazy init | 첫 요청 유저가 carry-over 완료를 대기해야 하며, 동시 요청 시 중복 실행 위험이 있습니다. |
  • 최종 결정: 23:50 스케줄러를 채택합니다.

  • 선택 근거: 랭킹 시스템의 목적은 유저 관심 유도이므로 빈 랭킹 노출과 첫 요청 대기를 가장 큰 손실로 보았습니다. 따라서 자정 직후 빈 구간이 생기는 00:00:05 스케줄러와 Lazy init을 먼저 제외했습니다. 남은 선택지는 23:50뿐이며, 마지막 10분 이벤트 누락은 다음 날 실제 이벤트가 carry-over 초기 점수(전일 × 0.1)를 빠르게 추월하므로 수용 가능하다고 판단했습니다. 즉 정확도 손실을 감수하고 빈 키 노출을 0으로 만드는 UX 우선 결정입니다.

  • carry-over 공식: ranking_daily_score에서 오늘 row를 읽어 (organic_score + carry_score) * 0.1을 내일 row의 carry_score로 upsert합니다. Redis는 best-effort로 반영하며 실패 시 ranking_projection_dirty에 mark합니다.

  • carry-over의 역할:

    • cold start 완화: 자정 이후 새 키가 비어있지 않도록 초기 점수를 제공합니다.
    • 연속성 보정: tumbling window가 단절될 때 전날 트렌드의 관성을 일부 유지합니다.
    • carry chain 보존: Redis 간 ZUNIONSTORE 의존 대신 DB 기반이므로 Redis 장애 후 chain이 끊기지 않습니다.
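carry-over 공식의 동작을 수치로 확인하면 다음과 같습니다. 수치와 이름은 예시이며, 실제 RankingCarryOverService 구현과 다를 수 있는 가설적 스케치입니다.

```java
// 가설적 스케치: carry-over 공식 (organic_score + carry_score) * 0.1.
public class CarryOverSketch {

    static final double CARRY_WEIGHT = 0.1;

    // 내일 row의 carry_score = 오늘의 (organic_score + carry_score) * 0.1
    static double nextCarryScore(double organicScore, double carryScore) {
        return (organicScore + carryScore) * CARRY_WEIGHT;
    }

    public static void main(String[] args) {
        // 오늘 0.8점인 상품은 내일 0.08점으로 시작합니다.
        double tomorrow = nextCarryScore(0.8, 0.0);
        System.out.println(tomorrow); // ≈ 0.08
        // 새 이벤트가 없으면 carry는 기하급수적으로 감쇠해 빠르게 사라집니다.
        double dayAfter = nextCarryScore(0.0, tomorrow);
        System.out.println(dayAfter < 0.01); // true
    }
}
```

초기 점수가 전일의 10% 수준이므로, 본문에서 언급한 대로 다음 날 실제 이벤트가 carry 점수를 빠르게 추월하게 됩니다.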

6. Rebuild-ready 설계 — DB SoT + Redis Projection

  • 목표: Scorer 교체 후 과거 날짜 재집계와 Redis 장애 복구를 단일 경로로 수렴시킵니다.

  • DB 테이블:

    | 테이블 | 역할 |
    | --- | --- |
    | ranking_daily_counter | Scorer 입력 재료 (view/like/order 일간 집계, signed raw) |
    | ranking_daily_score | Scorer 출력 스냅샷 (organic_score + carry_score) |
    | ranking_projection_dirty | Redis 복구 큐 (REDIS_WRITE_FAIL / CARRY_OVER_FAIL / MANUAL) |
  • Consumer TX 경계:

    • Step 1 — DB 단일 TX: ranking_daily_counter upsert + ranking_daily_score organic_score upsert + event_handled bulk insert. 멱등 보장 (재처리 시 double-count 불가).
    • Step 2 — Redis best-effort: TX 밖에서 HINCRBY + ZINCRBY. 실패 시 ranking_projection_dirty mark (별도 짧은 TX).
    • Step 3 — ack: Redis 실패해도 ack. DB가 SoT이므로 reconcile이 자동 복구.
  • Rebuild Job (commerce-batch): ranking_daily_counter를 날짜순으로 cursor 조회 → 새 Scorer로 재계산 → ranking_daily_score 덮어쓰기 → Redis ZSET 재생성. 오늘 날짜는 제외 (live consumer 충돌 방지).

  • Reconcile Job (commerce-batch, 매 5분 cron): ranking_projection_dirty에서 미해소 날짜 조회 → rebuild dayLoop 재사용 → resolved_at 갱신. 코드 단일 경로 (rebuild와 reconcile이 같은 로직 공유).

7. Admin Rebuild API

  • 엔드포인트: POST /api-admin/v1/rankings/rebuild
  • 요청 파라미터: from, to (날짜 범위), scorerType, carryOverWeight, dryRun
  • 처리: RankingRebuildFacade가 batch JobLauncher를 호출하고 JobExecution id를 반환합니다.
  • Scorer 교체 지원: rebuild 시 Map<String, RankingScorer>로 모든 구현체를 주입받아 scorerType으로 lookup합니다. live consumer는 단일 active scorer bean만 사용합니다.
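scorerType으로 구현체를 lookup하는 rebuild 경로는 다음과 같이 스케치할 수 있습니다. 실제 코드는 Spring이 Map&lt;String, RankingScorer&gt; 빈을 주입하지만, 아래는 동작 원리를 보여주기 위해 직접 구성한 가설적 예시입니다.

```java
import java.util.Map;

// 가설적 스케치: scorerType 기반 RankingScorer lookup과 재채점.
// 인터페이스/가중치는 본문 공식을 따르되 이름은 예시용 가정입니다.
public class ScorerLookupSketch {

    interface RankingScorer {
        double calculateScore(long view, long like, long orderQty);
    }

    static final Map<String, RankingScorer> SCORERS = Map.of(
            "SATURATION", (v, l, o) ->
                    0.15 * v / (v + 100.0) + 0.35 * l / (l + 10.0) + 0.50 * o / (o + 3.0),
            "LINEAR", (v, l, o) -> 0.15 * v + 0.35 * l + 0.50 * o);

    // rebuild는 영속화된 counter를 읽어 선택된 정책으로 재채점합니다.
    static double rebuildScore(String scorerType, long view, long like, long orderQty) {
        RankingScorer scorer = SCORERS.get(scorerType);
        if (scorer == null) {
            throw new IllegalArgumentException("unknown scorerType: " + scorerType);
        }
        return scorer.calculateScore(view, like, orderQty);
    }

    public static void main(String[] args) {
        // 같은 counter(100, 10, 3)가 정책에 따라 다른 점수로 재집계됩니다.
        System.out.println(rebuildScore("SATURATION", 100, 10, 3)); // ≈ 0.5
        System.out.println(rebuildScore("LINEAR", 100, 10, 3));     // ≈ 20.0
    }
}
```

counter가 DB에 SoT로 남아 있으므로 새 정책을 같은 입력에 다시 적용하기만 하면 된다는 본문 구조를 반영한 형태입니다.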

추후 개선 여지

  • 매출 기여 기준의 별도 admin 랭킹을 분리합니다.
  • Saturation k값 / 가중치 / carry-over 0.1을 데이터 기반으로 재튜닝합니다.
  • ranking_event_fact (raw event 영속화 — 감사/장기 replay) Phase 2에서 도입합니다.

🏗️ Design Overview

변경 범위

  • 영향 받는 모듈/도메인:
    • apps/commerce-api: ranking BC (조회, carry-over, Admin rebuild API), catalog/product (rank 보강용 ACL)
    • apps/commerce-streamer: ranking BC (점수 적재, DB dual-write)
    • apps/commerce-batch: ranking batch job (rebuild, reconcile)
  • 신규 추가:
    • ranking BC (commerce-api + commerce-streamer + commerce-batch)
    • DB 테이블: ranking_daily_counter, ranking_daily_score, ranking_projection_dirty
    • catalog/product → ranking ACL (ProductRankingReader)
    • ranking → catalog ACL (RankingProductReader)
    • 23:50 carry-over 스케줄러 (RankingCarryOverScheduler, DB 기반으로 재작성)
    • GET /api/v1/rankings 신규 엔드포인트
    • POST /api-admin/v1/rankings/rebuild 신규 Admin 엔드포인트
    • RankingRebuildJob / RankingReconcileJob (commerce-batch)
  • 제거/대체: 없습니다.

주요 컴포넌트 책임

  • commerce-streamer / ranking
    • RankingCollectorConsumer: catalog-events, order-events 토픽을 배치로 구독하고 event_handled로 멱등 필터링합니다. (stat_date, product_id) 단위로 delta를 합산합니다.
    • RankingScoreService: DB 단일 TX (counter upsert + score upsert + event_handled bulk insert) 후 Redis best-effort 적재. Redis 실패 시 dirty mark.
    • LinearScorer / SaturationScorer / ConversionScorer: 점수 계산 전략입니다. 운영은 SaturationScorer를 사용합니다.
    • RankingRedisAdapter (streamer): ZSet/HASH 적재와 TTL을 관리합니다.
    • RankingDailyCounterCommandAdapter / RankingDailyScoreCommandAdapter / RankingProjectionDirtyCommandAdapter: DB 영속화 어댑터입니다.
  • commerce-api / ranking
    • RankingQueryService / RankingQueryFacade / RankingQueryController: GET /api/v1/rankings 페이지 조회와 상품 정보 조합을 담당합니다.
    • RankingRankService + RankingRankFacade: 단건 rank 조회를 담당합니다. catalog ACL 진입점이며 순환 차단을 위해 분리했습니다.
    • RankingCarryOverService + RankingCarryOverScheduler: 매일 23:50에 DB 기반 carry-over를 실행하고 Redis에 반영합니다.
    • RankingRebuildFacade + RankingRebuildAdminController: Admin rebuild 명령을 수신해 batch JobLauncher를 호출합니다.
    • RankingRedisAdapter (api): ZREVRANGE / ZCARD / ZREVRANK / ZADD / EXPIRE를 제공합니다.
    • RankingProductReader(Impl): ranking → catalog 일괄 상품 조회 ACL입니다. N+1을 방지합니다.
  • commerce-api / catalog
    • ProductRankingReader(Impl): catalog → ranking 단건 rank 조회 ACL입니다.
    • ProductQueryService.getProductRank: ranking 장애 시 null로 폴백하여 보조 정보를 격리합니다.
    • ProductQueryFacade.getProduct: 캐시된 상품 응답에 rank를 보강합니다.
  • commerce-batch / ranking
    • RankingRebuildJobConfig + RankingRebuildTasklet: ranking_daily_counter를 날짜순 cursor 조회 → Scorer 재계산 → ranking_daily_score 덮어쓰기 → Redis ZSET 재생성.
    • RankingReconcileJobConfig + RankingReconcileTasklet: ranking_projection_dirty 미해소 날짜를 조회해 rebuild dayLoop를 재사용, resolved_at 갱신.

🔁 Flow Diagram

Main Flow — 적재 (Streamer, DB dual-write)

sequenceDiagram
  autonumber
  participant Kafka
  participant Consumer as RankingCollectorConsumer
  participant Service as RankingScoreService
  participant Scorer as SaturationScorer
  participant DB
  participant Redis

  Kafka->>Consumer: batch poll (catalog-events / order-events)
  Consumer->>Consumer: event_handled 멱등 필터
  Consumer->>Consumer: (stat_date, product_id) 단위 delta 합산
  Consumer->>Service: applyDeltas(deltas)
  Note over Service,DB: Step 1 — DB 단일 TX
  Service->>DB: ranking_daily_counter upsert (delta 누적)
  Service->>Scorer: calculateScore(clamp(counter))
  Scorer-->>Service: organic_score
  Service->>DB: ranking_daily_score upsert (organic_score)
  Service->>DB: event_handled bulk insert
  Note over Service,Redis: Step 2 — Redis best-effort (TX 밖)
  Service->>Redis: HINCRBY ranking:counter:type:date
  Redis-->>Service: old / new count
  Service->>Redis: ZINCRBY ranking:all:date by deltaScore
  Note over Service: Redis 실패 시 ranking_projection_dirty mark
  Service-->>Consumer: return
  Consumer->>Kafka: ack.acknowledge()

Main Flow — 페이지 조회 (API)

sequenceDiagram
  autonumber
  participant Client
  participant Controller as RankingQueryController
  participant Facade as RankingQueryFacade
  participant Service as RankingQueryService
  participant Redis
  participant CatalogACL as RankingProductReader

  Client->>Controller: GET /api/v1/rankings
  Controller->>Facade: getRankings(date, page, size)
  Facade->>Service: getRankedProductEntries(date, page, size)
  Service->>Redis: ZREVRANGE + ZCARD ranking:all:date
  Redis-->>Service: ProductScoreEntry list, total
  Service-->>Facade: RankedResult
  Facade->>Service: readProducts(productIds)
  Service->>CatalogACL: readProducts(productIds)
  CatalogACL->>CatalogACL: ProductQueryFacade.findCacheDtosByIds
  CatalogACL-->>Service: RankingProductInfo list
  Service-->>Facade: 상품 정보 조합
  Facade-->>Controller: RankingPageOutDto
  Controller-->>Client: RankingPageResponse

Main Flow — 상품 상세 rank 보강 (catalog → ranking)

sequenceDiagram
  autonumber
  participant Client
  participant ProductFacade as ProductQueryFacade
  participant ProductService as ProductQueryService
  participant RankingACL as ProductRankingReader
  participant Ranking as RankingRankFacade
  participant Redis

  Client->>ProductFacade: GET /api/v1/products/{productId}
  ProductFacade->>ProductService: getOrLoadProductDetail (캐시)
  ProductService-->>ProductFacade: ProductDetailOutDto (rank=null)
  ProductFacade->>ProductService: getProductRank(productId)
  ProductService->>RankingACL: getProductRank(productId)
  RankingACL->>Ranking: getProductRank(today, productId)
  Ranking->>Redis: ZREVRANK ranking:all:today productId
  Redis-->>Ranking: 0-based rank or null
  Ranking-->>RankingACL: 1-based rank or null
  Note over ProductService: ranking 장애 시 catch, null 폴백
  ProductService-->>ProductFacade: rank
  ProductFacade-->>Client: ProductDetailResponse (rank 보강)

Carry-Over Flow (매일 23:50, DB 기반)

sequenceDiagram
  autonumber
  participant Sched as RankingCarryOverScheduler
  participant Service as RankingCarryOverService
  participant DB
  participant Redis

  Note over Sched: 매일 23:50 트리거
  Sched->>Service: carryOver()
  Service->>DB: SELECT organic_score, carry_score FROM ranking_daily_score WHERE stat_date = today (cursor)
  loop 상품별
    Service->>DB: INSERT ranking_daily_score (tomorrow, carry_score = total * 0.1) ON DUPLICATE KEY UPDATE
    Service->>Redis: ZADD ranking:all:tomorrow carry_score productId (best-effort)
  end
  Service->>Redis: EXPIRE ranking:all:tomorrow 2d
  Note over Service: Redis 실패 시 ranking_projection_dirty mark (CARRY_OVER_FAIL)

Rebuild Flow (Admin API → Batch Job)

sequenceDiagram
  autonumber
  participant Admin
  participant Facade as RankingRebuildFacade
  participant Job as RankingRebuildJob
  participant DB
  participant Redis

  Admin->>Facade: POST /api-admin/v1/rankings/rebuild {from, to, scorerType, dryRun}
  Facade->>Job: JobLauncher.run(RankingRebuildJob, params)
  Job-->>Facade: JobExecution id
  Facade-->>Admin: jobExecutionId

  loop date in [from..to] (오늘 제외)
    Job->>DB: SELECT * FROM ranking_daily_counter WHERE stat_date = date (cursor)
    Job->>DB: SELECT organic+carry FROM ranking_daily_score WHERE stat_date = date-1 (carry 계산)
    Job->>Job: Scorer.calculateScore(clamp(counter)) + carry * weight
    alt dryRun = false
      Job->>DB: ranking_daily_score upsert (organic_score, carry_score 덮어쓰기)
      Job->>Redis: DEL ranking:all:date → ZADD batch → EXPIRE 2d
    end
  end

Reconcile Flow (매 5분 cron)

sequenceDiagram
  autonumber
  participant Job as RankingReconcileJob
  participant DB
  participant RebuildLogic as RebuildDayLoop

  Job->>DB: SELECT stat_date FROM ranking_projection_dirty WHERE resolved_at IS NULL
  loop dirty date
    Job->>RebuildLogic: rebuildDay(date, currentScorerType)
    Note over RebuildLogic: rebuild dayLoop 재사용 (단일 경로)
    Job->>DB: UPDATE ranking_projection_dirty SET resolved_at = NOW()
  end

변경 목적

Redis ZSET 기반의 일일 "오늘의 인기 상품" 랭킹 시스템을 새로운 ranking 바운디드 컨텍스트로 구현합니다. 조회/좋아요/주문 신호를 다중 BC에서 집계하여 실시간 순위를 제공합니다.

핵심 변경점

  • commerce-api: 랭킹 조회 API(GET /api/v1/rankings), 상품 상세에 오늘 순위 추가, 23:50 carry-over 스케줄러(어제 점수 × 0.1을 내일로), 관리자 재구축 API(POST /api-admin/v1/rankings/rebuild)
  • commerce-streamer: Kafka 배치 컨슈머에서 이벤트 수집→DB 카운터 upsert(트랜잭션)→Redis 프로젝션 업데이트, 점수 계산은 Saturation 함수(view 0.15/k=100, like 0.35/k=10, order 0.50/k=3) 기반
  • commerce-batch: 랭킹 재구축(rebuild) 및 프로젝션 동기화(reconcile) 배치 작업
  • 신규 테이블: ranking_daily_counter(일간 신호 카운트), ranking_daily_score(점수 저장), ranking_projection_dirty(Redis 실패 추적)

리스크/주의사항

듀얼 라이트(DB + Redis) 패턴으로 Redis 쓰기 실패 시 ranking_projection_dirty 테이블에 표시하고 별도 reconcile 작업으로 복구하며, Kafka 멱등성 필터링과 idempotency service로 중복 처리를 방지합니다. 자정 콜드스타트 문제는 23:50에 carry-over 스케줄러 실행으로 대응하고, 랭킹 조회 실패는 rank=null로 폴백합니다. 확인 질문: 프로덕션 환경에서 rebuild 작업 시 오늘 날짜를 제외하는 이유가 진행 중인 컨슈머와의 경합을 피하기 위함이 맞는지, 그리고 carry-over weight(기본 0.1)를 관리자가 조정할 필요가 있는 비즈니스 시나리오가 있는지 확인 필요합니다.

테스트/검증

E2E 테스트(RankingQueryControllerE2ETest, RankingAdminCommandControllerE2ETest)에서 MySQL/Redis 테스트 컨테이너로 실제 동작 검증하고, 단위 테스트(RankingQueryFacadeTest, RankingCarryOverServiceTest, RankingScoreServiceTest 등)에서 각 계층의 동작을 Mockito로 검증합니다. 총 50+ 테스트 케이스로 Kafka 배치 처리, 점수 계산 정확도, Redis 실패 복구, idempotency 필터링 등을 커버합니다.

Copilot AI review requested due to automatic review settings April 9, 2026 16:06

coderabbitai bot commented Apr 9, 2026

📝 Walkthrough

랭킹 서브시스템을 신규 추가한다. Redis ZSET 기반 일별 랭킹 저장/조회, Kafka 이벤트 기반 점수 집계와 Redis 반영, 일별 점수 이월(carry-over) 배치/스케줄러, 랭킹 조회·관리용 API·배치·테스트가 포함된다.

Changes

Cohort / File(s) Summary
상품 조회 확장
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java, apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java, apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java, apps/commerce-api/src/main/java/com/loopers/catalog/product/interfaces/web/response/ProductDetailResponse.java
상품 상세 응답에 오늘 기준 랭킹(rank) 필드 추가 및 조회 로직 보강. ProductQueryService에 ProductRankingReader 의존 추가, rank 조회 실패는 무시하도록 예외 처리함.
랭킹 조회 API 및 페이징 응답
apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingQueryFacade.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRankFacade.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java, apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java, apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingItemResponse.java, apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java
GET /api/v1/rankings 엔드포인트, date/page/size 파라미터 정규화, Redis에서 순위 항목 조회 후 cross-BC로 상품 메타를 일괄 조회해 1-based rank 포함 응답 생성.
Redis 어댑터(조회·운영용)
apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
읽기전용 랭킹 포트 구현: top 조회, zset 크기, reverseRank 등. carryOver용 Lua 스크립트 및 TTL 설정, batch ZADD 지원.
관리 재빌드 API 및 DTO
apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/in/AdminRankingRebuildInDto.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRebuildFacade.java, apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingAdminCommandController.java, apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/request/AdminRankingRebuildRequest.java, apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/AdminRankingRebuildResponse.java
관리자용 랭킹 재빌드 요청 API 추가. 입력 검증(날짜 포맷·범위), 기본값 적용(scorerType, carryOverWeight), 202 응답 구조 제공.
이월(캐리오버) 서비스·스케줄러·JDBC 어댑터
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java, apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java, apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/jdbc/RankingDailyScoreJdbcAdapter.java
당일 스냅샷을 기반으로 내일로 carryOver 계산하여 DB upsert 및 Redis에 batch ZADD로 반영, 실패는 로깅 처리. 스케줄러는 매일 23:50 실행 조건부 등록.
스트리머(이벤트 처리) 핵심: 점수 집계 · Redis 반영
apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java, apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java, apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java, apps/commerce-streamer/src/main/java/com/loopers/ranking/application/dto/RankingDailyKey.java
Kafka 배치 소비자 추가, 이벤트별(뷰/좋아요/주문) 델타 집계, DB 업서트로 영구화 후 scorer로 점수 산정, Redis 반영(증분·TTL), idempotency 처리 및 projection dirty 마킹 지원.
스코어러 구현군
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/SaturationScorer.java, .../ConversionScorer.java, .../LinearScorer.java, apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorerConfig.java, apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorer.java
세 가지 점수 알고리즘 추가(SATURATION 기본 Bean, CONVERSION/LINEAR 구현). 배치용 scorer 맵 구성으로 배치 재빌드에서 선택 사용 가능.
배치: 재빌드·리콘실리(재투영) Tasklet 및 Job 설정
apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingRebuildTasklet.java, .../RankingReconcileTasklet.java, .../RankingRebuildJobConfig.java, .../RankingReconcileJobConfig.java
Batch Job/Step/Tasklet 추가. 날짜 범위별 재빌드 로직(DB upsert, Redis 재생성, carryOver 적용) 및 projection dirty 재해결 플로우 구현.
영속성·엔티티·JPA·JDBC 어댑터
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/*, .../jpa/*, .../RankingDailyCounterCommandAdapter.java, .../RankingDailyScoreCommandAdapter.java, .../RankingProjectionDirtyAdapter.java
ranking_daily_counter/score/dirty 관련 JPA 엔티티·Id 클래스·JPA 리포지토리 및 JDBC 기반 읽기/쓰기 어댑터 추가. 배치·서비스에서 사용되는 upsert/조회 구현 포함.
테스트·E2E·설정
apps/commerce-api/src/test/..., apps/commerce-streamer/src/test/..., apps/commerce-api/src/test/resources/application-test.yml, .http/ranking.http
단위·통합·E2E 테스트 대량 추가(Topic/Kafka 제외 대부분 Mockito + Testcontainers). 테스트 전용으로 carryover scheduler 비활성화 설정 추가. HTTP 요청 컬렉션(.http) 추가.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant RankingQueryController
    participant RankingQueryFacade
    participant RankingQueryService
    participant RankingRedisAdapter
    participant RankingProductReader
    participant ProductQueryFacade

    Client->>RankingQueryController: GET /api/v1/rankings?date=&page=&size=
    RankingQueryController->>RankingQueryFacade: getRankings(date,page,size)
    RankingQueryFacade->>RankingQueryService: getRankedProductEntries(date,page,size)
    RankingQueryService->>RankingRedisAdapter: getTopProducts(date,offset,count)
    RankingRedisAdapter-->>RankingQueryService: List<ProductScoreEntry>
    RankingQueryService->>RankingRedisAdapter: getZSetSize(date)
    RankingRedisAdapter-->>RankingQueryService: totalElements
    RankingQueryService-->>RankingQueryFacade: RankedResult
    RankingQueryFacade->>RankingQueryService: readProducts(productIds)
    RankingQueryService->>RankingProductReader: readProducts(productIds)
    RankingProductReader->>ProductQueryFacade: findCacheDtosByIds(productIds)
    ProductQueryFacade-->>RankingProductReader: List<ProductCacheDto>
    RankingProductReader-->>RankingQueryService: List<RankingProductInfo>
    RankingQueryService-->>RankingQueryFacade: List<RankingProductInfo>
    RankingQueryFacade-->>RankingQueryController: RankingPageOutDto
    RankingQueryController-->>Client: 200 OK (RankingPageResponse)
sequenceDiagram
    participant Kafka
    participant RankingCollectorConsumer
    participant RankingScoreService
    participant RankingRedisAdapter
    participant EventIdempotencyService

    Kafka->>RankingCollectorConsumer: batch records
    RankingCollectorConsumer->>RankingScoreService: findAlreadyHandledIds(eventIds)
    RankingScoreService->>EventIdempotencyService: findAlreadyHandledIds(eventIds,"ranking-collector")
    EventIdempotencyService-->>RankingScoreService: handledIds
    RankingCollectorConsumer->>RankingCollectorConsumer: aggregate deltas per (date,product)
    alt deltas non-empty
        RankingCollectorConsumer->>RankingScoreService: persistDeltas(deltas,newEventIds)
        RankingScoreService->>RankingRedisAdapter: incrementCounterAndGetCounts(date,product,...)
        RankingRedisAdapter-->>RankingScoreService: CounterResult(old,new)
        RankingScoreService->>RankingRedisAdapter: incrementScore(date,product,scoreDelta)
        RankingScoreService->>RankingRedisAdapter: ensureTtl(date,TTL)
        RankingScoreService->>EventIdempotencyService: markHandledBatch(newEventIds,"ranking-collector")
    end
    RankingCollectorConsumer->>Kafka: ack.acknowledge()

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~65 minutes



Copilot AI left a comment


Pull request overview

Redis ZSET 기반의 일별 인기 상품 랭킹 시스템을 신규 ranking BC로 분리해 구현하고, commerce-streamer가 Kafka 이벤트를 소비해 점수를 적재하며 commerce-api가 랭킹/상품 rank 조회 API를 제공하도록 연결한 PR입니다. 자정 콜드 스타트를 완화하기 위한 carry-over 스케줄러와 cross-BC ACL(카탈로그↔랭킹)도 포함합니다.

Changes:

  • commerce-streamer: Kafka 배치 컨슈머 + saturation 기반 스코어 계산 + Redis ZSET/HASH 적재 및 멱등 처리
  • commerce-api: GET /api/v1/rankings 신규, 상품 상세 응답에 rank 보강, carry-over 스케줄러 및 catalog/ranking ACL 추가
  • Round9 문서(요구사항/결정/학습/검증) 및 테스트(E2E/단위/통합) 추가

Reviewed changes

Copilot reviewed 43 out of 56 changed files in this pull request and generated 5 comments.

File Description
round9-docs/00-requirements.md Round 9 과제 요구사항 문서 추가
round9-docs/01-information.md 랭킹 시스템 개요/학습용 문서 추가
round9-docs/02-study-data.md Redis ZSET/키전략/TTL/콜드스타트 학습 정리 추가
round9-docs/03-study-deepdive.md 특정 주제 deep dive 기록 추가
round9-docs/04-decisions.md BC 분리/스코어 모델/컨슈머/API/콜드스타트 의사결정 문서 추가
round9-docs/05-e2e-verification.md E2E 검증 절차/결과 문서 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java 랭킹 집계용 Kafka 배치 컨슈머 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java Redis 카운터 갱신 + delta 스코어 계산 + 멱등 마킹 서비스 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java 랭킹 적재용 Redis 어댑터(HASH/ZSET/TTL) 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/SaturationScorer.java 기본 스코어러(saturation) 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java 대안 스코어러(선형) 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java 대안 스코어러(전환율) 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingScorer.java scorer 포트 인터페이스 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java streamer Redis 포트 정의 추가
apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/CounterResult.java 카운터 old/new 결과 record 추가
apps/commerce-streamer/src/test/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumerTest.java Added unit tests for consumer delta aggregation, idempotency, and error handling
apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java Added unit tests for score calculation, Redis port calls, and idempotency
apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapterTest.java Added Redis adapter integration tests (Testcontainers)
apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/scorer/ScorerComparisonTest.java Added comparison tests for the three scorers
apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java Added ranking list query endpoint
apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java Added ranking page response DTO
apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingItemResponse.java Added ranking item response DTO
apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingQueryFacade.java Added facade composing ranking + product info for ranking queries
apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRankFacade.java Added dedicated facade for single-product rank lookup
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java Added service for ZSET queries + catalog ACL calls
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java Added ZREVRANK-based single rank lookup service
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java Added carry-over execution service
apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java Added 23:50 carry-over scheduler (toggled on/off via property)
apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java Added Redis query/carry-over adapter for the API
apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java Added Redis port definition for the API
apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/ProductScoreEntry.java Added record for ZSET query results
apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankedResult.java Added service-to-facade transfer DTO
apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingPageOutDto.java Added ranking page out DTO
apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingItemOutDto.java Added ranking item out DTO
apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductReader.java Added ranking→catalog product lookup port
apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductInfo.java Added product info DTO for composing ranking lists
apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java Added ranking→catalog ACL implementation
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/client/ranking/ProductRankingReader.java Added catalog→ranking rank lookup port
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java Added catalog→ranking ACL implementation
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java Added ranking lookup/fallback logic to enrich product detail with rank
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java Changed product detail response to be enriched with rank
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductDetailOutDto.java Added rank field to product detail out DTO
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/cache/dto/ProductCacheDto.java Applied rank=null default when converting to the PDP DTO
apps/commerce-api/src/main/java/com/loopers/catalog/product/interfaces/web/response/ProductDetailResponse.java Added rank field to product detail response
apps/commerce-api/src/test/resources/application-test.yml Added setting to disable the carry-over scheduler in tests
apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java Added E2E tests for the ranking list and rank inclusion in product detail
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java Added unit tests for single rank lookup
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java Added unit tests for ranking entry queries
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java Added unit tests for the carry-over service
apps/commerce-api/src/test/java/com/loopers/ranking/application/scheduler/RankingCarryOverSchedulerTest.java Added unit test for the scheduler's exception-swallowing behavior
apps/commerce-api/src/test/java/com/loopers/ranking/application/facade/RankingQueryFacadeTest.java Added unit tests for product-info join/skip and page rank calculation
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java Added test for null fallback when the ranking lookup fails
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java Added test for rank enrichment in the product detail response
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java Updated for ProductDetailOutDto signature change (rank added)
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java Updated for ProductDetailOutDto signature change (rank added)
.http/ranking.http Added HTTP script for calling the ranking API locally


Comment on lines +72 to +79
```java
// 3. Ensure TTL (ZSET + 3 counter keys)
@Override
public void ensureTtl(String dateStr, Duration ttl) {
    redisTemplate.expire(ZSET_PREFIX + dateStr, ttl);
    redisTemplate.expire(COUNTER_VIEW_PREFIX + dateStr, ttl);
    redisTemplate.expire(COUNTER_LIKE_PREFIX + dateStr, ttl);
    redisTemplate.expire(COUNTER_ORDER_PREFIX + dateStr, ttl);
}
```

Copilot AI Apr 9, 2026


ensureTtl() calls expire() on every processed batch, so the TTL is continuously reset. While events keep arriving, the keys effectively never expire (a sliding 2-day window), delaying memory reclamation and adding unnecessary Redis calls. Please change it to set the expiry only when no TTL is present (e.g., check the current TTL first and set it only when it is -1, or use Redis's NX option).

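A minimal sketch of the suggested fix, assuming a simplified expiry abstraction — `ExpiryOps` and the in-memory fake below are hypothetical stand-ins for the PR's actual RedisTemplate wiring:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

// Illustrative port: -1 from getExpireSeconds mirrors Redis's TTL reply for "no expiry set".
interface ExpiryOps {
    long getExpireSeconds(String key);
    void expire(String key, Duration ttl);
}

public class TtlOnce {
    // NX-style: set the TTL only when none exists, so repeated batches never slide the expiry.
    public static void ensureTtlIfAbsent(ExpiryOps ops, String key, Duration ttl) {
        if (ops.getExpireSeconds(key) == -1) {
            ops.expire(key, ttl);
        }
    }

    // Demonstration against an in-memory fake: the second call must not reset the TTL.
    public static long demo() {
        Map<String, Long> ttls = new HashMap<>();
        ExpiryOps fake = new ExpiryOps() {
            public long getExpireSeconds(String key) { return ttls.getOrDefault(key, -1L); }
            public void expire(String key, Duration ttl) { ttls.put(key, ttl.getSeconds()); }
        };
        ensureTtlIfAbsent(fake, "ranking:all:20260409", Duration.ofDays(2)); // sets 172800s
        ttls.put("ranking:all:20260409", 99L); // simulate a partially elapsed TTL
        ensureTtlIfAbsent(fake, "ranking:all:20260409", Duration.ofDays(2)); // skipped
        return ttls.get("ranking:all:20260409"); // stays 99, not reset
    }
}
```

With Spring Data Redis the same check could use `getExpire()` before `expire()`, at the cost of one extra round trip per batch.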
Comment on lines +84 to +98
```java
// 4. carry-over: ZUNIONSTORE dest 1 src WEIGHTS weight (Lua script)
@Override
public void carryOver(String fromDateStr, String toDateStr, double weight) {
    String srcKey = ZSET_PREFIX + fromDateStr;
    String destKey = ZSET_PREFIX + toDateStr;

    String luaScript =
        "redis.call('ZUNIONSTORE', KEYS[1], 1, KEYS[2], 'WEIGHTS', ARGV[1]) " +
        "return 1";
    writeTemplate.execute(
        new org.springframework.data.redis.core.script.DefaultRedisScript<>(luaScript, Long.class),
        List.of(destKey, srcKey),
        String.valueOf(weight)
    );
}
```

Copilot AI Apr 9, 2026


Because carry-over takes the form ZUNIONSTORE dest 1 src WEIGHTS w, running it while destKey already holds scores (duplicate scheduler execution, timezone/clock skew, retries after midnight, etc.) overwrites the existing destKey contents and loses early-event scores. Either make it sum with destKey's existing values (e.g., include destKey itself in the union), or add a lock/idempotency guard (SETNX, etc.) that guarantees carry-over runs exactly once.

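A sketch of the additive variant, assuming the PR keeps its DefaultRedisScript approach — `AdditiveCarryOver` and its method names are illustrative, not the PR's actual classes. Including destKey itself in the union (with weight 1) makes a rerun sum with existing scores instead of overwriting them:

```java
import java.util.List;

// Illustrative: ZUNIONSTORE over 2 keys, where KEYS[1] (dest) keeps weight 1
// and KEYS[2] (src) is scaled by ARGV[1], so existing dest scores are preserved.
public class AdditiveCarryOver {
    public static String script() {
        return "redis.call('ZUNIONSTORE', KEYS[1], 2, KEYS[1], KEYS[2], 'WEIGHTS', 1, ARGV[1]) return 1";
    }

    // Key order must match the script: KEYS[1] = dest, KEYS[2] = src.
    public static List<String> keys(String destKey, String srcKey) {
        return List.of(destKey, srcKey);
    }
}
```

Note that this variant still double-counts src on duplicate runs, so the SETNX-style once-only guard the comment mentions remains the safer option for scheduler retries.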
Comment on lines +128 to +133
- Example

```java
rank:all:20250906 // ranking aggregate for Sep 6
rank:all:20250907 // ranking aggregate for Sep 7
```

Copilot AI Apr 9, 2026


The key example uses rank:all:{yyyyMMdd}, but the implementation/design docs in this PR use ranking:all:{yyyyMMdd} (inconsistent with the code and the other docs). Please align the doc example with the actual key naming (ranking:all:).

Comment on lines +179 to +189
**Command**
```bash
docker exec redis-master redis-cli ZREVRANGE ranking:all:20260409 0 -1 WITHSCORES
```

**Output (descending by score)**
```
productId=1 score=0.05138339920948616
productId=2 score=0.04292929292929293
productId=3 score=0.0043689320388349516
```

Copilot AI Apr 9, 2026


The raw output of redis-cli ZREVRANGE ... WITHSCORES normally prints member and score on separate lines (e.g., 1, then the score on the next line). The productId=1 score=... format in the document differs from the actual command output and can confuse anyone trying to reproduce it. Please record the actual output format, or if this was edited for readability, state that explicitly.

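For reference, WITHSCORES results come back as alternating member/score entries; in an interactive redis-cli session the output for the documented scores would look like:

```
1) "1"
2) "0.05138339920948616"
3) "2"
4) "0.04292929292929293"
5) "3"
6) "0.0043689320388349516"
```

When stdout is not a TTY (as with a plain `docker exec`), redis-cli prints the same pairs as unnumbered, unquoted lines.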
Comment on lines +9 to +31
## [Keyword 3] Why apply log to the order score

### The user's question
> "Is it really a problem if a 100,000-won product with 100,000 orders dominates the entire ranking? If it contributed that much revenue, isn't it more of a problem if it *doesn't* dominate?"

### Conclusion

**The role of log is not to flip rankings but to compress gaps.**

- `log(100,000)` ≈ 11.5 / `log(1,000)` ≈ 6.9 / `log(100)` ≈ 4.6
- A 1,000× raw gap (100,000 vs 100) compresses to roughly 2.5× after log
- 1st place is still 1st place; lower-ranked products simply move back into competitive range

**The real problem without log (signal imbalance):**
- 10,000 view events → `0.1 * 10,000 = 1,000 points`
- 1 order (100,000 won) → `0.7 * 100,000 = 70,000 points`
- A single order scores 70× more than all those views → the view/like weights lose their intended meaning
- As a result, the order metric effectively monopolizes the score and the other signals become irrelevant

**After applying log (signal balance):**
- 10,000 view events → `0.1 * 10,000 = 1,000 points`
- 1 order (100,000 won) → `0.7 * log(100,000)` ≈ `8.05 points`
- All three metrics contribute to the score in meaningful proportions
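The compression claim above can be checked numerically (natural log, matching the document's log(100,000) ≈ 11.5):

```java
public class LogCompression {
    // Ratio between two values after natural-log compression.
    public static double compressedGap(double a, double b) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        System.out.printf("log(100000)=%.1f, log(1000)=%.1f, log(100)=%.1f%n",
                Math.log(100_000), Math.log(1_000), Math.log(100));
        // A 1,000x raw gap (100,000 vs 100) shrinks to ln(10^5)/ln(10^2) = 5/2 = 2.5x.
        System.out.println(compressedGap(100_000, 100)); // 2.5
    }
}
```

The ratio is exact here because both inputs are powers of 10, so the log ratio reduces to the ratio of exponents.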

Copilot AI Apr 9, 2026


This document explains the decision to 'apply log to the order score', but the PR's actual implementation feeds orders in as orderQtyDaily and uses a saturation-based scoring model. Without at least a brief note on how this relates to the current implementation (was it an alternative that was considered? why was it not adopted?), readers may mistakenly assume the live system is log-based.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java (1)

66-86: ⚠️ Potential issue | 🟡 Minor

The new rank field has no cache round-trip verification, so regressions can slip through

In production, if the cache serialization format drifts, the rank value in the PDP response can be dropped or corrupted, but the current test does not assert rank, making early detection difficult. It is safer to add a rank == null assert to the existing case and one round-trip case with a non-null rank DTO. Add a test putAndGetProductDetailOutDto_withRank() verifying that rank=7L is restored intact.

Suggested fix
@@
 			assertAll(
 				() -> assertThat(result).isPresent(),
@@
-				() -> assertThat(result.get().likeCount()).isEqualTo(50L)
+				() -> assertThat(result.get().likeCount()).isEqualTo(50L),
+				() -> assertThat(result.get().rank()).isNull()
 			);
 		}
+
+		`@Test`
+		`@DisplayName`("[put() -> get()] store DTO with rank, then read -> rank field preserved")
+		void putAndGetWithRank() {
+			String key = "product:rank:1";
+			ProductDetailOutDto dto = new ProductDetailOutDto(
+				1L, 1L, "TestBrand", "TestProduct",
+				new BigDecimal("59900.00"), 100L, "description", 50L, 7L
+			);
+			productCacheManager.put(key, dto, Duration.ofMinutes(10));
+			Optional<ProductDetailOutDto> result = productCacheManager.get(key, ProductDetailOutDto.class);
+			assertThat(result).isPresent();
+			assertThat(result.get().rank()).isEqualTo(7L);
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java`
around lines 66 - 86, Existing test in ProductCacheManagerTest doesn't assert
the new ProductDetailOutDto.rank field; update the current test that calls
productCacheManager.put/get to include an assertion that result.get().rank() is
null, and add a new test method (e.g., putAndGetProductDetailOutDto_withRank)
which constructs a ProductDetailOutDto with rank=7L, calls
productCacheManager.put(key, dto, Duration.ofMinutes(10)) and then
productCacheManager.get(key, ProductDetailOutDto.class), and asserts the
Optional is present and that result.get().rank() equals 7L (alongside the
existing field assertions) to ensure rank survives cache serialization
round-trip.
🧹 Nitpick comments (13)
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java (1)

57-82: Verification of the getProductRank() call is missing.

The getProductSuccess test stubs given(productQueryService.getProductRank(1L)).willReturn(null), but there is no verify(productQueryService).getProductRank(1L). The getProductWithRank test (Line 121) does verify it, so adding the check here is recommended for consistency.

♻️ Suggested verify addition
 assertAll(
 	() -> assertThat(result.id()).isEqualTo(1L),
 	() -> assertThat(result.brandId()).isEqualTo(1L),
 	() -> assertThat(result.brandName()).isEqualTo("나이키"),
 	() -> assertThat(result.name()).isEqualTo("테스트 상품"),
 	() -> assertThat(result.price()).isEqualByComparingTo(new BigDecimal("10000")),
 	() -> assertThat(result.stock()).isEqualTo(100L),
 	() -> assertThat(result.likeCount()).isEqualTo(0L),
 	() -> assertThat(result.rank()).isNull(),
 	() -> verify(productQueryService).getOrLoadProductDetail(1L),
+	() -> verify(productQueryService).getProductRank(1L),
 	() -> verify(productQueryService).saveViewOutbox(10L, 1L)
 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java`
around lines 57 - 82, The test getProductSuccess sets up a stub for
productQueryService.getProductRank(1L) but never verifies it; update the
ProductQueryFacadeTest.getProductSuccess to include a verification that
productQueryService.getProductRank(1L) was called (e.g., add
verify(productQueryService).getProductRank(1L)) alongside the existing verify
calls so the test consistently asserts the rank lookup behavior.
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java (2)

64-69: An unnecessary Redis call is made when scoreDelta=0.

hincrby() skips the write when delta=0, but incrementScore() still calls ZINCRBY when scoreDelta=0. Applying the same optimization is recommended for consistency and to avoid needless network calls.

♻️ Suggested scoreDelta=0 optimization
 `@Override`
 public void incrementScore(String dateStr, Long productId, double scoreDelta) {
+	if (scoreDelta == 0.0) return;
 	redisTemplate.opsForZSet().incrementScore(
 		ZSET_PREFIX + dateStr, productId.toString(), scoreDelta);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java`
around lines 64 - 69, The incrementScore method in RankingRedisAdapter currently
calls redisTemplate.opsForZSet().incrementScore(ZSET_PREFIX + dateStr,
productId.toString(), scoreDelta) even when scoreDelta == 0, causing unnecessary
Redis calls; add a guard at the start of incrementScore to return immediately if
scoreDelta == 0 (mirroring hincrby behavior) so no ZINCRBY is invoked for zero
deltas and unnecessary network/writes are avoided.

72-79: A TTL-setting failure can leave expirations partially applied.

expire() is called sequentially on the 4 keys, so a Redis connection error midway leaves TTLs set on only some of them. If key expiry times diverge in production, consistency issues between the counters and the ZSET can follow. Consider atomic handling via a pipeline or a Lua script.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java`
around lines 72 - 79, ensureTtl currently calls redisTemplate.expire
sequentially on ZSET_PREFIX + dateStr and the three COUNTER_* keys which can
leave TTLs partially applied if a Redis error occurs; change ensureTtl to
perform the four EXPIRE commands atomically using a Redis pipeline or a single
Lua script: use redisTemplate.executePipelined/execute with a RedisCallback to
enqueue expire for ZSET_PREFIX + dateStr, COUNTER_VIEW_PREFIX + dateStr,
COUNTER_LIKE_PREFIX + dateStr, COUNTER_ORDER_PREFIX + dateStr (or an EVAL that
calls redis.call('expire', key, ttl) for each key), and ensure you
capture/handle errors from the pipeline/script so failures are detected and
logged/propagated rather than leaving mixed TTL state.
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java (1)

18-20: The weights sum to 0.9, not 1.0.

WEIGHT_VIEW(0.1) + WEIGHT_LIKE(0.2) + WEIGHT_ORDER(0.6) = 0.9, inconsistent with SaturationScorer's weight sum of 1.0. If this is intentional, document the rationale in a comment; otherwise adjust the weights.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java`
around lines 18 - 20, The constants WEIGHT_VIEW, WEIGHT_LIKE, and WEIGHT_ORDER
in LinearScorer currently sum to 0.9 (0.1 + 0.2 + 0.6) which contradicts
SaturationScorer's total of 1.0; either adjust the weights in LinearScorer so
they sum to 1.0 (e.g., change WEIGHT_ORDER to 0.7 or normalize the three values)
or add a clear comment in LinearScorer explaining the intentional 0.9 total and
the rationale for deviating from SaturationScorer, referencing the constants
WEIGHT_VIEW, WEIGHT_LIKE, WEIGHT_ORDER and the SaturationScorer behavior so
reviewers understand the choice.
apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java (1)

15-19: An NPE is possible when outDto.content() is null.

If the upper layer always guarantees a non-null list this is fine, but adding a null check is recommended as defensive coding.

Defensive null-handling example
 	public static RankingPageResponse from(RankingPageOutDto outDto) {
-		List<RankingItemResponse> items = outDto.content().stream()
+		List<RankingItemResponse> items = outDto.content() == null
+			? List.of()
+			: outDto.content().stream()
 			.map(RankingItemResponse::from)
 			.toList();
 		return new RankingPageResponse(items, outDto.page(), outDto.size(), outDto.totalElements());
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java`
around lines 15 - 19, The static factory RankingPageResponse.from currently maps
outDto.content() directly which can NPE if content is null; update from to
defensively handle a null content by treating it as an empty list before mapping
with RankingItemResponse::from (e.g., replace direct outDto.content().stream()
with a null-safe expression such as
Optional.ofNullable(outDto.content()).orElse(Collections.emptyList()) or
Collections.emptyList() then .stream()), then proceed to build and return the
new RankingPageResponse(items, outDto.page(), outDto.size(),
outDto.totalElements()).
apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java (1)

31-31: The cron expression does not specify a timezone.

It depends on the server's default timezone, so specifying the zone attribute is recommended for consistency in distributed environments.

Timezone example
-	`@Scheduled`(cron = "0 50 23 * * *")
+	`@Scheduled`(cron = "0 50 23 * * *", zone = "Asia/Seoul")
 	public void execute() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java`
at line 31, The `@Scheduled`(cron = "0 50 23 * * *") on RankingCarryOverScheduler
relies on the server default timezone; update the Scheduled annotation to
include a zone attribute (e.g., zone = "Asia/Seoul" or your team's canonical
timezone) so the cron triggers consistently across environments. Locate the
`@Scheduled` annotation in the RankingCarryOverScheduler class and add the zone
property to the annotation with the agreed IANA timezone string.
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java (1)

31-31: @Transactional(readOnly = true) is unnecessary on a Redis-only method.

getProductRank() only reads from Redis and performs no JPA/DB work. @Transactional adds needless transaction overhead, so removing it is recommended.

Remove @Transactional
	// 1. Look up the ranking position of a specific product
-	`@Transactional`(readOnly = true)
 	public Long getProductRank(String dateStr, Long productId) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java`
at line 31, The getProductRank() method in RankingRankService is annotated with
`@Transactional`(readOnly = true) but only performs Redis reads; remove the
`@Transactional`(readOnly = true) annotation from the getProductRank() method to
avoid unnecessary transaction overhead, leaving the method as a plain
(non-transactional) method in RankingRankService.
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java (1)

39-50: The test can be flaky when run near midnight.

The date can change between the LocalDate.now() call in the test code and the one in the service code. Injecting a fixed time or using a Clock abstraction is recommended.

Additionally, consider these test cases:

  • Whether exceptions propagate when the Redis connection fails
  • The case where carryOver succeeds but setTtl fails
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java`
around lines 39 - 50, The test is flaky because it uses LocalDate.now()
directly; change RankingCarryOverService to accept a java.time.Clock (e.g., via
constructor) and use LocalDate.now(clock) inside carryOver(), then update the
test to construct a fixed Clock (Clock.fixed(...)) so todayStr and tomorrowStr
are deterministic and verify calls against rankingCarryOverService.carryOver();
additionally add separate tests that simulate rankingRedisPort.carryOver
throwing (to assert exception propagation) and simulate carryOver succeeding but
rankingRedisPort.setTtl throwing (to assert handling) using the same fixed
Clock.
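The Clock-injection idea can be sketched independently of the service — names here are illustrative, not the PR's classes. Clock.fixed pins "now" so the yyyyMMdd key is deterministic even across the midnight boundary:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class FixedClockExample {
    private static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE;

    // A service would receive this Clock via its constructor instead of calling LocalDate.now().
    public static String dateKey(Clock clock) {
        return LocalDate.now(clock).format(DATE_FORMAT);
    }

    public static void main(String[] args) {
        // 14:59 UTC = 23:59 in Asia/Seoul, so the Seoul date key is still 20260409.
        Clock beforeMidnight = Clock.fixed(Instant.parse("2026-04-09T14:59:00Z"), ZoneId.of("Asia/Seoul"));
        // One minute later it is 00:00 KST on the next day.
        Clock afterMidnight = Clock.fixed(Instant.parse("2026-04-09T15:00:00Z"), ZoneId.of("Asia/Seoul"));
        System.out.println(dateKey(beforeMidnight)); // 20260409
        System.out.println(dateKey(afterMidnight));  // 20260410
    }
}
```

Tests then construct the fixed Clock directly, making boundary cases like these reproducible without sleeping or mocking static time.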
apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java (1)

36-37: LocalDate.now() depends on the system default timezone.

If server timezones differ in a distributed environment, key mismatches can occur. Use an explicit timezone (e.g., ZoneId.of("Asia/Seoul")) or set the timezone at application startup.

Timezone example
+	private static final ZoneId ZONE_ID = ZoneId.of("Asia/Seoul");
+
 	public void carryOver() {
-		LocalDate today = LocalDate.now();
+		LocalDate today = LocalDate.now(ZONE_ID);
 		LocalDate tomorrow = today.plusDays(1);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`
around lines 36 - 37, The code in RankingCarryOverService uses LocalDate.now()
which depends on the JVM default timezone and can cause key mismatches across
servers; change to obtain dates from an explicit timezone or injected Clock
(e.g., use LocalDate.now(ZoneId.of("Asia/Seoul")) or better inject a Clock and
call LocalDate.now(clock)) so all instances compute the same today/tomorrow
values; update the two usages (today and tomorrow = today.plusDays(1))
accordingly and add a constructor/bean to provide the Clock or a configurable
ZoneId.
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java (1)

50-97: A test for the isBlank() date branch is missing, so regressions can slip through

Operationally, blank query parameters do arrive in practice, and only the null path is currently verified, making a regression of the blank branch at Line 65 hard to detect early. Add a case where dateStr = " " resolves to the today key to harden the branch. A new test blankDateUsesTodayKey() verifying the getTopProducts(TODAY, offset, size) call is sufficient.

As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java`
around lines 50 - 97, Add a unit test that covers the blank-string date branch
so isBlank() behavior doesn't regress: create a test method (e.g.,
blankDateUsesTodayKey) that calls rankingQueryService.getRankedProductEntries(" 
", 0, 20) and verify that rankingRedisPort.getTopProducts(TODAY, 0, 20) and
rankingRedisPort.getZSetSize(TODAY) are invoked; this mirrors the existing null
test path and ensures the isBlank() branch is exercised alongside
getRankedProductEntries, TODAY, and rankingRedisPort interactions.
apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java (1)

42-93: A test for the blank date input branch should be added

Operationally the date parameter can arrive blank, and the current tests only cover null/explicit dates, so a regression of the isBlank() path at Line 65 could be missed. Add a case verifying that getProductRank(" ", productId) queries with the today key. A new test blankDateUsesTodayKey() verifying the Redis port call key is sufficient.

As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java`
around lines 42 - 93, Add a unit test blankDateUsesTodayKey() that verifies the
blank-date branch of getProductRank: arrange by stubbing
rankingRedisPort.getReversedRank(TODAY, <productId>) to return a sample value,
call rankingRankService.getProductRank("   ", <productId>), and assert the
returned rank is converted correctly (returned value + 1) and that
rankingRedisPort.getReversedRank was invoked with the TODAY key; target the
getProductRank method and the rankingRedisPort.getReversedRank(TODAY, ...)
interaction to cover the isBlank() path.
apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java (1)

93-153: A test for the negative scoreDelta path should be added

Lines 93-153 only verify scoreDelta > 0 and scoreDelta == 0. Operationally, accumulated unlike/cancel events can push a score down (a negative ZINCRBY); if this path regresses, rankings could silently stay higher than reality. The fix is to mock the counters so old > new and verify the incrementScore(..., negative) call. Add a scenario where newScore < oldScore after clamping (e.g., like 5→4, order 2→1) to pin down the negative-increment call.

Test addition example
+		`@Test`
+		`@DisplayName`("[applyDeltas()] scoreDelta<0 -> negative ZINCRBY call")
+		void negativeScoreDelta() {
+			Long productId = 1L;
+			Map<Long, long[]> deltas = Map.of(productId, new long[]{0, -1, 0});
+			Set<String> eventIds = Set.of("evt-1");
+
+			given(rankingRedisPort.incrementCounterAndGetCounts(anyString(), eq(productId), eq(0L), eq(-1L), eq(0L)))
+				.willReturn(new CounterResult(10, 10, 5, 4, 2, 2));
+
+			rankingScoreService.applyDeltas(deltas, eventIds);
+
+			verify(rankingRedisPort).incrementScore(anyString(), eq(productId), doubleThat(d -> d < 0));
+		}

As per coding guidelines **/*Test*.java: unit tests should cover boundary values, failure cases, and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java`
around lines 93 - 153, Add a unit test in RankingScoreServiceTest to cover the
negative scoreDelta path: when
rankingRedisPort.incrementCounterAndGetCounts(...) returns counts where old >
new (so scoreDelta < 0) the RankingScoreService.applyDeltas(...) should call
rankingRedisPort.incrementScore(...) with a negative double; create a test
(mirror existing singleProductDelta/zeroScoreDelta) that stubs
incrementCounterAndGetCounts to return a CounterResult causing newScore <
oldScore (e.g., like 5→4, order 2→1), invoke applyDeltas with appropriate deltas
and eventIds, then verify incrementScore(anyString(), eq(productId),
doubleThat(d -> d < 0)) and ensure eventIdempotencyService.markHandledBatch(...)
and ensureTtl(...) behavior as applicable.
apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java (1)

145-166: Failure cases (malformed format) for the date parameter need verification

Lines 145-166 only check queries with a valid date. Operationally, if invalid input passes with a 200 or the error format drifts, client retries and monitoring metrics get distorted. The fix is to add tests asserting a 400 status and the standard error response structure for invalid dates (2026-04-09, 20261301, etc.). Add separate cases for out-of-range values and for null/empty strings.

As per coding guidelines **/*Test*.java: unit/integration tests should cover failure cases and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java`
around lines 145 - 166, Add negative tests in RankingQueryControllerE2ETest to
cover invalid date inputs: create new test methods (e.g.,
rankingsWithInvalidDateParam_returnsBadRequest,
rankingsWithEmptyOrNullDate_returnsBadRequest) that call
mockMvc.perform(get("/api/v1/rankings").param("date", "...")) with malformed
values ("20261301", "2026-04-09", "", and omit param/null) and assert
status().isBadRequest() and the standard error response structure (use jsonPath
assertions against the controller's error fields). Place these tests alongside
rankingsWithDateParam so they exercise the same controller path and validate
both format errors and empty/missing date cases. Ensure test names and
assertions reference the existing mockMvc usage and JSON error shape used
elsewhere in this test class.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.http/ranking.http:
- Around line 19-21: Replace the hardcoded credential headers in the .http
request (X-Loopers-LoginId and X-Loopers-LoginPw) with environment-variable
placeholders (e.g., {{LOOPERS_LOGIN_ID}} and {{LOOPERS_LOGIN_PW}}) and leave
their default values empty so no real credentials are stored in the file; update
the GET request header lines to reference those placeholders and ensure any
README or contributing docs show how to set these env vars locally/CI, and add a
CI secret-scanning step (e.g., gitleaks/trufflehog) to block credential-pattern
commits.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`:
- Around line 187-193: The getProductRank method currently logs the full
exception stacktrace on every ranking failure; change it to log only a concise
message with e.getMessage() (or e.getCause() string) and remove the throwable
from log.warn, increment a dedicated failure counter metric (e.g.,
productRankFailureCounter.increment() via your metrics registry) inside the
catch, and keep returning null as the fallback; update productRankingReader
usage to ensure timeouts/retries/circuit-breaker are considered per guidelines
and add a unit/integration test that simulates repeated exceptions from
productRankingReader.getProductRank(productId) asserting the method returns null
and that the failure metric was incremented accordingly.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java`:
- Around line 33-35: getProductRank currently uses LocalDate.now() which depends
on the system default timezone and can produce inconsistent yyyyMMdd keys across
nodes; change getProductRank to obtain the date from an injected Clock (or a
shared ZoneId-backed Clock) instead of LocalDate.now(), format via
LocalDate.now(clock).format(DATE_FORMAT), and pass that stable date string to
rankingRankFacade.getProductRank; update the constructor/bean to accept a Clock
(or use existing common timezone provider) and add unit tests for getProductRank
covering just-before-midnight and just-after-midnight boundary cases using fixed
Clocks to assert the expected date key is produced.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`:
- Around line 44-46: The two-step sequence calling
rankingRedisPort.carryOver(todayStr, tomorrowStr, CARRY_OVER_WEIGHT) followed by
rankingRedisPort.setTtl(tomorrowStr, TTL) is non-atomic and can leave
tomorrowStr without TTL if setTtl fails; change the implementation to perform
ZUNIONSTORE + EXPIRE atomically inside Redis by adding a new method (e.g.,
rankingRedisPort.carryOverWithTtl(todayStr, tomorrowStr, CARRY_OVER_WEIGHT,
TTL)) that executes a Lua script (EVAL) performing ZUNIONSTORE then EXPIRE in
one transaction, and update callers to use carryOverWithTtl; alternatively, if
you prefer not to add Lua, add robust retry and alerting in setTtl (retry
backoff + monitoring/logging) and ensure carryOver surfaces errors so retries
run.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java`:
- Around line 40-43: The getRankedProductEntries method in RankingQueryService
currently computes offset without validating inputs, allowing negative or zero
values; add input validation at the start of getRankedProductEntries to enforce
page >= 0 and size > 0 and throw a unified CoreException (or a specific
CoreException subclass) with a clear message code when violated so
ApiControllerAdvice will produce the consistent error response; update any
callers/tests and add unit tests verifying that page = -1 and size = 0 produce
the CoreException with the expected message codes.
- Line 22: The code uses LocalDate.now() in RankingQueryService (with
DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE) which makes date-key computation
depend on the system timezone; change the service to accept and use a
java.time.Clock (injectable in constructor or via setter) and compute
LocalDate.now(clock).atZone(ZoneId) or LocalDate.now(clock.withZone(fixedZone))
so all nodes use a consistent ZoneId when producing the yyyyMMdd key; update any
methods referencing LocalDate.now() to use the injected Clock and add a unit
test using Clock.fixed(...) around the midnight boundary to verify correct key
calculation.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java`:
- Around line 31-36: The readProducts method can return DTOs in a different
order than the input productIds; fix RankingProductReaderImpl.readProducts by
building a Map<Long,ProductCacheDto> from
productQueryFacade.findCacheDtosByIds(productIds) (keyed by dto.id()), then
iterate the original productIds list and for each id lookup the dto and create a
RankingProductInfo(dto.id(), dto.name(), dto.price(), dto.brandName()) in that
exact order; handle missing ids deterministically (e.g., skip and log or throw
an IllegalStateException) and add tests to verify readProducts preserves input
order even when findCacheDtosByIds returns reverse/partial results.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java`:
- Around line 32-42: The controller method in RankingQueryController does not
validate the date request parameter before calling
rankingQueryFacade.getRankings; add validation that the optional date (String
date) either matches DateTimeFormatter.BASIC_ISO_DATE (yyyyMMdd) or is
null/empty, and if parsing fails throw a 400 Bad Request (e.g., throw new
ResponseStatusException(HttpStatus.BAD_REQUEST, "invalid date format")) so only
validated date strings reach rankingQueryFacade.getRankings; perform this check
immediately before computing safePage/safeSize or before invoking
rankingQueryFacade.getRankings to ensure invalid inputs are rejected early.
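The date-format check could look like the following standalone validator (name hypothetical); BASIC_ISO_DATE parses strictly, so both malformed strings and impossible dates such as month 13 are rejected:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

class DateParamValidator {
    private static final DateTimeFormatter FORMAT = DateTimeFormatter.BASIC_ISO_DATE;

    // Returns true when the parameter is absent/blank (the service defaults
    // to today) or parses strictly as yyyyMMdd; false means the caller
    // should respond with 400 Bad Request.
    static boolean isValid(String date) {
        if (date == null || date.isBlank()) {
            return true;
        }
        try {
            LocalDate.parse(date, FORMAT);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```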

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java`:
- Around line 548-563: Add two unit tests in the ProductQueryServiceTest
GetProductRankTest nested class to cover the happy path and the "not registered"
boundary: mock productRankingReader.getProductRank(1L) to return a valid Long
(e.g., 42L) and assert productQueryService.getProductRank(1L) returns that same
value, and mock productRankingReader.getProductRank(2L) to return null and
assert productQueryService.getProductRank(2L) is null; this complements the
existing exception-fallback test and uses the same mocking style
(given(...).willReturn(...)) targeting the getProductRank() method on
productRankingReader and asserting results from
ProductQueryService.getProductRank().

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java`:
- Around line 69-70: Remove the static TODAY and ZSET_KEY constants in
RankingQueryControllerE2ETest and replace them with a non-static helper method
(e.g., getZsetKeyFor(LocalDate date) or getTodayZsetKey()) that computes the key
at test runtime using LocalDate.now() (or an injected Clock) so each test
computes the current key when it runs; update all usages to call that helper
instead of the static fields, and add an additional test case that injects a
fixed Clock to simulate times immediately before/after midnight to verify
behavior across the date boundary.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java`:
- Around line 49-50: The current use of LocalDate.now() in RankingScoreService
(where dateStr = LocalDate.now().format(DATE_FORMAT)) is timezone-dependent;
change the service to accept a Clock (and/or a ZoneId) via constructor or
dependency injection and use LocalDate.now(clock.withZone(...)) or
LocalDate.now(zone) so the date key is derived from a fixed business timezone
rather than the JVM default; update any usages of DATE_FORMAT accordingly and
add a unit test that constructs the service with a fixed Clock (e.g., UTC
instant that maps to different local dates in KST) to assert the same yyyyMMdd
key is produced in the boundary case.
- Around line 73-80: The call to markEventHandled(...) from the same class
bypasses Spring proxying so the `@Transactional` on markEventHandled(...) (which
calls eventIdempotencyService.markHandledBatch(...)) is not applied; refactor by
extracting markEventHandled(...) into a separate `@Service/bean` (e.g.,
EventIdempotencyTransactionService) or obtain the proxy
(ApplicationContext.getBean(RankingScoreService.class) or the new bean) and
invoke the proxied method so the transaction boundary is honored, update the
caller to use that bean/proxy instead of this.self-call, and add an integration
test verifying that partial failure inside markHandledBatch(...) triggers
rollback of the idempotency batch as expected.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java`:
- Around line 28-34: LinearScorer.calculateScore currently allows negative
view/like/order values and can produce negative scores; clamp each input to zero
before computing scaled scores (e.g., normalize using max(0, value) then apply
the existing Math.min(..., value / NORM_* ) logic) so
viewScore/likeScore/orderScore never become negative; update calculateScore in
class LinearScorer to use non-negative inputs (referencing WEIGHT_VIEW,
NORM_VIEW, WEIGHT_LIKE, NORM_LIKE, WEIGHT_ORDER, NORM_ORDER) and return the sum
as before.
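A sketch of the clamped scorer (the weight and normalizer values here are illustrative, not the project's actual constants):

```java
class ClampedLinearScorer {
    // Names mirror the review (WEIGHT_*, NORM_*); values are illustrative.
    static final double WEIGHT_VIEW = 0.1, WEIGHT_LIKE = 0.2, WEIGHT_ORDER = 0.6;
    static final double NORM_VIEW = 100.0, NORM_LIKE = 50.0, NORM_ORDER = 10.0;

    static double calculateScore(long view, long like, long order) {
        // Clamp each raw counter to zero first so a negative input can never
        // push a component (and thus the total) below 0.
        double viewScore  = WEIGHT_VIEW  * Math.min(1.0, Math.max(0, view)  / NORM_VIEW);
        double likeScore  = WEIGHT_LIKE  * Math.min(1.0, Math.max(0, like)  / NORM_LIKE);
        double orderScore = WEIGHT_ORDER * Math.min(1.0, Math.max(0, order) / NORM_ORDER);
        return viewScore + likeScore + orderScore;
    }
}
```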

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`:
- Around line 45-55: The consume method currently fails the entire batch on a
single bad record; change it to per-record exception isolation by wrapping each
record deserialization/processing in its own try-catch inside consume (handle
JSON/objectMapper.readValue failures and any processing errors) and skip the bad
record while collecting successful envelopes and eventIds, then commit offsets
for processed records; similarly, update aggregateDelta to perform per-payload
deserialization/processing try-catch around the sections that call payload
parsing (the parts you noted at aggregateDelta, lines ~96/101/106/111) so a
single malformed delta is skipped and does not abort the whole aggregation, and
when skipping a record throw or record/collect a BatchListenerFailedException
only for that record to allow the consumer to continue with others and ensure
tests cover "1 bad + 1 good" batch yields the good delta applied and offsets
committed.
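The per-record isolation pattern, reduced to a Kafka-free sketch (the real consumer would parse envelopes with objectMapper.readValue and route skipped records to a dead-letter path):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

class PerRecordConsumer {
    // Processes each raw record independently: a malformed record is
    // collected into `skipped` instead of aborting the whole batch, so the
    // successful records can still be applied and their offsets committed.
    static <T> List<T> consumeBatch(List<String> records,
                                    Function<String, T> parser,
                                    List<String> skipped) {
        List<T> ok = new ArrayList<>();
        for (String raw : records) {
            try {
                ok.add(parser.apply(raw));
            } catch (RuntimeException e) {
                skipped.add(raw); // in the real consumer: log + dead-letter, continue
            }
        }
        return ok;
    }
}
```

With this shape, a "1 bad + 1 good" batch yields the good record applied and only the bad one skipped, matching the test the comment asks for.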

---

Outside diff comments:
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java`:
- Around line 66-86: Existing test in ProductCacheManagerTest doesn't assert the
new ProductDetailOutDto.rank field; update the current test that calls
productCacheManager.put/get to include an assertion that result.get().rank() is
null, and add a new test method (e.g., putAndGetProductDetailOutDto_withRank)
which constructs a ProductDetailOutDto with rank=7L, calls
productCacheManager.put(key, dto, Duration.ofMinutes(10)) and then
productCacheManager.get(key, ProductDetailOutDto.class), and asserts the
Optional is present and that result.get().rank() equals 7L (alongside the
existing field assertions) to ensure rank survives cache serialization
round-trip.

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java`:
- Line 31: The `@Scheduled`(cron = "0 50 23 * * *") on RankingCarryOverScheduler
relies on the server default timezone; update the Scheduled annotation to
include a zone attribute (e.g., zone = "Asia/Seoul" or your team's canonical
timezone) so the cron triggers consistently across environments. Locate the
`@Scheduled` annotation in the RankingCarryOverScheduler class and add the zone
property to the annotation with the agreed IANA timezone string.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`:
- Around line 36-37: The code in RankingCarryOverService uses LocalDate.now()
which depends on the JVM default timezone and can cause key mismatches across
servers; change to obtain dates from an explicit timezone or injected Clock
(e.g., use LocalDate.now(ZoneId.of("Asia/Seoul")) or better inject a Clock and
call LocalDate.now(clock)) so all instances compute the same today/tomorrow
values; update the two usages (today and tomorrow = today.plusDays(1))
accordingly and add a constructor/bean to provide the Clock or a configurable
ZoneId.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java`:
- Line 31: The getProductRank() method in RankingRankService is annotated with
`@Transactional`(readOnly = true) but only performs Redis reads; remove the
`@Transactional`(readOnly = true) annotation from the getProductRank() method to
avoid unnecessary transaction overhead, leaving the method as a plain
(non-transactional) method in RankingRankService.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java`:
- Around line 15-19: The static factory RankingPageResponse.from currently maps
outDto.content() directly which can NPE if content is null; update from to
defensively handle a null content by treating it as an empty list before mapping
with RankingItemResponse::from (e.g., replace direct outDto.content().stream()
with a null-safe expression such as
Optional.ofNullable(outDto.content()).orElse(Collections.emptyList()) or
Collections.emptyList() then .stream()), then proceed to build and return the
new RankingPageResponse(items, outDto.page(), outDto.size(),
outDto.totalElements()).
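The null-safe mapping can be sketched as below (the item-mapping lambda stands in for RankingItemResponse::from):

```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;

class NullSafeMapper {
    // Treats a null content list as empty before mapping, so the factory
    // can never NPE on outDto.content().
    static List<String> mapContent(List<Long> content) {
        return Optional.ofNullable(content).orElse(Collections.emptyList())
            .stream()
            .map(id -> "item-" + id) // stand-in for RankingItemResponse::from
            .toList();
    }
}
```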

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java`:
- Around line 57-82: The test getProductSuccess sets up a stub for
productQueryService.getProductRank(1L) but never verifies it; update the
ProductQueryFacadeTest.getProductSuccess to include a verification that
productQueryService.getProductRank(1L) was called (e.g., add
verify(productQueryService).getProductRank(1L)) alongside the existing verify
calls so the test consistently asserts the rank lookup behavior.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java`:
- Around line 39-50: The test is flaky because it uses LocalDate.now() directly;
change RankingCarryOverService to accept a java.time.Clock (e.g., via
constructor) and use LocalDate.now(clock) inside carryOver(), then update the
test to construct a fixed Clock (Clock.fixed(...)) so todayStr and tomorrowStr
are deterministic and verify calls against rankingCarryOverService.carryOver();
additionally add separate tests that simulate rankingRedisPort.carryOver
throwing (to assert exception propagation) and simulate carryOver succeeding but
rankingRedisPort.setTtl throwing (to assert handling) using the same fixed
Clock.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java`:
- Around line 50-97: Add a unit test that covers the blank-string date branch so
isBlank() behavior doesn't regress: create a test method (e.g.,
blankDateUsesTodayKey) that calls rankingQueryService.getRankedProductEntries(" 
", 0, 20) and verify that rankingRedisPort.getTopProducts(TODAY, 0, 20) and
rankingRedisPort.getZSetSize(TODAY) are invoked; this mirrors the existing null
test path and ensures the isBlank() branch is exercised alongside
getRankedProductEntries, TODAY, and rankingRedisPort interactions.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java`:
- Around line 42-93: Add a unit test blankDateUsesTodayKey() that verifies the
blank-date branch of getProductRank: arrange by stubbing
rankingRedisPort.getReversedRank(TODAY, <productId>) to return a sample value,
call rankingRankService.getProductRank("   ", <productId>), and assert the
returned rank is converted correctly (returned value + 1) and that
rankingRedisPort.getReversedRank was invoked with the TODAY key; target the
getProductRank method and the rankingRedisPort.getReversedRank(TODAY, ...)
interaction to cover the isBlank() path.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java`:
- Around line 145-166: Add negative tests in RankingQueryControllerE2ETest to
cover invalid date inputs: create new test methods (e.g.,
rankingsWithInvalidDateParam_returnsBadRequest,
rankingsWithEmptyOrNullDate_returnsBadRequest) that call
mockMvc.perform(get("/api/v1/rankings").param("date", "...")) with malformed
values ("20261301", "2026-04-09", "", and omit param/null) and assert
status().isBadRequest() and the standard error response structure (use jsonPath
assertions against the controller's error fields). Place these tests alongside
rankingsWithDateParam so they exercise the same controller path and validate
both format errors and empty/missing date cases. Ensure test names and
assertions reference the existing mockMvc usage and JSON error shape used
elsewhere in this test class.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java`:
- Around line 64-69: The incrementScore method in RankingRedisAdapter currently
calls redisTemplate.opsForZSet().incrementScore(ZSET_PREFIX + dateStr,
productId.toString(), scoreDelta) even when scoreDelta == 0, causing unnecessary
Redis calls; add a guard at the start of incrementScore to return immediately if
scoreDelta == 0 (mirroring hincrby behavior) so no ZINCRBY is invoked for zero
deltas and unnecessary network/writes are avoided.
- Around line 72-79: ensureTtl currently calls redisTemplate.expire sequentially
on ZSET_PREFIX + dateStr and the three COUNTER_* keys which can leave TTLs
partially applied if a Redis error occurs; change ensureTtl to perform the four
EXPIRE commands atomically using a Redis pipeline or a single Lua script: use
redisTemplate.executePipelined/execute with a RedisCallback to enqueue expire
for ZSET_PREFIX + dateStr, COUNTER_VIEW_PREFIX + dateStr, COUNTER_LIKE_PREFIX +
dateStr, COUNTER_ORDER_PREFIX + dateStr (or an EVAL that calls
redis.call('expire', key, ttl) for each key), and ensure you capture/handle
errors from the pipeline/script so failures are detected and logged/propagated
rather than leaving mixed TTL state.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java`:
- Around line 18-20: The constants WEIGHT_VIEW, WEIGHT_LIKE, and WEIGHT_ORDER in
LinearScorer currently sum to 0.9 (0.1 + 0.2 + 0.6) which contradicts
SaturationScorer's total of 1.0; either adjust the weights in LinearScorer so
they sum to 1.0 (e.g., change WEIGHT_ORDER to 0.7 or normalize the three values)
or add a clear comment in LinearScorer explaining the intentional 0.9 total and
the rationale for deviating from SaturationScorer, referencing the constants
WEIGHT_VIEW, WEIGHT_LIKE, WEIGHT_ORDER and the SaturationScorer behavior so
reviewers understand the choice.

In
`@apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java`:
- Around line 93-153: Add a unit test in RankingScoreServiceTest to cover the
negative scoreDelta path: when
rankingRedisPort.incrementCounterAndGetCounts(...) returns counts where old >
new (so scoreDelta < 0) the RankingScoreService.applyDeltas(...) should call
rankingRedisPort.incrementScore(...) with a negative double; create a test
(mirror existing singleProductDelta/zeroScoreDelta) that stubs
incrementCounterAndGetCounts to return a CounterResult causing newScore <
oldScore (e.g., like 5→4, order 2→1), invoke applyDeltas with appropriate deltas
and eventIds, then verify incrementScore(anyString(), eq(productId),
doubleThat(d -> d < 0)) and ensure eventIdempotencyService.markHandledBatch(...)
and ensureTtl(...) behavior as applicable.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 49af33ca-b5e9-4555-a796-976afbbd3ba2

📥 Commits

Reviewing files that changed from the base of the PR and between 1efb985 and 929c086.

⛔ Files ignored due to path filters (19)
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductDetailOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/cache/dto/ProductCacheDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/client/ranking/ProductRankingReader.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankedResult.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingItemOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingPageOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/ProductScoreEntry.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductInfo.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductReader.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/CounterResult.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingScorer.java is excluded by !**/out/** and included by **
  • round9-docs/00-requirements.md is excluded by !**/*.md and included by **
  • round9-docs/01-information.md is excluded by !**/*.md and included by **
  • round9-docs/02-study-data.md is excluded by !**/*.md and included by **
  • round9-docs/03-study-deepdive.md is excluded by !**/*.md and included by **
  • round9-docs/04-decisions.md is excluded by !**/*.md and included by **
  • round9-docs/05-e2e-verification.md is excluded by !**/*.md and included by **
📒 Files selected for processing (37)
  • .http/ranking.http
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/interfaces/web/response/ProductDetailResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingQueryFacade.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRankFacade.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingItemResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/facade/RankingQueryFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/scheduler/RankingCarryOverSchedulerTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java
  • apps/commerce-api/src/test/resources/application-test.yml
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/SaturationScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapterTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/scorer/ScorerComparisonTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumerTest.java

Comment thread .http/ranking.http
Comment on lines +19 to +21
GET http://localhost:8080/api/v1/products/1
X-Loopers-LoginId: testuser
X-Loopers-LoginPw: Test1234!@

⚠️ Potential issue | 🟠 Major

Hardcoding credential-like values in request samples widens the security-incident surface

From an operations standpoint, hardcoded credentials in .http files easily lead to real-account reuse or leak incidents. Replace X-Loopers-LoginId and X-Loopers-LoginPw with environment-variable placeholders and leave the defaults empty. As a further safeguard, add a secret-scanning rule (e.g., gitleaks/trufflehog) to CI to block commits containing credential patterns.

Suggested fix
 ### Product detail lookup (with ranking rank, logged in)
 GET http://localhost:8080/api/v1/products/1
-X-Loopers-LoginId: testuser
-X-Loopers-LoginPw: Test1234!@
+X-Loopers-LoginId: {{LOOPERS_LOGIN_ID}}
+X-Loopers-LoginPw: {{LOOPERS_LOGIN_PW}}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
GET http://localhost:8080/api/v1/products/1
X-Loopers-LoginId: testuser
X-Loopers-LoginPw: Test1234!@
GET http://localhost:8080/api/v1/products/1
X-Loopers-LoginId: {{LOOPERS_LOGIN_ID}}
X-Loopers-LoginPw: {{LOOPERS_LOGIN_PW}}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.http/ranking.http around lines 19 - 21, Replace the hardcoded credential
headers in the .http request (X-Loopers-LoginId and X-Loopers-LoginPw) with
environment-variable placeholders (e.g., {{LOOPERS_LOGIN_ID}} and
{{LOOPERS_LOGIN_PW}}) and leave their default values empty so no real
credentials are stored in the file; update the GET request header lines to
reference those placeholders and ensure any README or contributing docs show how
to set these env vars locally/CI, and add a CI secret-scanning step (e.g.,
gitleaks/trufflehog) to block credential-pattern commits.

Comment on lines +187 to +193
public Long getProductRank(Long productId) {
try {
return productRankingReader.getProductRank(productId);
} catch (Exception e) {
log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null 반환. productId={}", productId, e);
return null;
}

⚠️ Potential issue | 🟠 Major

Per-request WARN logging with stack traces during a ranking outage can cause log flooding and a secondary incident

The fallback itself is appropriate, but during a production outage WARN logs with stack traces accumulate in proportion to traffic, sharply increasing disk I/O and observability costs. It is safer to reduce the warning to a summary (the cause string) and track the failure rate with a dedicated counter metric. As an additional test, verify that when ranking lookups repeatedly throw, the fallback holds (the response stays null) and the metric-increment logic is invoked.

Suggested fix
 	public Long getProductRank(Long productId) {
 		try {
 			return productRankingReader.getProductRank(productId);
 		} catch (Exception e) {
-			log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null 반환. productId={}", productId, e);
+			log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null 반환. productId={}, cause={}",
+				productId, e.toString());
+			// meterRegistry.counter("catalog.product.rank.fallback").increment();
 			return null;
 		}
 	}

As per coding guidelines **/*Service*.java: for external calls, check whether timeouts/retries/circuit breakers have been considered, and propose a fallback flow for failures.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
public Long getProductRank(Long productId) {
try {
return productRankingReader.getProductRank(productId);
} catch (Exception e) {
log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null 반환. productId={}", productId, e);
return null;
}
public Long getProductRank(Long productId) {
try {
return productRankingReader.getProductRank(productId);
} catch (Exception e) {
log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null 반환. productId={}, cause={}",
productId, e.toString());
// meterRegistry.counter("catalog.product.rank.fallback").increment();
return null;
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`
around lines 187 - 193, The getProductRank method currently logs the full
exception stacktrace on every ranking failure; change it to log only a concise
message with e.getMessage() (or e.getCause() string) and remove the throwable
from log.warn, increment a dedicated failure counter metric (e.g.,
productRankFailureCounter.increment() via your metrics registry) inside the
catch, and keep returning null as the fallback; update productRankingReader
usage to ensure timeouts/retries/circuit-breaker are considered per guidelines
and add a unit/integration test that simulates repeated exceptions from
productRankingReader.getProductRank(productId) asserting the method returns null
and that the failure metric was incremented accordingly.

Comment on lines +33 to +35
public Long getProductRank(Long productId) {
String todayDateStr = LocalDate.now().format(DATE_FORMAT);
return rankingRankFacade.getProductRank(todayDateStr, productId);

⚠️ Potential issue | 🟠 Major

Relying on the system default timezone can cause date-key mismatches at the midnight boundary

If production nodes are configured with different timezones, they will query different yyyyMMdd keys at the same instant, intermittently returning null rankings. Pin the date basis by injecting a Clock (or a shared ZoneId setting). As an additional test, use a fixed Clock to verify the expected date key is produced just before and just after midnight.

Suggested fix
 import org.springframework.stereotype.Component;

+import java.time.Clock;
 import java.time.LocalDate;
 import java.time.format.DateTimeFormatter;
@@
 	private static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE;
+	private final Clock clock;
@@
 	`@Override`
 	public Long getProductRank(Long productId) {
-		String todayDateStr = LocalDate.now().format(DATE_FORMAT);
+		String todayDateStr = LocalDate.now(clock).format(DATE_FORMAT);
 		return rankingRankFacade.getProductRank(todayDateStr, productId);
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java`
around lines 33 - 35, getProductRank currently uses LocalDate.now() which
depends on the system default timezone and can produce inconsistent yyyyMMdd
keys across nodes; change getProductRank to obtain the date from an injected
Clock (or a shared ZoneId-backed Clock) instead of LocalDate.now(), format via
LocalDate.now(clock).format(DATE_FORMAT), and pass that stable date string to
rankingRankFacade.getProductRank; update the constructor/bean to accept a Clock
(or use existing common timezone provider) and add unit tests for getProductRank
covering just-before-midnight and just-after-midnight boundary cases using fixed
Clocks to assert the expected date key is produced.

Comment on lines +44 to +46
// ZUNIONSTORE + EXPIRE
rankingRedisPort.carryOver(todayStr, tomorrowStr, CARRY_OVER_WEIGHT);
rankingRedisPort.setTtl(tomorrowStr, TTL);

⚠️ Potential issue | 🟡 Minor

carryOver and setTtl are not executed atomically, so the TTL can be left unset.

If setTtl() fails after carryOver() succeeds, the tomorrowStr key remains without a TTL, risking a memory leak. Consider executing ZUNIONSTORE and EXPIRE atomically in a Lua script, or adding retry/alerting logic for setTtl() failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`
around lines 44 - 46, The two-step sequence calling
rankingRedisPort.carryOver(todayStr, tomorrowStr, CARRY_OVER_WEIGHT) followed by
rankingRedisPort.setTtl(tomorrowStr, TTL) is non-atomic and can leave
tomorrowStr without TTL if setTtl fails; change the implementation to perform
ZUNIONSTORE + EXPIRE atomically inside Redis by adding a new method (e.g.,
rankingRedisPort.carryOverWithTtl(todayStr, tomorrowStr, CARRY_OVER_WEIGHT,
TTL)) that executes a Lua script (EVAL) performing ZUNIONSTORE then EXPIRE in
one transaction, and update callers to use carryOverWithTtl; alternatively, if
you prefer not to add Lua, add robust retry and alerting in setTtl (retry
backoff + monitoring/logging) and ensure carryOver surfaces errors so retries
run.

@RequiredArgsConstructor
public class RankingQueryService {

private static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java | head -80

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2647


🏁 Script executed:

# 1. Find controllers/facades that call this service
rg "getRankedProductEntries" --type java -A 5

# 2. Check Clock usage patterns across the project
rg "Clock" --type java

# 3. Check pagination validation patterns
rg "page.*size|offset.*validation" --type java -i

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 41848


🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1862


Day-key computation tied to the system default timezone risks ranking data mismatches across environments

Using LocalDate.now() directly at line 66 means nodes with different system timezones query different yyyyMMdd keys at the same instant, producing ranking gaps or inconsistencies. Inject a Clock and compute the date through a fixed ZoneId so every node works from the same basis. Add a Clock.fixed() test verifying the key computation is correct at the midnight boundary (23:59:59 → 00:00:00).

Suggested fix
+import java.time.Clock;
 import java.time.LocalDate;
 import java.time.format.DateTimeFormatter;
@@
 public class RankingQueryService {
@@
+	private final Clock clock;
@@
 	private String resolveDate(String dateStr) {
 		if (dateStr == null || dateStr.isBlank()) {
-			return LocalDate.now().format(DATE_FORMAT);
+			return LocalDate.now(clock).format(DATE_FORMAT);
 		}
 		return dateStr;
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java`
at line 22, The code uses LocalDate.now() in RankingQueryService (with
DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE) which makes date-key computation
depend on the system timezone; change the service to accept and use a
java.time.Clock (injectable in constructor or via setter) and compute
LocalDate.now(clock).atZone(ZoneId) or LocalDate.now(clock.withZone(fixedZone))
so all nodes use a consistent ZoneId when producing the yyyyMMdd key; update any
methods referencing LocalDate.now() to use the injected Clock and add a unit
test using Clock.fixed(...) around the midnight boundary to verify correct key
calculation.
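The Clock-injected date resolution suggested above can be sketched end-to-end as follows; `DateKeyDemo`, the Asia/Seoul zone, and the fixed instants are illustrative assumptions, not the project's actual service or timezone:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Minimal sketch of resolving the yyyyMMdd key through an injected Clock,
// plus a fixed-Clock check across the midnight boundary.
class DateKeyDemo {
    private static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.BASIC_ISO_DATE;

    static String resolveDate(String dateStr, Clock clock) {
        if (dateStr == null || dateStr.isBlank()) {
            // Clock carries the ZoneId, so every node computes the same date
            return LocalDate.now(clock).format(DATE_FORMAT);
        }
        return dateStr;
    }

    public static void main(String[] args) {
        ZoneId seoul = ZoneId.of("Asia/Seoul");
        // 2026-04-09T14:59:59Z is 23:59:59 KST on 2026-04-09
        Clock beforeMidnight = Clock.fixed(Instant.parse("2026-04-09T14:59:59Z"), seoul);
        // one second later crosses the KST midnight boundary
        Clock afterMidnight = Clock.fixed(Instant.parse("2026-04-09T15:00:00Z"), seoul);

        if (!resolveDate(null, beforeMidnight).equals("20260409")) throw new AssertionError();
        if (!resolveDate(null, afterMidnight).equals("20260410")) throw new AssertionError();
        System.out.println("ok");
    }
}
```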

Comment on lines +69 to +70
private static final String TODAY = LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
private static final String ZSET_KEY = "ranking:all:" + TODAY;

⚠️ Potential issue | 🟠 Major

The TODAY/ZSET_KEY static constants cause E2E flakiness at the midnight boundary

Lines 69–70 pin the date at class-load time. From a test (rather than operations) perspective, if a CI run crosses midnight the API queries the new "today" key while the test seeds the previous day's key, causing intermittent failures. The fix is to drop the static constants and replace them with a helper method that computes the key at each test's execution time. Also add a fixed-Clock case simulating times just before and after midnight to confirm stable behavior across the date boundary.

Suggested fix
-	private static final String TODAY = LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
-	private static final String ZSET_KEY = "ranking:all:" + TODAY;
+	private String todayKey() {
+		return "ranking:all:" + LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
+	}
-			redisTemplate.opsForZSet().add(ZSET_KEY, productId1.toString(), 0.8);
+			redisTemplate.opsForZSet().add(todayKey(), productId1.toString(), 0.8);

As per coding guidelines **/*Test*.java: integration tests are checked for flakiness potential and for test data setup/teardown strategy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java`
around lines 69 - 70, Remove the static TODAY and ZSET_KEY constants in
RankingQueryControllerE2ETest and replace them with a non-static helper method
(e.g., getZsetKeyFor(LocalDate date) or getTodayZsetKey()) that computes the key
at test runtime using LocalDate.now() (or an injected Clock) so each test
computes the current key when it runs; update all usages to call that helper
instead of the static fields, and add an additional test case that injects a
fixed Clock to simulate times immediately before/after midnight to verify
behavior across the date boundary.

Comment on lines +28 to +34
@Override
public double calculateScore(long view, long like, long order) {
double viewScore = WEIGHT_VIEW * Math.min(1.0, view / NORM_VIEW);
double likeScore = WEIGHT_LIKE * Math.min(1.0, like / NORM_LIKE);
double orderScore = WEIGHT_ORDER * Math.min(1.0, order / NORM_ORDER);
return viewScore + likeScore + orderScore;
}

⚠️ Potential issue | 🟡 Minor

Negative-input handling is missing.

SaturationScorer.sat() returns 0.0 when x <= 0, but LinearScorer produces a negative score for negative inputs. If bad data flows in during operation, it can distort the ranking order.

🛡️ Suggested negative-value clamp
 `@Override`
 public double calculateScore(long view, long like, long order) {
-	double viewScore = WEIGHT_VIEW * Math.min(1.0, view / NORM_VIEW);
-	double likeScore = WEIGHT_LIKE * Math.min(1.0, like / NORM_LIKE);
-	double orderScore = WEIGHT_ORDER * Math.min(1.0, order / NORM_ORDER);
+	double viewScore = WEIGHT_VIEW * Math.min(1.0, Math.max(0, view) / NORM_VIEW);
+	double likeScore = WEIGHT_LIKE * Math.min(1.0, Math.max(0, like) / NORM_LIKE);
+	double orderScore = WEIGHT_ORDER * Math.min(1.0, Math.max(0, order) / NORM_ORDER);
 	return viewScore + likeScore + orderScore;
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@Override
public double calculateScore(long view, long like, long order) {
double viewScore = WEIGHT_VIEW * Math.min(1.0, view / NORM_VIEW);
double likeScore = WEIGHT_LIKE * Math.min(1.0, like / NORM_LIKE);
double orderScore = WEIGHT_ORDER * Math.min(1.0, order / NORM_ORDER);
return viewScore + likeScore + orderScore;
}
`@Override`
public double calculateScore(long view, long like, long order) {
double viewScore = WEIGHT_VIEW * Math.min(1.0, Math.max(0, view) / NORM_VIEW);
double likeScore = WEIGHT_LIKE * Math.min(1.0, Math.max(0, like) / NORM_LIKE);
double orderScore = WEIGHT_ORDER * Math.min(1.0, Math.max(0, order) / NORM_ORDER);
return viewScore + likeScore + orderScore;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java`
around lines 28 - 34, LinearScorer.calculateScore currently allows negative
view/like/order values and can produce negative scores; clamp each input to zero
before computing scaled scores (e.g., normalize using max(0, value) then apply
the existing Math.min(..., value / NORM_* ) logic) so
viewScore/likeScore/orderScore never become negative; update calculateScore in
class LinearScorer to use non-negative inputs (referencing WEIGHT_VIEW,
NORM_VIEW, WEIGHT_LIKE, NORM_LIKE, WEIGHT_ORDER, NORM_ORDER) and return the sum
as before.
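A minimal runnable sketch of the clamped scorer follows; the weight and norm constants are illustrative assumptions, not the project's actual values:

```java
// Demonstrates that clamping with Math.max(0, value) removes negative scores
// while preserving the existing saturation cap per signal.
class LinearScorerDemo {
    static final double WEIGHT_VIEW = 0.2, WEIGHT_LIKE = 0.3, WEIGHT_ORDER = 0.5;
    static final double NORM_VIEW = 100.0, NORM_LIKE = 10.0, NORM_ORDER = 3.0;

    static double calculateScore(long view, long like, long order) {
        double viewScore = WEIGHT_VIEW * Math.min(1.0, Math.max(0, view) / NORM_VIEW);
        double likeScore = WEIGHT_LIKE * Math.min(1.0, Math.max(0, like) / NORM_LIKE);
        double orderScore = WEIGHT_ORDER * Math.min(1.0, Math.max(0, order) / NORM_ORDER);
        return viewScore + likeScore + orderScore;
    }

    public static void main(String[] args) {
        // negative inputs clamp to 0 instead of producing negative scores
        if (calculateScore(-50, -5, -1) != 0.0) throw new AssertionError();
        // inputs above each norm saturate at the weight cap (sum of weights = 1.0)
        if (Math.abs(calculateScore(1000, 100, 30) - 1.0) > 1e-9) throw new AssertionError();
        System.out.println("ok");
    }
}
```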

Comment on lines +45 to +55
public void consume(List<ConsumerRecord<String, byte[]>> records, Acknowledgment ack) {
try {
// 1. eventId 수집 + 역직렬화
Set<String> allEventIds = new LinkedHashSet<>();
List<KafkaEventEnvelope> envelopes = new ArrayList<>();

for (ConsumerRecord<String, byte[]> record : records) {
KafkaEventEnvelope envelope = objectMapper.readValue(record.value(), KafkaEventEnvelope.class);
allEventIds.add(envelope.eventId());
envelopes.add(envelope);
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4839


🏁 Script executed:

# Spring Kafka batch listener 사용 패턴 검색
rg -t java "KafkaListener" -A 10 -B 2 apps/

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 39756


🏁 Script executed:

# 에러 처리 또는 DLT 관련 설정 검색
rg -t java "DeadLetterPublish|ErrorHandler|DefaultErrorHandler" apps/

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1844


🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/queue/waitingqueue/infrastructure/kafka/WaitingQueueConsumer.java | head -100

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3054


Propagating exceptions for the whole batch means a single bad message can permanently stall partition processing

Operationally, the current structure retries the entire batch when a single record fails deserialization or payload processing, so consumer lag accumulates and the same poison message can cause an infinite failure loop. Isolate exceptions per record (per-record try-catch + BatchListenerFailedException) so bad messages are skipped while valid messages keep flowing. The payload deserialization inside aggregateDelta() (lines 96, 101, 106, 111) needs the same isolation. Add a test verifying that for a "1 bad + 1 good" batch, the good delta is applied, the offset is committed, and the next batch is processed.

Suggested fix
	public void consume(List<ConsumerRecord<String, byte[]>> records, Acknowledgment ack) {
-		try {
-			// 1. eventId 수집 + 역직렬화
-			Set<String> allEventIds = new LinkedHashSet<>();
-			List<KafkaEventEnvelope> envelopes = new ArrayList<>();
-
-			for (ConsumerRecord<String, byte[]> record : records) {
-				KafkaEventEnvelope envelope = objectMapper.readValue(record.value(), KafkaEventEnvelope.class);
-				allEventIds.add(envelope.eventId());
-				envelopes.add(envelope);
-			}
+		// 1. eventId 수집 + 역직렬화 (레코드 단위 격리)
+		Set<String> allEventIds = new LinkedHashSet<>();
+		List<KafkaEventEnvelope> envelopes = new ArrayList<>();
+
+		for (int i = 0; i < records.size(); i++) {
+			ConsumerRecord<String, byte[]> record = records.get(i);
+			try {
+				KafkaEventEnvelope envelope = objectMapper.readValue(record.value(), KafkaEventEnvelope.class);
+				allEventIds.add(envelope.eventId());
+				envelopes.add(envelope);
+			} catch (Exception ex) {
+				log.warn("[RankingCollector] 불량 레코드 스킵(envelope) topic={}, partition={}, offset={}", 
+					record.topic(), record.partition(), record.offset(), ex);
+				continue;
+			}
+		}
+
+		try {
 			// 이미 처리된 이벤트 필터링
 			Set<String> alreadyHandled = rankingScoreService.findAlreadyHandledIds(allEventIds);
 
 			// 2. 배치 내 delta 합산
 			Map<Long, long[]> deltas = new HashMap<>(); // productId → [viewDelta, likeDelta, orderDelta]
 			Set<String> newEventIds = new LinkedHashSet<>();
 
 			for (KafkaEventEnvelope envelope : envelopes) {
 				if (alreadyHandled.contains(envelope.eventId())) continue;
 
 				// 배치 내 중복 eventId 방어 (Set.add가 false면 이미 처리된 이벤트)
 				if (!newEventIds.add(envelope.eventId())) continue;
 
-				aggregateDelta(deltas, envelope.eventType(), envelope.data());
+				try {
+					aggregateDelta(deltas, envelope.eventType(), envelope.data());
+				} catch (Exception ex) {
+					log.warn("[RankingCollector] 불량 레코드 스킵(payload) eventId={}, eventType={}",
+						envelope.eventId(), envelope.eventType(), ex);
+					continue;
+				}
 			}
 
 			// 3. delta 반영 + event_handled 기록
 			if (!deltas.isEmpty()) {
 				rankingScoreService.applyDeltas(deltas, newEventIds);
 			}
 		} catch (Exception e) {
 			log.error("[RankingCollector] 배치 처리 실패", e);
 			throw new RuntimeException(e);
 		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
public void consume(List<ConsumerRecord<String, byte[]>> records, Acknowledgment ack) {
try {
// 1. eventId 수집 + 역직렬화
Set<String> allEventIds = new LinkedHashSet<>();
List<KafkaEventEnvelope> envelopes = new ArrayList<>();
for (ConsumerRecord<String, byte[]> record : records) {
KafkaEventEnvelope envelope = objectMapper.readValue(record.value(), KafkaEventEnvelope.class);
allEventIds.add(envelope.eventId());
envelopes.add(envelope);
}
public void consume(List<ConsumerRecord<String, byte[]>> records, Acknowledgment ack) {
// 1. eventId 수집 + 역직렬화 (레코드 단위 격리)
Set<String> allEventIds = new LinkedHashSet<>();
List<KafkaEventEnvelope> envelopes = new ArrayList<>();
for (int i = 0; i < records.size(); i++) {
ConsumerRecord<String, byte[]> record = records.get(i);
try {
KafkaEventEnvelope envelope = objectMapper.readValue(record.value(), KafkaEventEnvelope.class);
allEventIds.add(envelope.eventId());
envelopes.add(envelope);
} catch (Exception ex) {
log.warn("[RankingCollector] 불량 레코드 스킵(envelope) topic={}, partition={}, offset={}",
record.topic(), record.partition(), record.offset(), ex);
continue;
}
}
try {
// 이미 처리된 이벤트 필터링
Set<String> alreadyHandled = rankingScoreService.findAlreadyHandledIds(allEventIds);
// 2. 배치 내 delta 합산
Map<Long, long[]> deltas = new HashMap<>(); // productId → [viewDelta, likeDelta, orderDelta]
Set<String> newEventIds = new LinkedHashSet<>();
for (KafkaEventEnvelope envelope : envelopes) {
if (alreadyHandled.contains(envelope.eventId())) continue;
// 배치 내 중복 eventId 방어 (Set.add가 false면 이미 처리된 이벤트)
if (!newEventIds.add(envelope.eventId())) continue;
try {
aggregateDelta(deltas, envelope.eventType(), envelope.data());
} catch (Exception ex) {
log.warn("[RankingCollector] 불량 레코드 스킵(payload) eventId={}, eventType={}",
envelope.eventId(), envelope.eventType(), ex);
continue;
}
}
// 3. delta 반영 + event_handled 기록
if (!deltas.isEmpty()) {
rankingScoreService.applyDeltas(deltas, newEventIds);
}
} catch (Exception e) {
log.error("[RankingCollector] 배치 처리 실패", e);
throw new RuntimeException(e);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`
around lines 45 - 55, The consume method currently fails the entire batch on a
single bad record; change it to per-record exception isolation by wrapping each
record deserialization/processing in its own try-catch inside consume (handle
JSON/objectMapper.readValue failures and any processing errors) and skip the bad
record while collecting successful envelopes and eventIds, then commit offsets
for processed records; similarly, update aggregateDelta to perform per-payload
deserialization/processing try-catch around the sections that call payload
parsing (the parts you noted at aggregateDelta, lines ~96/101/106/111) so a
single malformed delta is skipped and does not abort the whole aggregation, and
when skipping a record throw or record/collect a BatchListenerFailedException
only for that record to allow the consumer to continue with others and ensure
tests cover "1 bad + 1 good" batch yields the good delta applied and offsets
committed.
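The per-record isolation pattern recommended above can be illustrated without Kafka; here `Integer.parseInt` stands in for envelope deserialization, and the class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// A parse failure on one element is logged and skipped;
// the remaining elements still flow through, so the "batch" never stalls.
class PerRecordIsolationDemo {
    static List<Integer> parseAll(List<String> records) {
        List<Integer> parsed = new ArrayList<>();
        for (String record : records) {
            try {
                parsed.add(Integer.parseInt(record)); // stand-in for objectMapper.readValue
            } catch (NumberFormatException ex) {
                // poison record: skip instead of failing the whole batch
                System.out.println("skipping bad record: " + record);
            }
        }
        return parsed;
    }

    public static void main(String[] args) {
        // "1 bad + 2 good": only the valid records are aggregated
        List<Integer> result = parseAll(List.of("1", "not-json", "3"));
        if (!result.equals(List.of(1, 3))) throw new AssertionError();
        System.out.println("ok");
    }
}
```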

Hwan0518 and others added 21 commits April 10, 2026 16:24
- CounterResult: counter INCR result (previous/current values)
- RankingRedisPort: ranking Redis write port
- RankingScorer: score-calculation strategy port

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- LinearScorer: simple linear weights (view/like/order)
- SaturationScorer: based on a log-saturation function (prevents over-concentration on popular products)
- ConversionScorer: combines conversion-rate weights
- ScorerComparisonTest: per-strategy score comparison test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Daily counter INCR and score writes via ZSet/HASH
- RankingRedisAdapterTest: TestContainers-based integration test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Increment counters + compute delta score via Scorer + apply to ZSet with ZINCRBY
- Single entry point for the score-write use case

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Subscribe catalog Kafka topics (view/like/order) → delegate to RankingScoreService
- Idempotent handling + domain event → score-write conversion

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- ProductScoreEntry: ZSet result (productId, score)
- RankingRedisPort: query port (top N, rank, size)
- RankingItemOutDto/RankingPageOutDto/RankedResult: output DTOs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- RankingProductReader port + RankingProductInfo DTO
- RankingProductReaderImpl: delegates to catalog ProductQueryFacade.findCacheDtosByIds
- ranking → catalog one-way dependency, no cycles

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Query/carry-over operations such as ZRANGE/ZREVRANGE/ZREVRANK/ZCARD/ZUNIONSTORE
- RankingRedisPort implementation (query-only)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Single-product rank lookup (1-based, null when unranked)
- Entry point called by the catalog ProductRankingReader ACL
- Only single-rank lookup is split out → prevents circular references with catalog

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- RankingQueryService: ZREVRANGE/ZCARD + Cross-BC product info composition
- RankingQueryFacade: pagination orchestration
- RankingQueryController + Response: GET /api/v1/rankings
- Includes unit/Facade/E2E tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- RankingCarryOverService: weighted merge of yesterday's ZSet into today's ZSet (decay)
- RankingCarryOverScheduler: daily midnight trigger (scheduler enable/disable separated)
- Includes unit/scheduler tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- ProductRankingReader port + ProductRankingReaderImpl: delegates to ranking BC RankingRankFacade
- ProductDetailOutDto/Response/ProductCacheDto: rank field added
- ProductQueryService.getProductRank: null fallback on ranking failure (auxiliary info isolated)
- ProductQueryFacade.getProduct: enrich cached response with rank
- ProductQueryFacade/Service.findCacheDtosByIds: expose bulk lookup for the ranking ACL
- Cache tests updated for the ProductDetailOutDto signature change

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add ranking.carryover.scheduler.enabled=false to application-test.yml
- Isolate integration tests from the midnight trigger

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add request sample for GET /api/v1/rankings

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

♻️ Duplicate comments (3)
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java (1)

548-565: ⚠️ Potential issue | 🟡 Minor

The getProductRank() tests verify only the exception path.

As it stands, regressions in the normal delegation path and the unranked (null) boundary go unnoticed. It is safer to add a case where productRankingReader.getProductRank() returns a value and it is passed through unchanged, and a case where it returns null and the null is preserved. One success case and one unranked case are sufficient.

As per coding guidelines **/*Test*.java: unit tests are checked for boundary values, failure cases, and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java`
around lines 548 - 565, Add two unit tests in ProductQueryServiceTest inside the
GetProductRankTest nested class to cover the missing success and
null-propagation cases: (1) stub productRankingReader.getProductRank(1L) to
return a non-null Long (e.g., 42L) and assert
productQueryService.getProductRank(1L) returns the same value, and (2) stub
productRankingReader.getProductRank(1L) to return null and assert
productQueryService.getProductRank(1L) is null; use the existing given(...)
style and assertions consistent with the class to mirror the exception test
structure.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java (1)

187-193: ⚠️ Potential issue | 🟠 Major

Per-request WARN logging with stack traces during a ranking outage can cause a secondary incident.

The fallback itself is correct, but when ranking lookups fail, WARN+stacktrace logs accumulate in proportion to product-detail traffic, spiking log I/O and ingestion cost. It is safer not to pass e as a logger argument, to log only productId and e.toString(), and to aggregate failure counts in a separate metric. A good additional test verifies that repeated exceptions from productRankingReader.getProductRank() still yield a null return and an incremented failure counter.

As per coding guidelines **/*Service*.java: external calls are checked for timeout/retry/circuit-breaker considerations, and alternative flows are suggested for failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`
around lines 187 - 193, The getProductRank method currently logs the full
exception for each ranking failure which can flood logs; change
productRankingReader.getProductRank(productId) error handling to log only
productId and e.toString() (not pass the exception object) via
log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null. productId={}, error={}",
productId, e.toString()), increment a dedicated failure metric/counter (e.g.,
productRankingFailureCounter.increment()) inside the catch, and ensure the
method still returns null; also review productRankingReader external call for
timeout/retry/circuit-breaker considerations and add a unit/integration test
that simulates repeated exceptions from productRankingReader.getProductRank() to
assert null is returned and the failure counter increments.
apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java (1)

52-98: ⚠️ Potential issue | 🔴 Critical

A single bad record can stall the entire batch for a long time.

If one exception from envelope deserialization at line 58 or payload parsing at line 76 reaches the outer catch, the ack at line 102 never runs and the whole batch keeps retrying on the same poison record. In production, consumer lag gets stuck and valid records are blocked along with it. It is safer to separate envelope parsing and aggregateDelta() with record-level try-catch, and to persistDeltas() only the events whose deltas were built successfully, adding just those to newEventIds. A test is needed where a "1 bad + 1 good" batch applies only the good delta and still acks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`
around lines 52 - 98, The batch consumer currently lets a single bad record's
exception (from objectMapper.readValue or aggregateDelta) escape the per-batch
try/catch and block ack; change consume(...) so deserialization
(objectMapper.readValue) and payload parsing inside aggregateDelta(...) are
wrapped in a record-level try/catch: for each ConsumerRecord handle errors
locally (log and skip) and only add envelope.eventId() into newEventIds and
update deltas when parsing/aggregation succeeds; ensure exceptions from
individual records do not propagate to the outer catch so ack still runs; add a
unit/integration test for consume(...) that includes one malformed record plus
one valid record and asserts only the valid delta is persisted via
persistDeltas(...) and ack is called.
🧹 Nitpick comments (3)
apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java (1)

24-35: Externalizing the weight and k constants is recommended for operational tunability

Why it matters (operationally): the current hardcoded constants make immediate tuning difficult when responding to ranking distortion or complaints, and every value change requires a redeploy, lengthening incident response.
Fix: extract WEIGHT_* and K_* into configuration, and add startup validation (k > 0, weights summing to ≈1) so a bad deploy fails fast.
Additional tests: (1) score parity with the existing values when the default config loads, (2) startup failure/validation exceptions for invalid configs (k <= 0, weight-sum drift).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java`
around lines 24 - 35, Extract the hardcoded constants (WEIGHT_ORDER_RATE,
WEIGHT_LIKE_RATE, K_ORDER_RATE, K_LIKE_RATE, K_CONFIDENCE) from ConversionScorer
into external configuration (e.g., a config/properties/Environment-backed
settings class) and load them at application startup; add validation in
ConversionScorer initialization to assert each K_* > 0 and that
WEIGHT_ORDER_RATE + WEIGHT_LIKE_RATE is approximately 1 (within a small epsilon)
and throw a clear exception to fail-fast on invalid configs; keep default values
equal to the current constants for backward compatibility and add
unit/integration tests that (1) verify default config yields existing score
behavior and (2) assert startup fails when K_* <= 0 or weight sum deviates
beyond epsilon.
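The fail-fast validation idea can be sketched in plain Java (the Spring wiring via configuration properties is omitted; the record name, fields, and epsilon are assumptions for illustration):

```java
// Validates externalized scorer settings at construction time so that
// a bad config fails fast instead of silently distorting rankings.
class ScorerConfigValidationDemo {
    record ScorerSettings(double weightOrderRate, double weightLikeRate,
                          double kOrderRate, double kLikeRate) {
        ScorerSettings {
            if (kOrderRate <= 0 || kLikeRate <= 0) {
                throw new IllegalArgumentException("k constants must be > 0");
            }
            double sum = weightOrderRate + weightLikeRate;
            if (Math.abs(sum - 1.0) > 1e-6) {
                throw new IllegalArgumentException("weights must sum to ~1, got " + sum);
            }
        }
    }

    public static void main(String[] args) {
        new ScorerSettings(0.6, 0.4, 5.0, 2.0); // valid config loads
        try {
            new ScorerSettings(0.6, 0.4, -1.0, 2.0); // k <= 0 must be rejected
            throw new AssertionError("expected rejection");
        } catch (IllegalArgumentException expected) {
            System.out.println("ok");
        }
    }
}
```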
apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorerConfig.java (1)

30-46: Hardcoded duplication of the SATURATION formula risks score drift between batch and streamer.

Operationally, the duplicated constants at lines 30–46 mean that the moment only one side changes, recalculated results diverge from real-time results, lengthening incident analysis. It is safer to pin the constants in this file as private static final and add a regression test using the same reference inputs as the streamer implementation to lock formula parity. Recommended fixed-assertion cases: (view=100, like=10, order=3), (0,0,0), (1000,100,30) with expected scores of 0.5, 0.0, and near the cap, respectively.

Recommended fix
 public class RankingScorerConfig {
+	private static final double W_VIEW = 0.15;
+	private static final double W_LIKE = 0.35;
+	private static final double W_ORDER = 0.50;
+	private static final double K_VIEW = 100.0;
+	private static final double K_LIKE = 10.0;
+	private static final double K_ORDER = 3.0;

 	private static double sat(double x, double k) {
 		if (x <= 0) return 0.0;
 		return x / (x + k);
 	}

 	`@Bean`
 	public Map<String, RankingScorer> rankingScorerMap() {
 		return Map.of(
 			"SATURATION", (view, like, order) ->
-				0.15 * sat(view, 100) + 0.35 * sat(like, 10) + 0.50 * sat(order, 3),
+				W_VIEW * sat(view, K_VIEW) + W_LIKE * sat(like, K_LIKE) + W_ORDER * sat(order, K_ORDER),
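The recommended parity regression can be pinned with fixed cases like these; the weights and k values mirror the constants shown in the diff above, and the class name is illustrative:

```java
// Regression cases locking the SATURATION formula shared by batch and streamer:
// any one-sided constant change breaks these fixed assertions.
class SaturationParityDemo {
    static double sat(double x, double k) {
        if (x <= 0) return 0.0;
        return x / (x + k);
    }

    static double score(long view, long like, long order) {
        return 0.15 * sat(view, 100) + 0.35 * sat(like, 10) + 0.50 * sat(order, 3);
    }

    public static void main(String[] args) {
        // (100, 10, 3): each signal saturates to exactly half its weight -> 0.5
        if (Math.abs(score(100, 10, 3) - 0.5) > 1e-9) throw new AssertionError();
        // zero activity scores zero
        if (score(0, 0, 0) != 0.0) throw new AssertionError();
        // large inputs approach but never exceed the 1.0 cap
        double big = score(1000, 100, 30);
        if (!(big > 0.85 && big < 1.0)) throw new AssertionError();
        System.out.println("ok");
    }
}
```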
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java (1)

49-54: Avoid manually reassembling the DTO when enriching with rank; a dedicated path is safer to maintain.

With the current approach, adding any field to ProductDetailOutDto can silently drop values only in responses that carry a rank. It is safer to unify the enrichment path behind a dedicated method or factory such as ProductDetailOutDto.withRank(rank). A good additional test verifies that after rank enrichment, every field other than rank equals the original base.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java`
around lines 49 - 54, The current manual reassembly of ProductDetailOutDto when
adding rank is fragile; instead add a dedicated factory/instance method on
ProductDetailOutDto (e.g. ProductDetailOutDto.withRank(int rank) or a static
ProductDetailOutDto.from(base).withRank(rank)) and use that from
ProductQueryFacade to produce the ranked DTO rather than manually calling the
constructor with each field; also add a unit test asserting that after
withRank(rank) all fields except rank are equal to the original base instance to
prevent regressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java`:
- Around line 37-47: getProduct() currently runs as one `@Transactional` unit
including the auxiliary getProductRank() call which can prolong DB TX time;
extract the DB-critical parts into their own transaction and perform rank lookup
outside it. Refactor by creating a new transactional method (e.g.,
loadDetailAndSaveOutbox(Long id, Long userId) or make getOrLoadProductDetail()
and saveViewOutbox() execute in a separate `@Transactional` boundary) and have
getProduct(Long id, Long userId) call that transactional helper first, then call
getProductRank(id) after the transaction completes (or asynchronously) and merge
the rank into ProductDetailOutDto; add tests to assert that rank failures/delays
do not prevent the detail + outbox path from committing.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRebuildFacade.java`:
- Around line 61-66: The parseDate method currently calls
LocalDate.parse(dateStr, DATE_FORMAT) which will throw a NPE for null inputs and
result in a 500; update parseDate(String dateStr) to explicitly check for null
or blank (e.g., using StringUtils.hasText or trim/isEmpty) and if missing throw
new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " +
dateStr) before parsing; keep the existing DateTimeParseException catch to map
invalid formats to the same CoreException. Also add unit/integration tests for
the facade endpoints that pass null/blank for from and to to assert a 400
response and the standard error JSON format.
- Around line 44-46: Normalize and validate scorerType in RankingRebuildFacade:
replace the current fallback assignment for scorerType with logic that trims and
upper-cases inDto.scorerType(), defaults to DEFAULT_SCORER_TYPE only if
null/blank after trim, and then validate against allowed values
{"SATURATION","LINEAR","CONVERSION"}; if the normalized value is not in that set
throw new CoreException(HttpStatus.BAD_REQUEST) (or CoreException(BAD_REQUEST)
as used) to reject invalid inputs immediately. Update any related logic that
uses scorerType to use the normalized variable name and add unit tests to assert
that " saturation " is accepted (normalized to "SATURATION") and that "UNKNOWN"
returns a 400 bad request.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`:
- Around line 97-104: When Redis operations in RankingCarryOverService (calls to
rankingRedisPort.batchZadd and rankingRedisPort.setTtl) throw exceptions, do not
just log and swallow them; instead upsert a row into the
ranking_projection_dirty table with statDate=tomorrowStr and a failure reason
(e.g., CARRY_OVER_FAIL or distinct reasons for batchZadd vs setTtl) so
RankingReconcileTasklet can pick it up; ensure each failure path (batchZadd
failure and setTtl failure) performs its own upsert before
continuing/suppressing the exception so the scheduler keeps running, and add
tests verifying that a dirty row is created when batchZadd or setTtl fails and
that the reconcile job will reprocess that date.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingAdminCommandController.java`:
- Line 35: Remove the `@Valid` annotation from the RankingAdminCommandController
method parameter (AdminRankingRebuildRequest request) so validation failures do
not trigger Bean Validation/MethodArgumentNotValidException; instead
move/implement request validation inside the facade/domain layer and throw
CoreException on validation failure (use the existing CoreException type) so all
errors flow to ApiControllerAdvice; update or add unit/integration tests to
assert that missing or blank mandatory fields on AdminRankingRebuildRequest
produce the CoreException-based standardized error response via
ApiControllerAdvice.

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java`:
- Around line 105-124: The test getProductWithRank currently only asserts id and
rank, so update it to also verify that all other fields from the original
ProductDetailOutDto (brandName, price, description, likeCount, etc.) are
preserved when productQueryFacade.getProduct(...) reconstructs the DTO: use the
existing detailOutDto fixture and assert equality of its fields (brandName,
price, description, likeCount and any other non-rank fields) against result (or
assert result.equalsExceptRank(detailOutDto) conceptually) in addition to the
rank assertion, while keeping the existing verifications of
productQueryService.getProductRank(1L) and
productQueryService.saveViewOutbox(10L, 1L).

In
`@apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingAdminCommandControllerE2ETest.java`:
- Around line 144-159: The test rebuildWithoutAuth currently always calls
.header(ADMIN_LDAP_HEADER, ldapHeader != null ? ldapHeader : "") so a null case
becomes an empty header and never exercises the "header not sent" path; update
the test(s) so that when ldapHeader is null you omit the .header(...) call
entirely (or split into two tests: one that omits the header and one that passes
empty/whitespace values) to verify both "header absent" and "header
present-but-empty" produce 401; locate the LDAP header constant
ADMIN_LDAP_HEADER and the mockMvc.perform(...) invocation in rebuildWithoutAuth
to implement the conditional omission or test split.

In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingRebuildTasklet.java`:
- Around line 128-131: When allProductIds.isEmpty() in RankingRebuildTasklet,
don't return immediately — first delete the existing ranking_daily_score row for
the target date and remove the Redis ZSET at key pattern ranking:all:{yyyyMMdd},
then clear prevDayScores and return 0; update the branch that currently logs and
returns to call the existing DB delete method (or add one) and the Redis delete
logic before prevDayScores.clear(), and add a unit/integration test that seeds a
score row and ZSET then runs the rebuild when counters/carry are empty to assert
both DB row and Redis ZSET are removed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java`:
- Around line 76-103: The reflectToRedis method can leave created Redis keys
without TTL if a later entry throws; change the flow so TTL is guaranteed for
dates that succeeded before an exception: wrap the per-entry processing in
try/catch/finally or call rankingRedisPort.ensureTtl(dateStr, TTL) immediately
after a successful increment for that specific date (use the existing
affectedDates set to track successes), and if you adopt a finally approach
ensure you call ensureTtl for all entries in affectedDates before rethrowing the
exception; update unit/integration tests to simulate a failure on the second
Redis operation and assert that the first date's HASH/ZSET received ensureTtl.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyCounterCommandAdapter.java`:
- Around line 66-74: In RankingDailyCounterCommandAdapter (inside the
setValues/loop that reads Map.Entry<RankingDailyKey,long[]>), validate the delta
array before calling ps.setLong: check delta != null and delta.length == 3, and
handle violations explicitly (either throw a clear IllegalArgumentException with
context including key.statDate() and key.productId(), or filter/skip the entry
and log a warning) so a malformed entry cannot cause an unhandled runtime
exception for the whole batch; add unit tests covering delta == null,
delta.length == 2 and delta.length == 4 to assert the chosen failure or skip
behavior is consistent.
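The skip-with-warning option above amounts to a small guard before `ps.setLong`. This sketch assumes the three-slot `(view, like, order)` counter layout implied by the delta array; the method name is illustrative.

```java
// Sketch: a malformed long[] delta is skipped with a warning instead of
// failing the whole JDBC batch. EXPECTED_LEN mirrors the assumed
// (view, like, order) counter layout.
class DeltaValidationSketch {
    static final int EXPECTED_LEN = 3;

    /** Returns true when the delta row may be written; logs and skips otherwise. */
    static boolean isWritable(String statDate, long productId, long[] delta) {
        if (delta == null || delta.length != EXPECTED_LEN) {
            System.err.printf("skip malformed delta statDate=%s productId=%d len=%s%n",
                statDate, productId, delta == null ? "null" : delta.length);
            return false;
        }
        return true;
    }
}
```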

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyScoreCommandAdapter.java`:
- Around line 49-67: The upsertOrganicScores method lacks null/blank and numeric
validation causing batch failures; update upsertOrganicScores to first validate
scorerType (non-null, non-blank) and throw IllegalArgumentException with a clear
message if invalid, then filter organicScores.entries into a new list (use the
existing entries variable) keeping only entries with non-null Double values that
are finite (not NaN or Infinity); if the filtered list is empty return early,
and before calling jdbcTemplate.batchUpdate log or collect warning info about
skipped keys/values for observability; use the filtered list inside the
BatchPreparedStatementSetter and add unit tests covering scorerType=null/blank
and organicScore=null/NaN/Infinity cases to assert correct skips or exception
messages (reference symbols: upsertOrganicScores, organicScores, entries,
UPSERT_ORGANIC_SQL, jdbcTemplate.batchUpdate).
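The validate-then-filter pass can be sketched with simplified types (a `Map<Long, Double>` stands in for the adapter's real entry type); an empty result signals the caller to return early before `jdbcTemplate.batchUpdate`.

```java
import java.util.*;

// Sketch: reject an invalid scorerType up front, then keep only finite,
// non-null scores so one bad value cannot abort the whole batch.
class OrganicScoreFilterSketch {
    static Map<Long, Double> filterScores(String scorerType, Map<Long, Double> organicScores) {
        if (scorerType == null || scorerType.isBlank()) {
            throw new IllegalArgumentException("scorerType must not be null or blank");
        }
        Map<Long, Double> writable = new LinkedHashMap<>();
        for (Map.Entry<Long, Double> e : organicScores.entrySet()) {
            Double v = e.getValue();
            if (v == null || v.isNaN() || v.isInfinite()) {
                System.err.println("skip non-finite score productId=" + e.getKey() + " value=" + v);
                continue;
            }
            writable.put(e.getKey(), v);
        }
        return writable; // empty map: caller returns early before batchUpdate
    }
}
```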

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingProjectionDirtyAdapter.java`:
- Around line 47-60: The markDirty(Set<LocalDate> dates, String reason) path can
fail when reason is null/blank or >32 chars; update the markDirty implementation
to validate and normalize the reason before the JDBC batch: if reason is null or
blank replace with a safe default (e.g., "UNKNOWN" or an enum name), trim it,
and truncate to the DB limit (32 chars) before passing into
jdbcTemplate.batchUpdate/BatchPreparedStatementSetter (refer to markDirty,
UPSERT_SQL, jdbcTemplate.batchUpdate, BatchPreparedStatementSetter); also add
unit tests for null, blank, and over-32-char inputs to verify the adapter writes
a normalized reason and does not throw.
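The normalization described above is a pure function and easy to isolate; the default value "UNKNOWN" is one possible choice, as the comment suggests.

```java
// Sketch: null/blank reasons fall back to a default, and anything longer
// than the 32-char column is truncated so the JDBC batch never fails on
// input shape.
class DirtyReasonSketch {
    static final int MAX_REASON_LEN = 32;

    static String normalizeReason(String reason) {
        String r = (reason == null || reason.isBlank()) ? "UNKNOWN" : reason.trim();
        return r.length() <= MAX_REASON_LEN ? r : r.substring(0, MAX_REASON_LEN);
    }
}
```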

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`:
- Around line 79-94: The current flow in RankingCollectorConsumer calls
rankingScoreService.persistDeltas(...) then attempts
rankingScoreService.reflectToRedis(...) and
rankingScoreService.markProjectionDirty(...), but if markProjectionDirty() fails
(after DB commit) the dirty flag can be permanently lost on retry; change the
implementation so markProjectionDirty is executed outside the DB transaction in
a retriable, idempotent path (e.g., push a durable task/message or call a
separate transactional service method that retries/uses an upsert within its own
transaction) whenever reflectToRedis() fails, ensure markProjectionDirty accepts
the Set<LocalDate> (from
deltas.keySet().stream().map(RankingDailyKey::statDate)) and is resilient to
duplicate calls, and add an integration test covering the sequence: DB commit
succeeds -> reflectToRedis throws -> markProjectionDirty initially fails -> on
replay the system still records the dirty date (i.e., ensure placement of the
durable task or retrying transactional service records the dirty mark).

---

Duplicate comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`:
- Around line 187-193: The getProductRank method currently logs the full
exception for each ranking failure which can flood logs; change
productRankingReader.getProductRank(productId) error handling to log only
productId and e.toString() (not pass the exception object) via
log.warn("[ProductQueryService] 랭킹 조회 실패, rank=null. productId={}, error={}",
productId, e.toString()), increment a dedicated failure metric/counter (e.g.,
productRankingFailureCounter.increment()) inside the catch, and ensure the
method still returns null; also review productRankingReader external call for
timeout/retry/circuit-breaker considerations and add a unit/integration test
that simulates repeated exceptions from productRankingReader.getProductRank() to
assert null is returned and the failure counter increments.

In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java`:
- Around line 548-565: Add two unit tests in ProductQueryServiceTest inside the
GetProductRankTest nested class to cover the missing success and
null-propagation cases: (1) stub productRankingReader.getProductRank(1L) to
return a non-null Long (e.g., 42L) and assert
productQueryService.getProductRank(1L) returns the same value, and (2) stub
productRankingReader.getProductRank(1L) to return null and assert
productQueryService.getProductRank(1L) is null; use the existing given(...)
style and assertions consistent with the class to mirror the exception test
structure.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`:
- Around line 52-98: The batch consumer currently lets a single bad record's
exception (from objectMapper.readValue or aggregateDelta) escape the per-batch
try/catch and block ack; change consume(...) so deserialization
(objectMapper.readValue) and payload parsing inside aggregateDelta(...) are
wrapped in a record-level try/catch: for each ConsumerRecord handle errors
locally (log and skip) and only add envelope.eventId() into newEventIds and
update deltas when parsing/aggregation succeeds; ensure exceptions from
individual records do not propagate to the outer catch so ack still runs; add a
unit/integration test for consume(...) that includes one malformed record plus
one valid record and asserts only the valid delta is persisted via
persistDeltas(...) and ack is called.
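The record-level isolation can be sketched independently of Kafka: a parser function stands in for `objectMapper.readValue` plus `aggregateDelta`, and each record is parsed inside its own try/catch so a malformed payload is logged and skipped while valid ones still reach persistence and the batch ack.

```java
import java.util.*;
import java.util.function.Function;

// Sketch: per-record try/catch keeps one bad payload from blocking the
// whole batch. Only successful parses are collected; the caller persists
// them and acks unconditionally.
class RecordIsolationSketch {
    record Delta(String eventId, long productId) {}

    static List<Delta> parseBatch(List<String> rawRecords, Function<String, Delta> parser) {
        List<Delta> deltas = new ArrayList<>();
        for (String raw : rawRecords) {
            try {
                deltas.add(parser.apply(raw)); // only successful parses are kept
            } catch (Exception e) {
                System.err.println("skip malformed record: " + e.getMessage());
            }
        }
        return deltas;
    }
}
```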

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java`:
- Around line 49-54: The current manual reassembly of ProductDetailOutDto when
adding rank is fragile; instead add a dedicated factory/instance method on
ProductDetailOutDto (e.g. ProductDetailOutDto.withRank(int rank) or a static
ProductDetailOutDto.from(base).withRank(rank)) and use that from
ProductQueryFacade to produce the ranked DTO rather than manually calling the
constructor with each field; also add a unit test asserting that after
withRank(rank) all fields except rank are equal to the original base instance to
prevent regressions.
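The withRank idea can be sketched with a record copy-method; the fields here are a reduced, hypothetical stand-in for ProductDetailOutDto, and the benefit is that a field-order change in the record cannot silently scramble a manual constructor call at the facade.

```java
// Sketch: a copy-method that changes only rank, so callers never repeat
// the full constructor argument list. Fields are illustrative stand-ins.
record ProductDetailSketch(long id, String name, long price, Long rank) {
    ProductDetailSketch withRank(Long rank) {
        return new ProductDetailSketch(id, name, price, rank); // only rank changes
    }
}
```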

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java`:
- Around line 24-35: Extract the hardcoded constants (WEIGHT_ORDER_RATE,
WEIGHT_LIKE_RATE, K_ORDER_RATE, K_LIKE_RATE, K_CONFIDENCE) from ConversionScorer
into external configuration (e.g., a config/properties/Environment-backed
settings class) and load them at application startup; add validation in
ConversionScorer initialization to assert each K_* > 0 and that
WEIGHT_ORDER_RATE + WEIGHT_LIKE_RATE is approximately 1 (within a small epsilon)
and throw a clear exception to fail-fast on invalid configs; keep default values
equal to the current constants for backward compatibility and add
unit/integration tests that (1) verify default config yields existing score
behavior and (2) assert startup fails when K_* <= 0 or weight sum deviates
beyond epsilon.
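The fail-fast validation can be sketched as a small settings holder. The field names mirror the constants the comment mentions, but the numeric values below are illustrative defaults, not the PR's actual weights.

```java
// Sketch: construction rejects any K <= 0 or a weight pair that does not
// sum to ~1 within epsilon, so a bad externalized config fails at startup
// rather than producing silently wrong scores.
class ScorerConfigSketch {
    static final double EPSILON = 1e-6;
    final double weightOrderRate, weightLikeRate, kOrderRate, kLikeRate, kConfidence;

    ScorerConfigSketch(double weightOrderRate, double weightLikeRate,
                       double kOrderRate, double kLikeRate, double kConfidence) {
        if (kOrderRate <= 0 || kLikeRate <= 0 || kConfidence <= 0) {
            throw new IllegalStateException("every K_* constant must be > 0");
        }
        if (Math.abs(weightOrderRate + weightLikeRate - 1.0) > EPSILON) {
            throw new IllegalStateException("weights must sum to 1, got "
                + (weightOrderRate + weightLikeRate));
        }
        this.weightOrderRate = weightOrderRate;
        this.weightLikeRate = weightLikeRate;
        this.kOrderRate = kOrderRate;
        this.kLikeRate = kLikeRate;
        this.kConfidence = kConfidence;
    }
}
```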

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5a90c5fd-563e-49aa-bf54-489d3f009b30

📥 Commits

Reviewing files that changed from the base of the PR and between 929c086 and 654c72b.

⛔ Files ignored due to path filters (26)
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductDetailOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/cache/dto/ProductCacheDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/client/ranking/ProductRankingReader.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/AdminRankingRebuildOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankedResult.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingItemOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/out/RankingPageOutDto.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/ProductScoreEntry.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/RankingDailyScoreCarryWritePort.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/RankingDailyScoreReadPort.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductInfo.java is excluded by !**/out/** and included by **
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/port/out/client/catalog/RankingProductReader.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/CounterResult.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingDailyCounterCommandPort.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingDailyScoreCommandPort.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingProjectionDirtyPort.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingRedisPort.java is excluded by !**/out/** and included by **
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/port/out/RankingScorer.java is excluded by !**/out/** and included by **
  • round9-docs/00-requirements.md is excluded by !**/*.md and included by **
  • round9-docs/01-information.md is excluded by !**/*.md and included by **
  • round9-docs/02-study-data.md is excluded by !**/*.md and included by **
  • round9-docs/03-study-deepdive.md is excluded by !**/*.md and included by **
  • round9-docs/04-decisions.md is excluded by !**/*.md and included by **
  • round9-docs/05-e2e-verification.md is excluded by !**/*.md and included by **
  • round9-docs/06-rebuild-ready-design.md is excluded by !**/*.md and included by **
📒 Files selected for processing (65)
  • .http/ranking.http
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/interfaces/web/response/ProductDetailResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/in/AdminRankingRebuildInDto.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingQueryFacade.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRankFacade.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRebuildFacade.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/jdbc/RankingDailyScoreJdbcAdapter.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingAdminCommandController.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/request/AdminRankingRebuildRequest.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/AdminRankingRebuildResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingItemResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/facade/RankingQueryFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/scheduler/RankingCarryOverSchedulerTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingAdminCommandControllerE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java
  • apps/commerce-api/src/test/resources/application-test.yml
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingRebuildJobConfig.java
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingReconcileJobConfig.java
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorer.java
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorerConfig.java
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingRebuildTasklet.java
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingReconcileTasklet.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/dto/RankingDailyKey.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyCounterCommandAdapter.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyScoreCommandAdapter.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingProjectionDirtyAdapter.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingDailyCounterEntity.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingDailyCounterId.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingDailyScoreEntity.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingDailyScoreId.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingProjectionDirtyEntity.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/entity/RankingProjectionDirtyId.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/jpa/RankingDailyCounterJpaRepository.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/jpa/RankingDailyScoreJpaRepository.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/jpa/RankingProjectionDirtyJpaRepository.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/ConversionScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/SaturationScorer.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/RankingDailyCounterCommandAdapterTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/RankingDailyScoreCommandAdapterTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapterTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/scorer/ScorerComparisonTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumerTest.java
✅ Files skipped from review due to trivial changes (17)
  • apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/scorer/RankingScorer.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/dto/in/AdminRankingRebuildInDto.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/application/dto/RankingDailyKey.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/jpa/RankingDailyScoreJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/request/AdminRankingRebuildRequest.java
  • apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingItemResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/acl/catalog/RankingProductReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/acl/ranking/ProductRankingReaderImpl.java
  • .http/ranking.http
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/AdminRankingRebuildResponse.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/response/RankingPageResponse.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumerTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/application/service/RankingScoreServiceTest.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapterTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/facade/RankingQueryFacadeTest.java
🚧 Files skipped from review as they are similar to previous changes (17)
  • apps/commerce-api/src/test/resources/application-test.yml
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRankFacade.java
  • apps/commerce-api/src/main/java/com/loopers/catalog/product/interfaces/web/response/ProductDetailResponse.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/scheduler/RankingCarryOverSchedulerTest.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingRankService.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/scorer/LinearScorer.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/scheduler/RankingCarryOverScheduler.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingQueryController.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingQueryService.java
  • apps/commerce-streamer/src/test/java/com/loopers/ranking/infrastructure/scorer/ScorerComparisonTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingRankServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingQueryServiceTest.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingQueryFacade.java
  • apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-api/src/main/java/com/loopers/ranking/infrastructure/redis/RankingRedisAdapter.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/application/service/RankingCarryOverServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/ranking/interfaces/RankingQueryControllerE2ETest.java

Comment on lines 37 to 47
@Transactional
public ProductDetailOutDto getProduct(Long id, Long userId) {

// Read Model에서 ProductCacheDto → ProductDetailOutDto 변환 (캐시 적용)
ProductDetailOutDto result = productQueryService.getOrLoadProductDetail(id);
// Read Model에서 ProductCacheDto → ProductDetailOutDto 변환 (캐시 적용, rank=null)
ProductDetailOutDto base = productQueryService.getOrLoadProductDetail(id);

// 오늘의 랭킹 순위 조회 (ranking BC)
Long rank = productQueryService.getProductRank(id);

// VIEW Outbox 저장 (같은 TX — At Least Once 보장, D7)
productQueryService.saveViewOutbox(userId, id);
⚠️ Potential issue | 🟠 Major

It is safer not to perform the auxiliary ranking lookup inside the write transaction.

Currently getProduct() runs as a single transaction with the ranking lookup performed first inside it, so when the ranking store slows down, every product-detail request holds its DB connection and transaction longer than necessary. It is safer to wrap only getOrLoadProductDetail() and saveViewOutbox() in a separate transactional method and enrich the rank outside of it. As an additional test, verify that the product lookup and outbox save paths complete independently even when the ranking lookup is delayed or fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java`
around lines 37 - 47, getProduct() currently runs as one `@Transactional` unit
including the auxiliary getProductRank() call which can prolong DB TX time;
extract the DB-critical parts into their own transaction and perform rank lookup
outside it. Refactor by creating a new transactional method (e.g.,
loadDetailAndSaveOutbox(Long id, Long userId) or make getOrLoadProductDetail()
and saveViewOutbox() execute in a separate `@Transactional` boundary) and have
getProduct(Long id, Long userId) call that transactional helper first, then call
getProductRank(id) after the transaction completes (or asynchronously) and merge
the rank into ProductDetailOutDto; add tests to assert that rank failures/delays
do not prevent the detail + outbox path from committing.
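The split suggested in the prompt above can be sketched outside Spring: a transactional helper interface stands in for the proposed loadDetailAndSaveOutbox method, and the rank lookup runs only after it completes, so a slow or failing ranking store cannot extend or break the detail + outbox commit. All names here are illustrative stand-ins.

```java
// Sketch: DB-critical work in its own (notionally transactional) helper,
// best-effort rank lookup afterwards. A rank failure leaves rank=null
// without touching the committed transaction.
class ProductDetailFlowSketch {
    record Detail(long id, Long rank) {}

    interface Tx { Detail loadDetailAndSaveOutbox(long id, long userId); } // own transactional boundary
    interface RankReader { Long getProductRank(long id); }                 // may be slow or throw

    static Detail getProduct(long id, long userId, Tx tx, RankReader rankReader) {
        Detail base = tx.loadDetailAndSaveOutbox(id, userId); // commits before rank lookup
        Long rank = null;
        try {
            rank = rankReader.getProductRank(id);
        } catch (Exception e) {
            // rank is best-effort; swallow and keep rank=null
        }
        return new Detail(base.id(), rank);
    }
}
```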

Comment on lines +44 to +46
String scorerType = (inDto.scorerType() != null && !inDto.scorerType().isBlank())
? inDto.scorerType()
: DEFAULT_SCORER_TYPE;

⚠️ Potential issue | 🟠 Major

An unvalidated scorerType can cause batch failures after the request was already ACCEPTED.

From an operational standpoint, lines 44-46 do not reject an invalid scorerType early, so the API returns 202 while the actual batch fails later, a delayed failure mode. It is safer to normalize the input with trim + upper-case and reject anything outside the allowed values (SATURATION, LINEAR, CONVERSION) immediately with CoreException(BAD_REQUEST). Additional tests should verify that the input " saturation " is normalized successfully and that "UNKNOWN" fails with 400.

Suggested fix
+import java.util.Locale;
+import java.util.Set;
...
+	private static final Set<String> ALLOWED_SCORERS = Set.of("SATURATION", "LINEAR", "CONVERSION");
...
-		String scorerType = (inDto.scorerType() != null && !inDto.scorerType().isBlank())
-			? inDto.scorerType()
-			: DEFAULT_SCORER_TYPE;
+		String scorerType = (inDto.scorerType() != null && !inDto.scorerType().isBlank())
+			? inDto.scorerType().trim().toUpperCase(Locale.ROOT)
+			: DEFAULT_SCORER_TYPE;
+		if (!ALLOWED_SCORERS.contains(scorerType)) {
+			throw new CoreException(ErrorType.BAD_REQUEST, "지원하지 않는 scorerType 입니다: " + scorerType);
+		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
String scorerType = (inDto.scorerType() != null && !inDto.scorerType().isBlank())
? inDto.scorerType()
: DEFAULT_SCORER_TYPE;
import java.util.Locale;
import java.util.Set;
// ... other imports
public class RankingRebuildFacade {
private static final Set<String> ALLOWED_SCORERS = Set.of("SATURATION", "LINEAR", "CONVERSION");
// ... other code ...
String scorerType = (inDto.scorerType() != null && !inDto.scorerType().isBlank())
? inDto.scorerType().trim().toUpperCase(Locale.ROOT)
: DEFAULT_SCORER_TYPE;
if (!ALLOWED_SCORERS.contains(scorerType)) {
throw new CoreException(ErrorType.BAD_REQUEST, "지원하지 않는 scorerType 입니다: " + scorerType);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRebuildFacade.java`
around lines 44 - 46, Normalize and validate scorerType in RankingRebuildFacade:
replace the current fallback assignment for scorerType with logic that trims and
upper-cases inDto.scorerType(), defaults to DEFAULT_SCORER_TYPE only if
null/blank after trim, and then validate against allowed values
{"SATURATION","LINEAR","CONVERSION"}; if the normalized value is not in that set
throw new CoreException(HttpStatus.BAD_REQUEST) (or CoreException(BAD_REQUEST)
as used) to reject invalid inputs immediately. Update any related logic that
uses scorerType to use the normalized variable name and add unit tests to assert
that " saturation " is accepted (normalized to "SATURATION") and that "UNKNOWN"
returns a 400 bad request.

Comment on lines +61 to +66
private LocalDate parseDate(String dateStr) {
try {
return LocalDate.parse(dateStr, DATE_FORMAT);
} catch (DateTimeParseException e) {
throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
}

⚠️ Potential issue | 🟠 Major

A null input to parseDate can leak out as a 500.

From an operational standpoint, at lines 61-66 LocalDate.parse throws an NPE when dateStr == null, so the response can be a 500 instead of the standard BAD_REQUEST. Checking for null/blank first and converting it to CoreException(BAD_REQUEST) keeps failure classification and alert routing consistent. Additional tests should verify that requests with a null/blank from or to still return 400 in the standard error format.

Suggested fix
 	private LocalDate parseDate(String dateStr) {
+		if (dateStr == null || dateStr.isBlank()) {
+			throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
+		}
 		try {
 			return LocalDate.parse(dateStr, DATE_FORMAT);
 		} catch (DateTimeParseException e) {
 			throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
 		}
 	}
📝 Committable suggestion


Suggested change
private LocalDate parseDate(String dateStr) {
try {
return LocalDate.parse(dateStr, DATE_FORMAT);
} catch (DateTimeParseException e) {
throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
}
private LocalDate parseDate(String dateStr) {
if (dateStr == null || dateStr.isBlank()) {
throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
}
try {
return LocalDate.parse(dateStr, DATE_FORMAT);
} catch (DateTimeParseException e) {
throw new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " + dateStr);
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/facade/RankingRebuildFacade.java`
around lines 61 - 66, The parseDate method currently calls
LocalDate.parse(dateStr, DATE_FORMAT) which will throw a NPE for null inputs and
result in a 500; update parseDate(String dateStr) to explicitly check for null
or blank (e.g., using StringUtils.hasText or trim/isEmpty) and if missing throw
new CoreException(ErrorType.BAD_REQUEST, "날짜 형식이 올바르지 않습니다 (yyyyMMdd): " +
dateStr) before parsing; keep the existing DateTimeParseException catch to map
invalid formats to the same CoreException. Also add unit/integration tests for
the facade endpoints that pass null/blank for from and to to assert a 400
response and the standard error JSON format.

Comment on lines +97 to +104
// 4. Redis: ZADD + TTL (best-effort — 실패 시 로그 경고, reconcile job(S8)에서 복구 예정)
// TODO: S8 reconcile job 구현 후 dirty mark 추가 (CARRY_OVER_FAIL reason)
try {
rankingRedisPort.batchZadd(tomorrowStr, redisScores);
rankingRedisPort.setTtl(tomorrowStr, TTL);
} catch (Exception e) {
log.error("[RankingCarryOver] Redis carry-over 실패 → 다음 날 ranking 콜드 스타트 누락 위험. reconcile 필요. date={}", tomorrowStr, e);
}

⚠️ Potential issue | 🟠 Major

Without a dirty mark on Redis failure, the recovery path is severed.

Currently the Redis exception is only logged and swallowed, so tomorrow's date is never recorded in ranking_projection_dirty. If batchZadd() or setTtl() fails in production, RankingReconcileTasklet has no basis for reprocessing that date, and the next day's ranking can remain empty or stale until traffic accumulates. Before swallowing the exception, upsert statDate=tomorrow with the failure reason into the dirty table, and let the scheduler keep running independently. In addition, it is worth adding tests that a dirty row is created for both a batchZadd() failure and a setTtl() failure, and that reconcile then reprocesses that date.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/application/service/RankingCarryOverService.java`
around lines 97 - 104, When Redis operations in RankingCarryOverService (calls
to rankingRedisPort.batchZadd and rankingRedisPort.setTtl) throw exceptions, do
not just log and swallow them; instead upsert a row into the
ranking_projection_dirty table with statDate=tomorrowStr and a failure reason
(e.g., CARRY_OVER_FAIL or distinct reasons for batchZadd vs setTtl) so
RankingReconcileTasklet can pick it up; ensure each failure path (batchZadd
failure and setTtl failure) performs its own upsert before
continuing/suppressing the exception so the scheduler keeps running, and add
tests verifying that a dirty row is created when batchZadd or setTtl fails and
that the reconcile job will reprocess that date.
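The per-step dirty marking described above can be sketched with an in-memory stand-in for ranking_projection_dirty; the two Runnables stand in for the batchZadd and setTtl calls, and the reason strings are illustrative.

```java
import java.time.LocalDate;
import java.util.*;

// Sketch: each Redis step that throws records a dirty row with its own
// reason before the exception is suppressed, so the reconcile job always
// has a date to reprocess and the scheduler keeps running.
class CarryOverDirtySketch {
    final Map<LocalDate, String> dirtyTable = new LinkedHashMap<>(); // ranking_projection_dirty stand-in

    void carryOver(LocalDate tomorrow, Runnable batchZadd, Runnable setTtl) {
        try {
            batchZadd.run();
        } catch (Exception e) {
            dirtyTable.put(tomorrow, "CARRY_OVER_ZADD_FAIL"); // upsert dirty mark
            return; // suppress; reconcile picks this date up later
        }
        try {
            setTtl.run();
        } catch (Exception e) {
            dirtyTable.put(tomorrow, "CARRY_OVER_TTL_FAIL");
        }
    }
}
```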

@PostMapping("/rebuild")
public ResponseEntity<AdminRankingRebuildResponse> requestRebuild(
@RequestHeader(value = "X-Loopers-Ldap", required = false) String ldapHeader,
@Valid @RequestBody AdminRankingRebuildRequest request

⚠️ Potential issue | 🟠 Major

Using @Valid conflicts with commerce-api's single CoreException error path.

From an operations standpoint, the @Valid on line 35 routes validation failures through the Bean Validation exception path, which can break response-format consistency. It is safer to drop @Valid from the controller and handle validation failures in the facade/domain via CoreException, keeping ApiControllerAdvice as the single error path. Add tests verifying that requests with missing or blank required fields still produce the standard CoreException-based error response.

Based on learnings: "In the loop-pack-be-l2-vol3-java project, enforce unified error handling by routing errors through CoreException to ApiControllerAdvice to ensure a consistent response format. Do not introduce MethodArgumentNotValidException handlers or Bean Validation handling, as that would create inconsistent error handling patterns. This guidance applies to all Java files under the commerce-api module (source code under apps/commerce-api/src/main/java/)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/ranking/interfaces/web/controller/RankingAdminCommandController.java`
at line 35, Remove the `@Valid` annotation from the RankingAdminCommandController
method parameter (AdminRankingRebuildRequest request) so validation failures do
not trigger Bean Validation/MethodArgumentNotValidException; instead
move/implement request validation inside the facade/domain layer and throw
CoreException on validation failure (use the existing CoreException type) so all
errors flow to ApiControllerAdvice; update or add unit/integration tests to
assert that missing or blank mandatory fields on AdminRankingRebuildRequest
produce the CoreException-based standardized error response via
ApiControllerAdvice.
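A small sketch of what facade-level validation on the single error path could look like. The `CoreException` shape, `ErrorType` string, and request fields here are stand-ins, not the project's actual types:

```java
public class RebuildValidationSketch {

    // stand-in for the project's CoreException (real one lives elsewhere)
    static class CoreException extends RuntimeException {
        final String errorType;
        CoreException(String errorType, String message) {
            super(message);
            this.errorType = errorType;
        }
    }

    // hypothetical request shape
    record RebuildRequest(String scorerType, String statDate) {}

    // Facade-level validation: the controller no longer carries @Valid,
    // so every failure funnels into the CoreException → ApiControllerAdvice path.
    static void requestRebuild(RebuildRequest request) {
        if (request.scorerType() == null || request.scorerType().isBlank()) {
            throw new CoreException("BAD_REQUEST", "scorerType must not be blank");
        }
        if (request.statDate() == null || request.statDate().isBlank()) {
            throw new CoreException("BAD_REQUEST", "statDate must not be blank");
        }
        // ... dispatch rebuild job
    }

    public static void main(String[] args) {
        try {
            requestRebuild(new RebuildRequest(" ", "2026-04-13"));
        } catch (CoreException e) {
            System.out.println(e.errorType); // prints BAD_REQUEST
        }
    }
}
```

Because the throw site is the facade, a missing field and a domain rule violation produce the same error envelope, which is the consistency the review asks for.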

Comment on lines +76 to +103
public void reflectToRedis(Map<RankingDailyKey, long[]> deltas) {
    Set<LocalDate> affectedDates = new HashSet<>();

    for (Map.Entry<RankingDailyKey, long[]> entry : deltas.entrySet()) {
        RankingDailyKey key = entry.getKey();
        Long productId = key.productId();
        long[] delta = entry.getValue();
        String dateStr = key.statDate().format(DATE_FORMAT);

        // update Redis counters and obtain old/new counts
        CounterResult counts = rankingRedisPort.incrementCounterAndGetCounts(
            dateStr, productId, delta[0], delta[1], delta[2]);

        // compute scorer delta
        double scoreDelta = computeScoreDelta(counts);

        // apply the score increment to the ZSET
        if (scoreDelta != 0.0) {
            rankingRedisPort.incrementScore(dateStr, productId, scoreDelta);
        }

        affectedDates.add(key.statDate());
    }

    // guarantee TTL (once per date)
    for (LocalDate date : affectedDates) {
        rankingRedisPort.ensureTtl(date.format(DATE_FORMAT), TTL);
    }

⚠️ Potential issue | 🟠 Major

On partial failure the TTL step is skipped, leaving daily Redis keys that never expire.

If some dates' Redis updates succeed at line 86 and a later entry then throws, lines 101-103 never run, so the HASH/ZSET already created get no TTL. In production this leaves non-expiring daily keys, causing memory growth and stale-ranking pollution. It is safer to call ensureTtl() immediately on the first success for each date, or at minimum guarantee TTL in a finally block for the affectedDates accumulated so far before rethrowing. A test is needed showing that when two dates are processed and the second Redis call is forced to fail, the first date's key still receives a TTL.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/application/service/RankingScoreService.java`
around lines 76 - 103, The reflectToRedis method can leave created Redis keys
without TTL if a later entry throws; change the flow so TTL is guaranteed for
dates that succeeded before an exception: wrap the per-entry processing in
try/catch/finally or call rankingRedisPort.ensureTtl(dateStr, TTL) immediately
after a successful increment for that specific date (use the existing
affectedDates set to track successes), and if you adopt a finally approach
ensure you call ensureTtl for all entries in affectedDates before rethrowing the
exception; update unit/integration tests to simulate a failure on the second
Redis operation and assert that the first date's HASH/ZSET received ensureTtl.
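A compact sketch of the "TTL on first success per date" variant, with a stand-in `RedisPort` interface replacing the real `RankingRedisPort` (the interface and method shapes are assumptions):

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class TtlPerDateSketch {
    // stand-in for RankingRedisPort (hypothetical)
    interface RedisPort {
        void incrementScore(String dateKey, long productId);
        void ensureTtl(String dateKey);
    }

    static void reflect(Map<String, Long> entries, RedisPort port) {
        Set<String> ttlDone = new HashSet<>();
        for (Map.Entry<String, Long> e : entries.entrySet()) {
            port.incrementScore(e.getKey(), e.getValue());
            if (ttlDone.add(e.getKey())) {   // first success for this date
                port.ensureTtl(e.getKey());  // TTL guaranteed before any later entry can fail
            }
        }
    }

    public static void main(String[] args) {
        Set<String> ttls = new HashSet<>();
        RedisPort flaky = new RedisPort() {
            public void incrementScore(String dateKey, long productId) {
                // simulate the second date's Redis call failing
                if (dateKey.equals("2026-04-14")) throw new IllegalStateException("redis error");
            }
            public void ensureTtl(String dateKey) { ttls.add(dateKey); }
        };
        Map<String, Long> entries = new LinkedHashMap<>();
        entries.put("2026-04-13", 1L);
        entries.put("2026-04-14", 2L);
        try {
            reflect(entries, flaky);
        } catch (IllegalStateException ignored) {
        }
        System.out.println(ttls.contains("2026-04-13")); // prints true
    }
}
```

This is exactly the failure-injection test the comment asks for: the second call throws, yet the first date's key already has its TTL.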

Comment on lines +66 to +74
Map.Entry<RankingDailyKey, long[]> entry = entries.get(i);
RankingDailyKey key = entry.getKey();
long[] delta = entry.getValue(); // [view, like, order]

ps.setDate(1, Date.valueOf(key.statDate()));
ps.setLong(2, key.productId());
ps.setLong(3, delta[0]);
ps.setLong(4, delta[1]);
ps.setLong(5, delta[2]);

⚠️ Potential issue | 🟠 Major

Without validation of the delta array contract, a runtime exception can halt batch consumption.

From an operations standpoint, if an aggregation input is malformed (null array or fewer than 3 elements), the exception thrown inside setValues fails the entire batch and reprocessing delay accumulates. Validate each entry's delta shape before the DB call, and handle violations via an explicit exception or a filtering policy. Add tests covering delta == null, length 2, and length 4 to confirm the failure/handling policy behaves consistently.

Suggested fix
 	public void upsertDeltas(Map<RankingDailyKey, long[]> deltas) {
 		if (deltas == null || deltas.isEmpty()) {
 			return;
 		}
+		for (Map.Entry<RankingDailyKey, long[]> entry : deltas.entrySet()) {
+			long[] delta = entry.getValue();
+			if (delta == null || delta.length != 3) {
+				throw new IllegalArgumentException("delta must be [view, like, order]");
+			}
+		}
 
 		List<Map.Entry<RankingDailyKey, long[]>> entries = new ArrayList<>(deltas.entrySet());

As per coding guidelines, **/*.java: verify that null handling and exception flow are clear.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyCounterCommandAdapter.java`
around lines 66 - 74, In RankingDailyCounterCommandAdapter (inside the
setValues/loop that reads Map.Entry<RankingDailyKey,long[]>), validate the delta
array before calling ps.setLong: check delta != null and delta.length == 3, and
handle violations explicitly (either throw a clear IllegalArgumentException with
context including key.statDate() and key.productId(), or filter/skip the entry
and log a warning) so a malformed entry cannot cause an unhandled runtime
exception for the whole batch; add unit tests covering delta == null,
delta.length == 2 and delta.length == 4 to assert the chosen failure or skip
behavior is consistent.
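The requested null/length-2/length-4 coverage can be sketched without any framework; `validateDelta` and `rejects` are illustrative names, not the adapter's API:

```java
public class DeltaContractSketch {

    // the validation the review asks for: reject anything that is not [view, like, order]
    static void validateDelta(long[] delta) {
        if (delta == null || delta.length != 3) {
            throw new IllegalArgumentException("delta must be [view, like, order]");
        }
    }

    // helper: true if validateDelta rejects the input
    static boolean rejects(long[] delta) {
        try {
            validateDelta(delta);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rejects(null));                    // prints true
        System.out.println(rejects(new long[] {1, 2}));       // prints true
        System.out.println(rejects(new long[] {1, 2, 3, 4})); // prints true
        System.out.println(rejects(new long[] {1, 2, 3}));    // prints false
    }
}
```

Running the check before `jdbcTemplate.batchUpdate` means a single malformed entry fails fast with a clear message instead of aborting the whole batch mid-statement.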

Comment on lines +49 to +67
public void upsertOrganicScores(Map<RankingDailyKey, Double> organicScores, String scorerType) {
    if (organicScores == null || organicScores.isEmpty()) {
        return;
    }

    List<Map.Entry<RankingDailyKey, Double>> entries = new ArrayList<>(organicScores.entrySet());
    Timestamp now = Timestamp.valueOf(LocalDateTime.now());

    jdbcTemplate.batchUpdate(UPSERT_ORGANIC_SQL, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Map.Entry<RankingDailyKey, Double> entry = entries.get(i);
            RankingDailyKey key = entry.getKey();

            ps.setDate(1, Date.valueOf(key.statDate()));
            ps.setString(2, scorerType);
            ps.setLong(3, key.productId());
            ps.setDouble(4, entry.getValue());
            ps.setTimestamp(5, now);

⚠️ Potential issue | 🟠 Major

No null guards for scorerType/organicScore, so the batch path can fail outright.

From an operations standpoint, the current implementation fails the entire batch with a SQL exception or NPE when scorerType is empty or a score value is null. Validate scorerType (including blanks) and each entry's organicScore (null/non-finite numbers) at the start of the method to make failure causes explicit and limit fault propagation. Add tests covering scorerType = null/blank and organicScore = null/NaN/Infinity to pin down the exception messages and failure points.

Suggested fix
 	public void upsertOrganicScores(Map<RankingDailyKey, Double> organicScores, String scorerType) {
 		if (organicScores == null || organicScores.isEmpty()) {
 			return;
 		}
+		if (scorerType == null || scorerType.isBlank()) {
+			throw new IllegalArgumentException("scorerType must not be blank");
+		}
+		for (Map.Entry<RankingDailyKey, Double> entry : organicScores.entrySet()) {
+			Double score = entry.getValue();
+			if (score == null || !Double.isFinite(score)) {
+				throw new IllegalArgumentException("organicScore must be finite");
+			}
+		}
 
 		List<Map.Entry<RankingDailyKey, Double>> entries = new ArrayList<>(organicScores.entrySet());

As per coding guidelines, **/*.java: verify that null handling and exception flow are clear.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting code):

public void upsertOrganicScores(Map<RankingDailyKey, Double> organicScores, String scorerType) {
    if (organicScores == null || organicScores.isEmpty()) {
        return;
    }
    if (scorerType == null || scorerType.isBlank()) {
        throw new IllegalArgumentException("scorerType must not be blank");
    }
    for (Map.Entry<RankingDailyKey, Double> entry : organicScores.entrySet()) {
        Double score = entry.getValue();
        if (score == null || !Double.isFinite(score)) {
            throw new IllegalArgumentException("organicScore must be finite");
        }
    }

    List<Map.Entry<RankingDailyKey, Double>> entries = new ArrayList<>(organicScores.entrySet());
    Timestamp now = Timestamp.valueOf(LocalDateTime.now());

    jdbcTemplate.batchUpdate(UPSERT_ORGANIC_SQL, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Map.Entry<RankingDailyKey, Double> entry = entries.get(i);
            RankingDailyKey key = entry.getKey();

            ps.setDate(1, Date.valueOf(key.statDate()));
            ps.setString(2, scorerType);
            ps.setLong(3, key.productId());
            ps.setDouble(4, entry.getValue());
            ps.setTimestamp(5, now);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingDailyScoreCommandAdapter.java`
around lines 49 - 67, The upsertOrganicScores method lacks null/blank and
numeric validation causing batch failures; update upsertOrganicScores to first
validate scorerType (non-null, non-blank) and throw IllegalArgumentException
with a clear message if invalid, then filter organicScores.entries into a new
list (use the existing entries variable) keeping only entries with non-null
Double values that are finite (not NaN or Infinity); if the filtered list is
empty return early, and before calling jdbcTemplate.batchUpdate log or collect
warning info about skipped keys/values for observability; use the filtered list
inside the BatchPreparedStatementSetter and add unit tests covering
scorerType=null/blank and organicScore=null/NaN/Infinity cases to assert correct
skips or exception messages (reference symbols: upsertOrganicScores,
organicScores, entries, UPSERT_ORGANIC_SQL, jdbcTemplate.batchUpdate).

Comment on lines +47 to +60
public void markDirty(Set<LocalDate> dates, String reason) {
if (dates == null || dates.isEmpty()) {
return;
}

List<LocalDate> dateList = new ArrayList<>(dates);
Timestamp now = Timestamp.valueOf(LocalDateTime.now());

jdbcTemplate.batchUpdate(UPSERT_SQL, new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
ps.setDate(1, Date.valueOf(dateList.get(i)));
ps.setString(2, reason);
ps.setTimestamp(3, now);

⚠️ Potential issue | 🟠 Major

With no validation of reason, the dirty-marking path can itself fail on top of the original failure.

From an operations standpoint, the moment Redis write fails is already a fault path; if reason is null or exceeds 32 characters, the DB batch throws and the dirty signal itself is lost. Restrict reason to fixed codes (or an enum), or at minimum normalize null/length before passing it to the batch. Also strengthen adapter tests for null, blank, and over-32-character inputs.

Suggested fix
 public class RankingProjectionDirtyAdapter implements RankingProjectionDirtyPort {
+	private static final int REASON_MAX_LENGTH = 32;
+	private static final String DEFAULT_REASON = "REDIS_WRITE_FAILED";
@@
 	public void markDirty(Set<LocalDate> dates, String reason) {
 		if (dates == null || dates.isEmpty()) {
 			return;
 		}
+		String normalizedReason = normalizeReason(reason);
@@
 			public void setValues(PreparedStatement ps, int i) throws SQLException {
 				ps.setDate(1, Date.valueOf(dateList.get(i)));
-				ps.setString(2, reason);
+				ps.setString(2, normalizedReason);
 				ps.setTimestamp(3, now);
 			}
@@
 	}
+
+	private String normalizeReason(String reason) {
+		if (reason == null || reason.isBlank()) return DEFAULT_REASON;
+		return reason.length() <= REASON_MAX_LENGTH
+			? reason
+			: reason.substring(0, REASON_MAX_LENGTH);
+	}
 }

As per coding guidelines, **/*.java: verify that null handling and exception flow are clear.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting code):

public class RankingProjectionDirtyAdapter implements RankingProjectionDirtyPort {
    private static final int REASON_MAX_LENGTH = 32;
    private static final String DEFAULT_REASON = "REDIS_WRITE_FAILED";

    public void markDirty(Set<LocalDate> dates, String reason) {
        if (dates == null || dates.isEmpty()) {
            return;
        }
        String normalizedReason = normalizeReason(reason);

        List<LocalDate> dateList = new ArrayList<>(dates);
        Timestamp now = Timestamp.valueOf(LocalDateTime.now());

        jdbcTemplate.batchUpdate(UPSERT_SQL, new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                ps.setDate(1, Date.valueOf(dateList.get(i)));
                ps.setString(2, normalizedReason);
                ps.setTimestamp(3, now);
            }
        });
    }

    private String normalizeReason(String reason) {
        if (reason == null || reason.isBlank()) return DEFAULT_REASON;
        return reason.length() <= REASON_MAX_LENGTH
            ? reason
            : reason.substring(0, REASON_MAX_LENGTH);
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/infrastructure/RankingProjectionDirtyAdapter.java`
around lines 47 - 60, The markDirty(Set<LocalDate> dates, String reason) path
can fail when reason is null/blank or >32 chars; update the markDirty
implementation to validate and normalize the reason before the JDBC batch: if
reason is null or blank replace with a safe default (e.g., "UNKNOWN" or an enum
name), trim it, and truncate to the DB limit (32 chars) before passing into
jdbcTemplate.batchUpdate/BatchPreparedStatementSetter (refer to markDirty,
UPSERT_SQL, jdbcTemplate.batchUpdate, BatchPreparedStatementSetter); also add
unit tests for null, blank, and over-32-char inputs to verify the adapter writes
a normalized reason and does not throw.

Comment on lines +79 to +94
// 3. Single DB TX — counter + organic_score + event_handled
if (!deltas.isEmpty()) {
    rankingScoreService.persistDeltas(deltas, newEventIds);

    // 4. Redis — best-effort projection
    // TODO: until the S8 reconcile job lands, a Redis failure risks missing that date's ranking.
    //       Fully recovered once the reconcile job consumes projection_dirty and rebuilds Redis.
    try {
        rankingScoreService.reflectToRedis(deltas);
    } catch (Exception e) {
        log.warn("[RankingCollector] Redis write failed → dirty mark. Ranking for affected dates is incomplete", e);
        Set<LocalDate> affectedDates = deltas.keySet().stream()
            .map(RankingDailyKey::statDate)
            .collect(Collectors.toSet());
        rankingScoreService.markProjectionDirty(affectedDates);
    }

⚠️ Potential issue | 🟠 Major

A failed dirty mark is not recovered by retries, so a missing projection can become permanent.

Because the DB write and event_handled commit first at line 81, if both the Redis write and markProjectionDirty() then fail at lines 86-93, the next redelivery treats these events as already processed and skips them all at lines 72-73. There is then no further chance to record the dirty dates, they fall outside the reconcile scope, and the Redis projection gap can persist long-term in production. It is safer to split markProjectionDirty() into a separately retriable transactional service or compensation task so the mark is always recorded, regardless of redelivery. An integration test is needed for the sequence "DB success → Redis failure → dirty mark failure → same batch redelivered", verifying that a dirty record ultimately remains.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-streamer/src/main/java/com/loopers/ranking/interfaces/consumer/RankingCollectorConsumer.java`
around lines 79 - 94, The current flow in RankingCollectorConsumer calls
rankingScoreService.persistDeltas(...) then attempts
rankingScoreService.reflectToRedis(...) and
rankingScoreService.markProjectionDirty(...), but if markProjectionDirty() fails
(after DB commit) the dirty flag can be permanently lost on retry; change the
implementation so markProjectionDirty is executed outside the DB transaction in
a retriable, idempotent path (e.g., push a durable task/message or call a
separate transactional service method that retries/uses an upsert within its own
transaction) whenever reflectToRedis() fails, ensure markProjectionDirty accepts
the Set<LocalDate> (from
deltas.keySet().stream().map(RankingDailyKey::statDate)) and is resilient to
duplicate calls, and add an integration test covering the sequence: DB commit
succeeds -> reflectToRedis throws -> markProjectionDirty initially fails -> on
replay the system still records the dirty date (i.e., ensure placement of the
durable task or retrying transactional service records the dirty mark).
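One way to make the dirty mark survive a transient DB failure is a durable, retriable task queue that is drained independently of the consumer batch. The sketch below models the queue and the dirty table as in-memory collections; `enqueueDirty`/`drainPending` and the flag are illustrative, not the PR's API:

```java
import java.time.LocalDate;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class DurableDirtyMarkSketch {
    // stands in for a durable task store that outlives the consumer batch
    static final Deque<LocalDate> pendingDirty = new ArrayDeque<>();
    // stands in for the ranking_projection_dirty table
    static final Set<LocalDate> dirtyTable = new HashSet<>();

    static boolean dbUp = false; // simulate the dirty-table write failing at first

    static void enqueueDirty(LocalDate date) {
        // called when reflectToRedis fails; replay-safe because the table write is an upsert
        pendingDirty.add(date);
    }

    static void drainPending() {
        while (!pendingDirty.isEmpty()) {
            LocalDate date = pendingDirty.peek();
            try {
                if (!dbUp) throw new IllegalStateException("db down");
                dirtyTable.add(date); // idempotent upsert in its own transaction
                pendingDirty.poll();  // only dequeue after a successful write
            } catch (Exception e) {
                return; // leave the task queued; retry on the next drain
            }
        }
    }

    public static void main(String[] args) {
        enqueueDirty(LocalDate.of(2026, 4, 13)); // Redis write failed for this date
        drainPending();                          // first attempt fails, signal is kept
        dbUp = true;
        drainPending();                          // retry succeeds
        System.out.println(dirtyTable.contains(LocalDate.of(2026, 4, 13))); // prints true
    }
}
```

The dequeue-after-write ordering is what makes the mark survive "DB success → Redis failure → dirty mark failure": the task stays queued until the dirty row actually lands.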

@Hwan0518 Hwan0518 merged commit c9652e4 into Loopers-dev-lab:Hwan0518 Apr 13, 2026
1 check passed