Claude Code Plugins

Community-maintained marketplace


Linear algebra operations in NumPy, including matrix multiplication, SVD, system solving, and least squares fitting. Triggers: linalg, matrix multiplication, SVD, eigenvalues, matrix decomposition, lstsq, multi_dot.

Install Skill

  1. Download the skill.
  2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section.
  3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file.

Note: Verify the skill by reviewing its instructions before using it.

SKILL.md


---
name: numpy-linalg
description: Linear algebra operations in NumPy, including matrix multiplication, SVD, system solving, and least squares fitting. Triggers: linalg, matrix multiplication, SVD, eigenvalues, matrix decomposition, lstsq, multi_dot.
---

Overview

NumPy's linalg module provides high-performance implementations of standard linear algebra routines. It relies on optimized BLAS/LAPACK backends for operations such as Singular Value Decomposition (SVD), batched equation solving, and least-squares fitting.

When to Use

  • Solving systems of linear equations ($Ax = b$).
  • Dimensionality reduction or matrix reconstruction using SVD.
  • Optimizing chains of matrix multiplications to reduce FLOPs.
  • Finding best-fit solutions for overdetermined systems.

Decision Tree

  1. Multiplying two matrices?
    • Use the @ operator (modern standard).
  2. Multiplying 3+ matrices?
    • Use np.linalg.multi_dot to optimize multiplication order.
  3. Is the matrix square and full-rank?
    • Yes: Use np.linalg.solve.
    • No: Use np.linalg.lstsq for a best-fit solution.
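
A minimal sketch of the decision tree above; the array shapes and seeded random values are illustrative assumptions, not part of the skill:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Two matrices: use the @ operator.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = A @ B

# 2. Three or more matrices: multi_dot picks the cheapest multiplication order.
D = np.linalg.multi_dot([A, B, C])

# 3. Square and (almost surely) full-rank: solve returns the exact solution of A x = b.
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)

# Not square (overdetermined): fall back to lstsq for a best-fit solution.
A_tall = rng.standard_normal((10, 3))
b_tall = rng.standard_normal(10)
x_fit, residuals, rank, sv = np.linalg.lstsq(A_tall, b_tall, rcond=None)
```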

Workflows

  1. Solving Multiple Linear Systems in Batch

    • Organize coefficient matrices into a stack of shape (K, M, M).
    • Organize ordinate values into a stack of shape (K, M).
    • Call np.linalg.solve(a, b) to compute all solutions simultaneously (see the batch-solve sketch after this list).
  2. Rank-Reduced Reconstruction via SVD

    • Perform Singular Value Decomposition using np.linalg.svd(a, full_matrices=False).
    • Set the small singular values in s to zero to suppress noise.
    • Reconstruct the matrix using (u * s) @ vh (see the SVD sketch after this list).
  3. Least Squares Fitting for Overdetermined Systems

    • Construct the design matrix A (one column per unknown coefficient) and the vector b of observed values.
    • Use np.linalg.lstsq(A, b) to find the solution that minimizes the Euclidean 2-norm.
    • Retrieve the coefficients and residuals from the returned tuple (see the least-squares sketch after this list).
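
A minimal sketch of Workflow 1 (batch solving). The stack sizes are invented for illustration; giving b a trailing axis of length 1, i.e. explicit column vectors, is an assumption made here so the stacked-solve semantics stay unambiguous across NumPy versions:

```python
import numpy as np

rng = np.random.default_rng(42)

K, M = 5, 3                            # K independent systems, each M x M
a = rng.standard_normal((K, M, M))     # stack of coefficient matrices
b = rng.standard_normal((K, M, 1))     # stack of right-hand sides as column vectors

x = np.linalg.solve(a, b)              # shape (K, M, 1): one solution per system

# Sanity check: a @ x reproduces b for every system in the stack.
assert np.allclose(a @ x, b)
```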
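
A minimal sketch of Workflow 2 (rank-reduced reconstruction); the matrix size and the cutoff k = 2 are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 4))

# Thin SVD: u is (6, 4), s is (4,) in descending order, vh is (4, 4).
u, s, vh = np.linalg.svd(a, full_matrices=False)

# Zero out all but the k largest singular values.
k = 2
s_trunc = s.copy()
s_trunc[k:] = 0.0

# Reconstruct the rank-k approximation of a.
a_k = (u * s_trunc) @ vh
print(np.linalg.matrix_rank(a_k))      # 2, up to numerical tolerance
```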
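
A minimal sketch of Workflow 3, fitting a line y ≈ m·x + c by least squares; the data points are invented for illustration:

```python
import numpy as np

# Observed data (illustrative only).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1.0, 0.2, 0.9, 2.1])

# Design matrix with one column per unknown: [x, 1] for slope and intercept.
A = np.vstack([x, np.ones_like(x)]).T          # shape (4, 2)

# Solution minimizing the Euclidean 2-norm ||A @ coeffs - y||.
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, c = coeffs
print(f"slope={m:.3f}, intercept={c:.3f}")
```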

Non-Obvious Insights

  • Modern Operator: The @ operator is preferred over dot() for 2D matrix products for readability and intent.
  • Multi-Dot Optimization: multi_dot uses dynamic programming to find the optimal parenthesization of matrix products, which can significantly speed up chains of matrices with very different sizes (see the sketch after this list).
  • Batch Processing: Most linalg functions support "stacked" arrays, processing multiple independent problems in leading dimensions automatically.
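
A minimal sketch of the multi_dot point; the shapes below are deliberately lopsided so that the multiplication order matters:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 100))
B = rng.standard_normal((100, 1_000))
C = rng.standard_normal((1_000, 5))

# Left-to-right evaluation materializes a 10_000 x 1_000 intermediate.
slow = (A @ B) @ C

# multi_dot chooses the cheaper order, here equivalent to A @ (B @ C),
# whose intermediate is only 100 x 5.
fast = np.linalg.multi_dot([A, B, C])

assert np.allclose(slow, fast)
```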

Evidence

  • "The @ operator... is preferable to other methods when computing the matrix product between 2d arrays." Source
  • "a must be square and of full-rank... if either is not true, use lstsq for the least-squares best “solution”." Source

Scripts

  • scripts/numpy-linalg_tool.py: Demonstrates SVD reconstruction and batch solving.
  • scripts/numpy-linalg_tool.js: Simulated vector normalization logic.

Dependencies

  • numpy (Python)

References