@buwilliams
Created March 28, 2025 14:27
Study advice for maths useful for modeling the world and AI development

Given your background as a programmer with a solid grasp of algebra, and your interest in AI research and Fermi calculations, your goal of filling in the gaps and applying math to quantitative modeling is both achievable and exciting. Fractional exponents like 60^{1/6} (the 6th root of 60) hint at the broader world of mathematical concepts that can deepen your understanding and unlock new tools for your work. Since you’re aiming for efficiency and relevance to AI and Fermi-style problem-solving, I’ll tailor the recommendations to focus on key areas that align with your goals, along with practical study strategies.

Key Mathematical Concepts to Study

  1. Exponents and Logarithms (Expanding on Fractional Exponents)

    • Why it’s relevant: Fractional exponents connect to roots, growth rates, and scaling laws, which are common in AI (e.g., learning rates, optimization) and Fermi estimates (e.g., population growth, energy scales). Logarithms are essential for understanding complexity (e.g., O(n log n)) and transforming multiplicative relationships into additive ones, a frequent trick in modeling.
    • What to learn: Properties of exponents (including fractional and negative), logarithm rules, exponential growth/decay, and their interplay (e.g., log(a^b) = b log a).
    • Application: Quickly estimate 60^{1/6} by approximating it as 64^{1/6} = 2 (since 2^6 = 64), then adjust—a Fermi-style skill (sketched in code after this list).
  2. Calculus (Differential and Integral Basics)

    • Why it’s relevant: AI research heavily relies on calculus for optimization (e.g., gradient descent) and understanding rates of change. Fermi calculations often approximate continuous processes (e.g., resource consumption over time).
    • What to learn: Derivatives (rates of change, chain rule), integrals (accumulation, area under curves), and the fundamental theorem linking them. Focus on intuition over rote computation—e.g., why the gradient points in the direction of steepest increase.
    • Application: Model how a neural network’s loss decreases or estimate total energy use from a rate (see the gradient descent sketch after this list).
  3. Linear Algebra

    • Why it’s relevant: The backbone of AI—neural networks are built on matrices, vectors, and transformations. It’s also key for data modeling and dimensionality reduction (e.g., PCA).
    • What to learn: Vectors, matrices, matrix multiplication, eigenvalues/eigenvectors, and basic operations like dot products. Grasp the geometric intuition (e.g., vectors as directions).
    • Application: Understand how weights in a neural net transform inputs or estimate multidimensional systems in Fermi problems (see the matrix sketch after this list).
  4. Probability and Statistics

    • Why it’s relevant: AI thrives on probabilistic models (e.g., Bayesian methods, uncertainty in predictions), and Fermi estimates often involve statistical reasoning (e.g., averages, distributions).
    • What to learn: Basic probability (conditional probability, Bayes’ theorem), random variables, distributions (normal, binomial), mean/variance, and sampling intuition.
    • Application: Quantify uncertainty in a model or estimate a population from a sample (see the Bayes sketch after this list).
  5. Discrete Math and Combinatorics

    • Why it’s relevant: Useful for algorithmic thinking in AI (e.g., graph neural networks) and Fermi problems involving counting or arrangements (e.g., network connections).
    • What to learn: Sets, permutations/combinations, basic graph theory (nodes, edges), and recurrence relations.
    • Application: Estimate the number of possible outcomes or model a simple network (see the counting sketch after this list).
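
To make the 64^{1/6} trick from the exponents item concrete, here is a minimal Python sketch; the first-order correction factor is a standard approximation, added here for illustration:

```python
# Fermi-style estimate of 60^(1/6): anchor on 64^(1/6) = 2 (since 2^6 = 64),
# then apply a first-order correction for the 60/64 gap:
# (1 - d)^(1/6) is roughly 1 - d/6 for small d.
d = 1 - 60 / 64                      # relative gap, 0.0625
estimate = 2 * (1 - d / 6)           # ~1.9792
exact = 60 ** (1 / 6)                # ~1.9786

print(f"estimate: {estimate:.4f}, exact: {exact:.4f}")
```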
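For the calculus item, a toy gradient descent loop; the function f(x) = (x - 3)^2 is an arbitrary illustration, not anything from a real model:

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is f'(x) = 2(x - 3).
# The update x <- x - lr * f'(x) steps downhill toward the minimum at x = 3.
def grad(x):
    return 2 * (x - 3)

x, lr = 0.0, 0.1
for _ in range(50):
    x -= lr * grad(x)

print(f"x after 50 steps: {x:.4f}")  # ~3.0000, the minimizer
```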
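For the linear algebra item, the matrix-vector product at the heart of a neural layer, in plain Python so the mechanics stay visible (the weights, input, and bias are made-up numbers):

```python
# One neural-network layer is a matrix-vector product plus a bias: y = W x + b.
def matvec(W, x):
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

W = [[0.5, -1.0],
     [2.0,  0.0]]   # 2x2 weight matrix
x = [1.0, 3.0]      # input vector
b = [0.1, -0.2]     # bias

y = [yi + bi for yi, bi in zip(matvec(W, x), b)]
print(y)  # [-2.4, 1.8]
```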
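For the probability item, Bayes’ theorem on a classic toy setup; the sensitivity, false-positive rate, and prevalence figures are illustrative assumptions:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Toy diagnostic test: 95% sensitive, 10% false-positive rate, 1% prevalence.
p_a = 0.01                 # prior P(condition)
p_b_given_a = 0.95         # P(positive | condition)
p_b_given_not_a = 0.10     # P(positive | no condition)

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # total probability
p_a_given_b = p_b_given_a * p_a / p_b

print(f"P(condition | positive) = {p_a_given_b:.3f}")  # ~0.088
```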
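And for the discrete math item, counting connections and arrangements with Python’s built-in math.comb and math.perm (available since Python 3.8):

```python
import math

# Possible edges in a fully connected network of n nodes: C(n, 2) = n*(n-1)/2.
n = 50
print(math.comb(n, 2))   # 1225 pairwise connections

# Arrangements vs. selections: ordering 5 of 10 items vs. just choosing them.
print(math.perm(10, 5))  # 30240
print(math.comb(10, 5))  # 252
```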

Study Recommendations for Efficiency

To make those "neural adaptations" quickly and avoid getting bogged down, here’s how to approach this:

  1. Leverage Brilliant (Targeted Courses)

    • Brilliant is a great choice for your goals—it’s interactive, intuitive, and skips rote memorization for problem-solving. Start with:
      • “Algebra II” or “Mathematical Fundamentals”: Solidify exponents, logs, and algebraic manipulation.
      • “Calculus Fundamentals”: Quick intro to derivatives and integrals with real-world examples.
      • “Linear Algebra”: Focus on AI-relevant topics like matrix operations.
      • “Probability Fundamentals”: Build statistical intuition fast.
      • “Quantitative Reasoning” or “Scientific Thinking”: Perfect for Fermi-style estimation practice.
    • How to use it: Spend 20-30 minutes daily on one course, prioritizing practice problems over passive reading. Skip sections you already know (e.g., basic variables).
  2. Supplement with Focused Resources

    • 3Blue1Brown (YouTube): Visual explanations of calculus, linear algebra, and neural networks. Watch “Essence of Calculus” and “Linear Algebra” series—short, digestible, and AI-relevant.
    • Khan Academy: Use for quick drills on specific topics (e.g., logarithms, matrix multiplication) when you hit a gap. Search by topic, not full courses.
    • “Street-Fighting Mathematics” (Book by Sanjoy Mahajan): Free online, it’s perfect for Fermi-style thinking—teaches approximation and intuition over precision.
  3. Active Learning Techniques

    • Feynman Technique: After a concept (e.g., derivatives), explain it to yourself as if teaching a beginner. If you stumble, revisit it.
    • Spaced Repetition: Use flashcards (e.g., Anki) for key formulas or rules (e.g., d/dx x^n = n x^{n-1}) to lock them in long-term.
    • Apply Immediately: After learning a topic, solve a toy problem—e.g., estimate the number of neurons in a brain (Fermi; see the first sketch after this list) or tweak a gradient descent equation (AI).
  4. Prioritize Practice Over Theory

    • Focus 80% of your time on problems, 20% on concepts. For example, after learning exponents, calculate 10^{1/3} or approximate 2^{10} mentally (checked in the second sketch after this list). For AI, implement a simple matrix operation in code.
  5. Timeboxing

    • Aim for 1-2 hours daily, split into 25-minute focused blocks (Pomodoro). Cover one topic per week (e.g., Week 1: Exponents/Logarithms, Week 2: Calculus Basics). In 2-3 months, you’ll have a robust toolkit.
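
As one example of “apply immediately,” here is the brain-neuron Fermi estimate mentioned above; both inputs are rough order-of-magnitude guesses, which is the point of the exercise:

```python
# Fermi estimate: neurons in a human brain = brain mass x neuron density.
brain_mass_g = 1400       # ~1.4 kg brain (rough guess)
neurons_per_g = 6e7       # assumed density: tens of millions per gram

print(f"{brain_mass_g * neurons_per_g:.0e}")  # ~8e10, near the cited ~8.6e10
```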
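And a quick way to check the mental-math drills against exact values:

```python
# Exact values for the drills: compare with your mental estimates first.
print(10 ** (1 / 3))  # ~2.154; mentally: 2^3 = 8 and 2.2^3 ~ 10.6, so a bit over 2.1
print(2 ** 10)        # 1024; the classic "2^10 is about 10^3" shortcut
```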

Suggested Roadmap

  • Month 1: Exponents/Logarithms + Probability Basics (foundation for modeling and uncertainty).
  • Month 2: Calculus Basics + Linear Algebra (core AI tools).
  • Month 3: Statistics + Discrete Math (refine quantitative skills, tie into Fermi).

Final Tips

Since you’re surprised by things like 60^{1/6}, lean into that curiosity—play with numbers (e.g., guess 10,000^{1/4}, then check that it’s 10). For AI, code small projects (e.g., a linear regression model; a starting point is sketched below) to see math in action. For Fermi, practice estimating everyday things (e.g., cars in your city) and refine with data. You’ll build intuition fast this way.
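
If you want a starting point for the linear regression project, here is a minimal closed-form least-squares fit for y = m*x + c; the data points are made up:

```python
# Least-squares fit of y = m*x + c from first principles (no libraries).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
c = mean_y - m * mean_x

print(f"y = {m:.2f}x + {c:.2f}")  # slope near 2, intercept near 0
```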

Brilliant’s a solid start—jump in, and tweak based on what clicks. Let me know if you want specific problem examples or a deeper dive into any area!
